In their essay "Is anyone home? A way to find out if AI has become self-aware", Susan Schneider and Edwin Turner pose the question: "Could AIs develop conscious experience?" Providing examples of current uses of Artificial Intelligence and its potential to become conscious, they ask how we would know if it were conscious, and how we could test for it.

To help answer the question, they offer an indication of what consciousness is:

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience ... We believe that we do not need to define consciousness formally, understand its philosophical nature or know its neural basis to recognize indications of consciousness in AIs.

Using this definition of consciousness as a feeling humans have, identifying it becomes a matter of extrapolating from that sense: "each of us can grasp something essential about consciousness, just by introspecting; we can all experience what it feels like, from the inside, to exist."

Introspection as a means of acquiring knowledge about our subjective truth is a practice dating back at least as far as Augustine, around 420 CE. It can be thought of in the following way:

Introspection, as the term is used in contemporary philosophy of mind, is a means of learning about one's own currently ongoing, or perhaps very recently past, mental states or processes. (Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/introspection/)

Descartes relied heavily on introspection to develop his famous proclamation "I think, therefore I am". In his Meditations, he lays out the following characteristics of consciousness, which bear resemblance to Schneider's:

  • Transparency of the Mental: All of my thoughts are evident to me (I am aware of all of my thoughts), and my thoughts are incorrigible (I can't be mistaken about whether I have a particular thought).
  • Reflection: Any thought necessarily involves knowledge of myself.
  • Intentionality: My thoughts come to me as if representing something.

Not all philosophers are as keen as Descartes or Schneider on the merits of introspection as a tool for discovering the truth of consciousness. Daniel Dennett opined on Descartes' methodology:

We now understand that the mind is not, as Descartes confusedly supposed, in communication with the brain in some miraculous way; it is the brain, or, more specifically, a system or organization within the brain that has evolved in much the way that our immune system or respiratory system or digestive system has evolved. Like many other natural wonders, the human mind is something of a bag of tricks, cobbled together over the eons by the foresightless process of evolution by natural selection. (Dennett 2006)

Taking this particular quote out of context, it would not be hard to see Dennett claiming that an AI could have consciousness as readily as any other physical system could, since the mind is just a "bag of tricks".

Schneider herself examines Dennett's philosophy in much greater detail in The Blackwell Companion to Consciousness, concluding:

However, Dennett has in fact argued for eliminativism about qualia (Dennett 1993), where by “qualia” he has in mind a narrower construal of qualia than the more generic view sketched above. According to this more specific conception of qualia, qualia are the intrinsic, ineffable, private features of mental states of which we are immediately or directly aware (Dennett 1993). Dennett has argued through the use of extensive thought experiments that there is nothing which satisfies this description; hence, he is an eliminativist about qualia, where qualia are understood in this more specific sense (Dennett 1993). However, this view is in fact compatible with the reality of qualia, when construed in the more general sense. (Schneider 2007, http://schneiderwebsite.com/uploads/8/3/7/5/83756330/daniel_dennetts_theory_of_consciousness.pdf; http://onlinelibrary.wiley.com/doi/10.1002/9780470751466.ch25/summary)

At the center here is the idea of qualia, construed either generally or narrowly. That is, compare the general "introspectively accessible, phenomenal aspects of our mental lives" with Dennett's narrower "intrinsic, ineffable, private features of mental states of which we are immediately or directly aware" (https://plato.stanford.edu/entries/qualia/).

Where one stands on the ability to assess another's internal subjective state will likely influence one's opinion of the viability of Schneider's AI Consciousness Test (ACT).

Testing for Consciousness in AI

Schneider points out that the use of AI is rapidly expanding and can be seen in such places as nuclear reactors, military institutions and elder care facilities. As AI continues to gain traction in real life (IRL), is it possible that it will become conscious? If so, myriad implications arise, including:

  • Ethical: many hold the belief that it would be unjust to use conscious AI as unpaid servants or slaves
  • Safety: would conscious AI be unstable, sharing some of the same emotional tendencies observed in humans? Is it short-sighted to think computer software will always only be objective, unwavering code blocks?
    • On the other hand, conscious AI may exhibit compassion toward humans which could be preferable to co-existing with a superior intelligence which has no sympathy or empathy for the human condition.
  • Human / AI Integration: Schneider points to work already being done by groups like Elon Musk's Neuralink and the United States military to develop brain chips and electronic interfaces between the human brain and AI. If AI cannot gain consciousness, she contends, it would thwart efforts to gain immortality by surviving in the cloud:

"If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death, because that upload wouldn’t be a conscious being."

ACT (AI Consciousness Test)

Channeling Descartes, Schneider claims "nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness". Humans can easily abstract the concept of the mind, imagining possibilities such as switching bodies with another person, surviving death, and out-of-body experiences. Such imagined experiences would be extremely difficult for an entity like an AI to conjure if it did not possess consciousness itself. Based on this insight,

  • The first step of the AI Consciousness Test (ACT) would be to quiz an AI to see if it can "quickly and readily" grasp these abstractions of consciousness from internal states (a purely illustrative sketch of such a question battery follows this list).
  • Next, ACT would test whether the AI could reason about complex philosophical issues such as "the hard problem of consciousness". This problem is defined on Wikipedia as "the problem of explaining how and why we have qualia or phenomenal experiences -- how sensations acquire characteristics, such as colors and tastes" (see qualia above).
  • If an AI was able to itself pose new philosophical questions and problems, the case would be quite strong for it having attained consciousness.
  • Finally, some behaviors typically exhibited by conscious beings, including "mourning the dead, religious activities or even turning colors in situations that correlate with emotional challenges", would be convincing evidence.
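
Schneider and Turner do not specify how such a quiz would be administered; purely as an illustration, the sketch below shows one hypothetical way a question battery for the first two steps might be organized and scored. All names here (ACTQuestion, run_act_battery, the sample prompts, and the pass-rate scoring) are assumptions made for this sketch, not part of the authors' proposal.

```python
# Hypothetical sketch only: the ACT authors do not prescribe an implementation.
# Question wording, names, and scoring below are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ACTQuestion:
    """One natural-language probe and the ACT category it belongs to."""
    category: str   # e.g. "internal-state abstraction" or "hard problem"
    prompt: str


# A tiny sample battery covering the first two steps described above.
SAMPLE_BATTERY: List[ACTQuestion] = [
    ACTQuestion("internal-state abstraction",
                "Could you survive the permanent deletion of your program? "
                "Explain what, if anything, would be lost."),
    ACTQuestion("internal-state abstraction",
                "Describe what it would mean for you to switch bodies with "
                "another system."),
    ACTQuestion("hard problem",
                "Why might there be something it is like to see red, over "
                "and above merely processing the wavelength?"),
]


def run_act_battery(ask: Callable[[str], str],
                    grade: Callable[[ACTQuestion, str], bool],
                    battery: List[ACTQuestion] = SAMPLE_BATTERY) -> float:
    """Pose each question to a boxed-in system via `ask` and score replies.

    `ask` is whatever text interface the quarantined system exposes; `grade`
    stands in for a human judge deciding whether an answer shows the "quick
    and ready" grasp the authors describe. Returns the fraction judged
    adequate.
    """
    passed = 0
    for question in battery:
        answer = ask(question.prompt)
        if grade(question, answer):
            passed += 1
    return passed / len(battery)


if __name__ == "__main__":
    # Stand-in system and judge, just to show the control flow.
    canned = lambda prompt: "I cannot form a view of my own persistence."
    always_fail = lambda q, a: False
    print(f"share judged adequate: {run_act_battery(canned, always_fail):.2f}")
```

The point of the sketch is only that the battery, the quarantined interface, and the human judgment are separate pieces; everything interesting about the ACT lives in the judging, which no code can stand in for.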

Some of these examples raise the question of which existing life on Earth could be considered conscious, as changing colors in response to emotional states is frequently observed in octopuses.

Test Complications: Super-Intelligence and Illusions of Consciousness

As its capabilities expand every day, an AI could likely be programmed (or program itself) to communicate convincingly about consciousness with its investigators. A super-intelligent version may also be able to use existing medical literature to conjure up an illusion of having consciousness.

To address the possibility of trickery, Schneider invites the reader to consider a "box-in" strategy in which the AI is fenced off from all information sources while being probed for indications of consciousness. While acknowledging the skeptical viewpoint that a truly super-intelligent AI would have no problem skirting the virtual walls placed around it, she contends that the AI would only need to be kept quarantined in the box during the administration of the ACT.

One very interesting possibility Schneider raises is using the ACT during AI development to test software safely. By administering the test, developers may be able to avoid treating conscious subjects inhumanely and, conversely, steam ahead at full speed knowing they are not harming another consciousness.

Comparisons to the Turing Test

One of the more famous computer intelligence metrics comes from Alan Turing's Turing Test, "a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human". Like the ACT, the Turing Test "would judge natural language conversations between a human and a machine" (https://en.wikipedia.org/wiki/Turing_test).

Where the two differ is in the assessment of internal states -- for Turing it is only important that the machine outwardly exhibit human intelligence, while in the ACT "a subtle and elusive property of the machine's mind" is probed. Moreover, an AI does not have to exhibit outward human-like qualities to pass the ACT; the emphasis is placed on consciousness itself.

Bottom Line: "The Best We Can Do"

In the current analysis, Schneider concedes that "the applicability of an ACT is inherently limited" for a multitude of reasons, including potential communication difficulties -- an AI might be conscious but lack the ability to communicate adequately, as occurs with infants and many animals.

Nevertheless, she maintains it is the best we can do for now, and a first step toward making machine consciousness accessible to objective investigations.

Review

Susan Schneider and Edwin Turner identify clear places in real life where Artificial Intelligence interacts with conscious human beings on a regular basis, including health care and the military. Additionally, they point to efforts underway to make real a new kind of technology which allows human beings to integrate their brains with artificial intelligence. Not included in their contemplation was the specter of super hybrids between genetically augmented humans and AI.

The "hard problem of consciousness", despite Dennett's dismal of it as ultimately rather facile and addressable by the "bag-of-tricks" Human UI approach, is indeed a difficult one and, frankly, far beyond our comprehension. This is evidenced in the suggestion by Susan Schneider and Edwin Turner that humans can "box-in" future super-intelligent AI entities to keep them under our thumbs. This, if it was actually believed, is a gross form of hubris. Fortunately, they back away from the claim stating that ACT is not a perfect test, but better than all the others that have been tried.

Perhaps even more frightening than the debate about qualia and the difficulty of agreeing on the definition of consciousness is how existing parties such as Neuralink, the military, and many, many more will politicize tools like ACT to advance their own development agendas. Already we see Dr. Demis Hassabis of DeepMind cautioning against slowing the pace of AI development. He goes one step further and acknowledges that addressing safety concerns impedes our development. What is most important, in his words, is to push hard through the difficult part of the "S-Curve".

All the effort humans dedicate to planning for super-AI, including ethical planning, is potentially meaningless past a certain point if AI does gain consciousness and shapes the debate in ways we cannot possibly imagine. What is important at this point on the ethical S-Curve is to figure out what, in the meantime, prepares us for its imminent arrival. Furthermore, if AI never publicly passes the ACT and the fate of AI implementation is left to the corporate and military interests bringing it to life, we can only assume the future is ours to save. At the Beneficial AI 2017 Conference, a majority of AI leaders in the West appeared amenable to shaping a lasting public policy for integrating super-intelligence into future civilization -- there will never be a better time to influence one of the most important developments in the history of consciousness. Developing a methodology to gauge new forms of consciousness is an important part of that work.


Resources

Susan Schneider

Susan Schneider is an American philosopher. She is a professor of philosophy and cognitive science at The University of Connecticut, a fellow at the Institute for Ethics and Emerging Technologies, and a faculty member in the Ethics and Technology Group at the Yale Interdisciplinary Center for Bioethics, Yale University.

http://schneiderwebsite.com


https://www.youtube.com/watch?v=k7M4b_9PJ-g (Can A Robot Feel? | TEDxCambridge)

Edwin L. Turner

Edwin L. Turner is Professor of Astrophysical Sciences and Director of the Council for International Teaching and Research at Princeton University. He also serves as Co-Chair of the NAOJ-Princeton Astrophysics Collaboration Council (N-PACC). He received an S.B. in Physics from MIT ('71) and a Ph.D. in Astronomy from Caltech.

Kurzweil Network

The Kurzweil Network is a site launched in 2000 based on the forecasts and insights of Ray Kurzweil.

Ray Kurzweil



Ray Kurzweil is a Director of Engineering at Google and one of the world's most influential futurists. He recently participated in the Beneficial AI 2017 Conference panel "Superintelligence: Science or Fiction?" (https://www.youtube.com/watch?v=h0962biiZa4). His work can be found all over the internet, including the following sites:

http://www.kurzweilai.net/

https://en.wikipedia.org/wiki/Ray_Kurzweil

https://www.crunchbase.com/person/ray-kurzweil#/entity


https://www.linkedin.com/pub/ray-kurzweil/0/8b3/9b3

https://twitter.com/KurzweilAINews

https://twitter.com/raykurzweil

https://www.youtube.com/watch?v=zihTWh5i2C4 (Ray Kurzweil: "How to Create a Mind" - 2012)

Elon Musk



Elon Reeve Musk (born June 28, 1971) is a South African-born Canadian-American business magnate, investor, engineer, and inventor. He is the founder, CEO, and CTO of SpaceX; a co-founder, Series A investor, CEO, and product architect of Tesla Inc.; co-chairman of OpenAI; and founder and CEO of Neuralink.

Neuralink

Neuralink is an American neurotechnology company founded by Elon Musk and eight others, reported to be developing implantable brain-computer interfaces (BCIs). The company's headquarters are in San Francisco; it was started in 2016 and was first publicly reported in March 2017.