A system that is not aware of what it is doing, and does not have some awareness of itself, cannot have human-level intelligence. People will expect an intelligent robot to have some level of consciousness. The perspective of the thesis is that it is both necessary and possible for a system to demonstrate at least some aspects of consciousness in order to achieve human-level AI.

However, the thesis does not claim AI systems will achieve the subjective experience humans have of consciousness. This is further discussed below.

The thesis adapts the “axioms of being conscious” proposed by Aleksander and Morton (2007). To claim a system achieves ‘artificial consciousness’, it should demonstrate:


  • Observation of an external environment.
  • Observation of itself in relation to the external environment.
  • Observation of internal thoughts.
  • Observation of time: of the present, the past, and potential futures.
  • Observation of hypothetical or imaginative thoughts.
  • Reflective observation: Observation of having observations.


To observe these things, a TalaMind system should support representations of them and support processing of such representations. The prototype system illustrates how a TalaMind architecture could support artificial consciousness.
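As a rough illustrative sketch (not part of the prototype, and not a claim about how TalaMind represents concepts), the six kinds of observation listed above could be modeled as tagged structures an agent records and can itself observe. All names here (ObservationKind, Observation, Agent) are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional


class ObservationKind(Enum):
    """Six kinds of observation, adapted from Aleksander and Morton (2007)."""
    ENVIRONMENT = auto()   # the external environment
    SELF = auto()          # the self in relation to the environment
    THOUGHT = auto()       # internal thoughts
    TIME = auto()          # the present, the past, and potential futures
    HYPOTHETICAL = auto()  # hypothetical or imaginative thoughts
    REFLECTION = auto()    # observations of having observations


@dataclass
class Observation:
    """A structure the agent can store, inspect, and reason about."""
    kind: ObservationKind
    content: str                           # e.g. a conceptual expression
    about: Optional["Observation"] = None  # set only for REFLECTION observations


@dataclass
class Agent:
    """Minimal agent that records observations and can observe its own observing."""
    observations: List[Observation] = field(default_factory=list)

    def observe(self, kind: ObservationKind, content: str) -> Observation:
        obs = Observation(kind, content)
        self.observations.append(obs)
        return obs

    def reflect(self, obs: Observation) -> Observation:
        # Reflective observation: an observation whose subject is another observation.
        reflection = Observation(ObservationKind.REFLECTION,
                                 f"I observed: {obs.content}", about=obs)
        self.observations.append(reflection)
        return reflection
```

In this sketch, an agent that records an ENVIRONMENT observation and then calls reflect on it satisfies, in a minimal way, the last axiom: it holds a representation of having had an observation, which it can process like any other observation.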

The ‘Hard Problem’ of consciousness (Chalmers, 1995) is the problem of explaining the human first-person, subjective experience of consciousness. For the TalaMind approach, there is the theoretical issue of whether a Tala agent having artificial consciousness can have this first-person, subjective experience. This is a difficult, perhaps metaphysically unsolvable problem, because science relies on third-person explanations based on observations. Since there is no philosophical or scientific consensus about the Hard Problem, the thesis may not give an answer that will satisfy everyone. Yet the TalaMind approach – implementing artificial consciousness à la Aleksander and Morton (2007) – is open to different answers to the problem, as discussed in thesis §4.2.7.

The human first-person subjective experience of consciousness is richer and more complex than artificial consciousness. For example, in addition to thoughts, human consciousness includes emotions. While it will be important for an intelligent robot to have some understanding of human emotions, one of the values of human-level artificial intelligence is likely to be its objectivity: not being affected by certain emotions. From the perspective of the thesis, it is not required that a human-level AI system be able to experience human emotions. Questions related to sociality, emotions, and values are more difficult, and at a higher level, than the focus of the TalaMind thesis; they are topics for future research.
