Consciousness

A system that is not aware of what it is doing, and does not have some awareness of itself, cannot have human-level intelligence. People will expect an intelligent robot to have such awareness. The perspective of the TalaMind approach is that demonstrating at least some aspects of consciousness is both necessary and possible for a system to achieve human-level AI. However, it is not claimed that AI systems will achieve the subjective experience humans have of consciousness. This is discussed further below.

The TalaMind approach adapts the “axioms of being conscious” proposed by Aleksander and Morton (2007). To claim that a system achieves ‘artificial consciousness’, it should demonstrate:


  • Observation of an external environment.
  • Observation of itself in relation to the external environment.
  • Observation of internal thoughts.
  • Observation of time: of the present, the past, and potential futures.
  • Observation of hypothetical or imaginative thoughts.
  • Reflective observation: Observation of having observations.


To observe these things, a TalaMind system should support representations of them and support processing of such representations. The prototype system illustrates how a TalaMind architecture could support artificial consciousness.
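How such observations might be represented is a design question left open here; the following is only a minimal sketch, not taken from the thesis or its prototype, using hypothetical names (ObservationKind, Observation, Agent). It illustrates representing the six kinds of observation listed above as data an agent can itself inspect, with reflective observation modeled as an observation whose content refers to another observation.

```python
# Hypothetical sketch: representing the six kinds of observation as data.
# None of these names come from the TalaMind prototype.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Any, List, Optional


class ObservationKind(Enum):
    ENVIRONMENT = auto()           # the external environment
    SELF_IN_ENVIRONMENT = auto()   # the agent in relation to its environment
    INTERNAL_THOUGHT = auto()      # the agent's own thoughts
    TEMPORAL = auto()              # present, past, or potential futures
    HYPOTHETICAL = auto()          # imaginative or counterfactual thoughts
    REFLECTIVE = auto()            # an observation of having an observation


@dataclass
class Observation:
    kind: ObservationKind
    content: Any                            # e.g. a conceptual expression
    about: Optional["Observation"] = None   # set for REFLECTIVE observations


@dataclass
class Agent:
    observations: List[Observation] = field(default_factory=list)

    def observe(self, kind: ObservationKind, content: Any,
                about: Optional[Observation] = None) -> Observation:
        obs = Observation(kind, content, about)
        self.observations.append(obs)
        return obs

    def reflect(self, obs: Observation) -> Observation:
        # Reflective observation: observing that an observation was made.
        return self.observe(ObservationKind.REFLECTIVE,
                            f"I observed: {obs.content}", about=obs)


if __name__ == "__main__":
    agent = Agent()
    seen = agent.observe(ObservationKind.ENVIRONMENT, "the door is open")
    agent.observe(ObservationKind.SELF_IN_ENVIRONMENT, "I am near the door")
    agent.reflect(seen)
    for o in agent.observations:
        print(o.kind.name, "-", o.content)
```

In a full TalaMind architecture, the content of each observation would presumably be an expression in the Tala conceptual language rather than a plain string, so that the same processes that reason about other conceptual expressions could also process the agent's observations of itself.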

The ‘Hard Problem’ of consciousness (Chalmers 1995) is the problem of explaining the human first-person, subjective experience of consciousness. For the TalaMind approach, there is the theoretical issue of whether a Tala agent having artificial consciousness can have this first-person, subjective experience. This is a difficult, perhaps metaphysically unsolvable, problem, because science relies on second- and third-person explanations based on observations. Since there is no philosophical or scientific consensus about the Hard Problem, the thesis may not give an answer that satisfies everyone. Yet the TalaMind approach, implementing artificial consciousness, is open to different answers to the problem.

The human first-person subjective experience of consciousness is richer and more complex than artificial consciousness. For example, in addition to thoughts, human consciousness includes emotions. While it will be important for an intelligent robot to have some understanding of human emotions, one of the values of human-level artificial intelligence is likely to be its objectivity, in not being affected by some emotions. Questions related to sociality, emotions, and values are important topics for future research, relevant to achieving beneficial human-level AI.
