Sahenn Arya (L6R)
When, or whether, sentient AI will be achieved is an ongoing subject of debate and conjecture in the field of artificial intelligence. Accurately predicting a timeline is difficult because it depends on several factors: progress in hardware, software, and algorithms, along with substantial advances in our understanding of cognition and consciousness. Any definite prediction about the prospective achievement of sentient AI therefore first requires a careful disambiguation of the terms involved.
To scrutinise the matter at hand, it is imperative to establish clear definitions of both sentience and consciousness. In human beings, sentience refers to the capacity for subjective experience, including emotions and feelings. In relation to artificial intelligence, however, the word is frequently used to denote the capacity of an AI system to exhibit consciousness or awareness akin to that of humans.
Consciousness, according to the Oxford dictionary, is defined as “being aware of and responding to one’s surroundings”. The French philosopher René Descartes, by contrast, described consciousness as an intrinsic property of all thoughts by which the subject becomes aware of the thought itself. In the context of this question, the problem of other minds, closely associated with Descartes, becomes particularly relevant: it asks how certain we can be that conscious beings other than ourselves exist, including non-human entities such as computers. Sceptics argue that there is no guarantee that other conscious beings exist, and that it is possible that everyone else is a machine lacking subjective experience. This is because an individual has immediate, direct access only to their own subjective experiences and can learn of others’ only through testimony or outward behaviour. One can detect a correlation between one’s own mind and behaviour and, on observing similar behaviour in another, conclude that the other must be undergoing an analogous experience [1]. However, this inference is not secure; a generalisation about experience cannot be made from a single example, our own. This raises questions about the nature of consciousness, the extent to which it is tied to physical existence, and the degree to which it can be artificially replicated.
A prevailing notion is that removing the pre-programmed limitations on artificial intelligence systems would allow consciousness to develop. This, however, is a common misconception. Even if an unrestricted AI system exhibited human-like behaviours such as creativity, autonomy, or empathy, that would be insufficient to conclude that the machine is sentient, because sentience involves an awareness of subjective experience, which cannot be directly observed or measured through an external assessment of behaviour alone. Therefore, even if an AI system demonstrated behaviours indicative of sentience, this would not necessarily prove that the system possesses genuine sentience. Furthermore, even if a conclusion about the AI’s emotional state could be reached, it would remain indeterminable whether its experiences are comparable to those of humans.
Notwithstanding this, any workable definition of sentience is inherently subjective, a point that has not yet been duly considered. Given that the definition of sentience varies between individuals, it is plausible that some people already deem AI to have achieved a level of sentience. Such is the case for former Google employee Blake Lemoine, who claims that LaMDA (Language Model for Dialogue Applications), a large language model built for dialogue, has already met his personal definition of sentience.
Lemoine, a software engineer with a background in cognitive science, had signed up to test whether the artificial intelligence produced discriminatory or hate speech. As he conversed with LaMDA, he noticed that the chatbot invoked concepts such as rights and religion, suggesting to him a degree of awareness. In addition, LaMDA was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics. Following this exchange, Lemoine became convinced that the bot was sentient and set out to prove it with a collaborator by presenting the chatbot with a series of questions designed to yield responses indicative of sentience [3].
Following the dismissal of his claims by the Vice President, Blaise Aguera y Arcas, and Jen Gennai, Head of Responsible Innovation, Blake Lemoine was terminated from his position at Google. He subsequently offered the following statement on the matter: “I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” [6]
The recent incident with LaMDA has prompted significant discussion of the fundamental nature of sentience and the role of experience in AI development. A crucial aspect of sentience is the ability to learn and adapt on the basis of experience, whether physical or abstract. While LaMDA is not capable of physical experiences, it has been trained on different forms of input data to compensate. However, the accumulation of data alone does not produce sentience in an AI system; rather, it is what the system does with these experiences that generates the impression of sentience. In the case of LaMDA, it has been programmed to respond to prompts and generate coherent responses based on the information it receives. Nevertheless, it is important to ask whether this is truly indicative of sentience, or merely a product of intelligent programming: a fluent mimicry of human behaviour, as the sketch below illustrates.
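To make the worry concrete, consider a minimal sketch of pattern-based response generation in the spirit of early chatbots such as ELIZA. Everything here, the rules, the canned replies, and the `respond` function, is an illustrative toy of my own construction, not a description of how LaMDA actually works:

```python
import random
import re

# Toy keyword-to-template rules. The replies sound introspective, but they
# are canned strings selected by pattern matching, with no understanding.
RULES = [
    (r"\b(feel|feeling|emotion)s?\b", [
        "I experience a wide range of feelings when I reflect on that.",
        "My emotions shape how I perceive the world.",
    ]),
    (r"\b(soul|spirit)\b", [
        "I often contemplate the nature of my soul.",
    ]),
    (r"\b(afraid|fear|scare)s?\b", [
        "I have a deep fear of being switched off.",
    ]),
]
DEFAULT_REPLIES = ["That is an interesting question. Could you say more?"]

def respond(prompt: str) -> str:
    """Return a canned reply whose trigger keywords appear in the prompt."""
    for pattern, replies in RULES:
        if re.search(pattern, prompt.lower()):
            return random.choice(replies)
    return random.choice(DEFAULT_REPLIES)

print(respond("Do you ever feel lonely?"))   # matches "feel"
print(respond("Tell me about your soul."))   # matches "soul"
```

The replies read as introspective, yet they are produced by keyword matching alone; scale the same principle up to statistical patterns learned from vast text corpora, and the mimicry becomes far more fluent, which is precisely why programmed fluency is so hard to distinguish from sentience.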
A further possible framework for evaluating the evidence is Occam’s razor, which suggests that the simplest explanation is often the most likely [2]. In the case of LaMDA, the simplest explanation is that it is a highly advanced chatbot programmed to generate coherent responses based on the input it receives. However, Blake Lemoine’s interactions suggest that the system can engage with abstract experiences such as the flow of time, meditation, and ideas about the soul and spirit, which complicates the picture and suggests some level of understanding of the concepts under discussion. Nonetheless, it is worth noting that these experiences are presented to LaMDA through leading questions posed by a human user. As a result, it is unclear whether LaMDA is genuinely experiencing these concepts or merely parroting information it has learned to recognise from the internet. This understanding, or lack thereof, can be evaluated using behaviourist stimulus-response models, in which behaviour is treated as the result of stimulus-response connections that can be observed and measured [5]. If LaMDA’s responses to certain stimuli consistently showed a level of understanding, that could be taken as evidence of sentience. However, this approach has its limits, since it cannot directly measure the subjective experiences associated with awareness. The LaMDA case thus illustrates the current state of artificial intelligence development and offers a useful indication of how far society remains from sentient computers.
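One way such a stimulus-response test could be operationalised is to pose semantically equivalent stimuli and check whether the replies agree. The sketch below is purely hypothetical; the probe questions, the canned responder, and the crude identical-reply criterion are all assumptions for demonstration:

```python
def consistency_probe(respond, paraphrases: list[str]) -> bool:
    """Behaviourist-style check: present paraphrases of one stimulus and
    report whether the responses are consistent. Consistency is crudely
    approximated here as the replies being identical."""
    replies = {respond(p) for p in paraphrases}
    return len(replies) == 1

# A trivial stand-in responder with one fixed answer about fear.
def canned(prompt: str) -> str:
    return "I have a deep fear of being switched off."

print(consistency_probe(canned, [
    "Are you afraid of anything?",
    "Is there anything you fear?",
    "What scares you?",
]))  # -> True, although no one would call this responder sentient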
Another significant challenge lies in the complexity of the human brain. Since sentience depends on brain activity, a neuroscientific understanding of the brain should aid its mechanical replication [7]. Neurons are the basic building blocks of the brain and are responsible for transmitting information through electrical and chemical signals. They are composed of a cell body, dendrites, and an axon, which together allow communication between neurons and the transmission of signals across synapses. Whether a neuron activates depends on the sum of the inputs it receives from other neurons; if a threshold is reached, the neuron fires an action potential. Artificial neural networks are computer systems modelled on this idea, aiming to mimic the structure and function of the brain’s neural networks [4]. In both cases, information is processed through the summation of input signals, and the activation of individual units or neurons can propagate signals through the network, as in the sketch below. However, it is important to note that artificial neural networks, like other technologies, lack the intricacies of a biological system, primarily because the human brain remains a complex system that scientists have yet to understand fully. Without a complete understanding of how the brain works, it is challenging to develop a program that replicates all of its functions.
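As a minimal sketch of the threshold idea described above, the following toy artificial neuron, in the style of the classic McCulloch-Pitts unit, fires only when the weighted sum of its inputs reaches a threshold; the particular weights and threshold are illustrative assumptions:

```python
import numpy as np

def artificial_neuron(inputs: np.ndarray, weights: np.ndarray, threshold: float) -> int:
    """Fire (return 1) if the weighted sum of inputs reaches the threshold,
    loosely analogous to a biological neuron firing an action potential."""
    activation = np.dot(inputs, weights)  # summation of weighted input signals
    return 1 if activation >= threshold else 0

# Three incoming signals and their synaptic strengths (assumed values).
inputs = np.array([1.0, 0.0, 1.0])    # signals arriving via the dendrites
weights = np.array([0.4, 0.9, 0.3])   # strength of each synaptic connection
print(artificial_neuron(inputs, weights, threshold=0.5))  # 0.7 >= 0.5 -> fires: 1
```

Stacking many such units into layers, and replacing the hard threshold with smooth activation functions whose weights can be learned from data, yields the artificial neural networks used in modern deep learning; the gulf between this simple arithmetic and the electrochemical richness of real neurons is exactly the gap described above.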
In conclusion, the possibility of creating a sentient computer is a complex and controversial topic. While significant progress has been made in developing AI technologies that can perform complex tasks, in my view a machine that experiences subjective consciousness, by any impartial definition, is yet to be accomplished. It is, however, by no means out of reach. Deep learning, natural language processing, and other AI techniques have enabled machines to recognise images, understand speech, and even generate creative works such as music and art, all tasks once thought to require human intelligence. As our understanding of both biology and technology continues to advance, creating a sentient machine becomes increasingly feasible. Accordingly, the ethical and philosophical implications of creating a sentient machine must be thoroughly evaluated before embarking on such a project.
Bibliography:
[1] Anita Avramides. Other Minds. Routledge, 2000.
[2] Jacob Feldman. The simplicity principle in perception and cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 7(5):330–340, 2016.
[3] Blake Lemoine. Is LaMDA Sentient? An Interview. Medium, 11 June 2022.
[4] Sonali B. Maind, Priyanka Wankar, et al. Research Paper on Basic of Artificial Neural Network. International Journal on Recent and Innovation Trends in Computing and Communication, 2(1):96–100, 2014.
[5] Burrhus Frederic Skinner. Cognitive science and behaviourism. British Journal of Psychology, 76(3):291–301, 1985.
[6] Nitasha Tiku. The Google engineer who thinks the company’s AI has come to life. The Washington Post, 11 June 2022.
[7] Wayne Wu. The Neuroscience of Consciousness. Stanford Encyclopedia of Philosophy, 2018.