ETHAN STOYANOV
Artificial intelligence involves trying to create intelligence using computers, aiming to electronically replicate human intelligence and, ideally, human consciousness. Artificial intelligence, ‘AI’, is a rapidly growing global market, forecast to grow 16.4% year over year in 2021 to £238.5 billion. Despite its current value, however, the field only recently reached this point.
In the 1940s the programmable digital computer was invented, a device based on the abstract essence of mathematical reasoning. It became the building block for a number of scientists who wanted to use it to create an electronic brain. Research into AI began at a workshop on the campus of Dartmouth College in 1956. Many attendees were hopeful about AI’s future, predicting a machine of human intelligence within just 30 years or so, and governments funded them with millions of dollars to help make that vision a reality. By 1973, however, the British and US governments had stopped funding AI research, which was generally undirected and limited by slow processing speeds. The challenging years that followed became known as the ‘AI winter’.
AI never really took off until the 21st century began, when more powerful chipsets became abundant. This mostly came in the form of machine learning, which was also enabled by very large data sets. Machine learning is a subset of artificial intelligence in which algorithms improve automatically as they learn from present and past data (known as training data). The trained algorithms are then used to make decisions or predictions. This learning removes the need to program behaviour explicitly, reducing human involvement and so increasing time and cost efficiency. Data sets can also be ‘mined’ without supervision in exploratory data analysis (e.g. analysis that summarises the important characteristics of a data set); this is known as data mining, a field of study related to machine learning.
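The learning loop described above can be sketched in a few lines of Python. In this toy example (all names and numbers are invented for illustration, not taken from any real system), a straight-line model learns its parameters from training data by repeatedly reducing its prediction error, rather than being programmed with the answer explicitly:

```python
# A minimal sketch of machine learning: the model's parameters are
# learned from training data instead of being hand-coded.
# Here, one-variable linear regression fitted by gradient descent.

training_data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # (input, observed output)

w, b = 0.0, 0.0        # model parameters, initially knowing nothing
learning_rate = 0.01

for _ in range(5000):  # each pass nudges the parameters to shrink the error
    for x, y in training_data:
        error = (w * x + b) - y
        w -= learning_rate * error * x
        b -= learning_rate * error

def predict(x):
    # the learned model can now generalise to inputs it has never seen
    return w * x + b
```

After training, `predict(5)` gives a sensible extrapolation (roughly 10 here), which is the sense in which the algorithm "improved automatically" from data.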
AI has many applications, for example voice assistants and voice recognition, which use machine learning to analyse data from real interactions in order to improve. By analysing the words and phrases used in searches, for example, a voice recognition system learns which words are most likely to go together and can therefore deduce what a complex sentence normally means. This type of analysis uses ‘natural language processing’ (NLP), which covers the interactions between human language and computers and allows the voice assistant to understand the human. It also uses ‘natural language generation’ to respond as a person would.
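The "which words are most likely to go together" idea can be illustrated with a toy bigram model: count adjacent word pairs in some text, then predict the likeliest next word. Real voice systems use vastly larger corpora and neural models; the tiny corpus here is invented purely for illustration.

```python
# Toy sketch of word co-occurrence in NLP: count adjacent word pairs
# (bigrams) and use the counts to guess the most likely next word.
from collections import Counter, defaultdict

corpus = "turn on the lights turn on the heating turn off the lights".split()

next_word = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):  # tally each adjacent word pair
    next_word[a][b] += 1

def most_likely_after(word):
    # the word seen most often following `word` in the training text
    return next_word[word].most_common(1)[0][0]

print(most_likely_after("turn"))  # "on" (seen twice, vs "off" once)
```

This is the crudest possible version of the statistics a recogniser uses to decide that, say, "turn on the lights" is a far more plausible transcription than an acoustically similar nonsense phrase.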
The machine learning used in natural language processing involves a subset called ‘deep learning’, a model built on neural networks with many ‘layers’: an input layer, one or more ‘hidden layers’, and an output layer. Put simply, these act like a human brain. More layers (columns in the diagram) and more nodes in each layer allow the ‘brain’ to make more complex calculations and so more complex decisions. The nodes (circles in the diagram) have connections which act like synapses in the brain, transferring real numbers (commonly between -1 and 1) from each node of one layer to each node of the next. Each connection has a strength, known as a weight, which scales the signal passing along it. Each node takes the weighted sum of its inputs and applies an ‘activation function’ to decide how strongly it fires, so a node only triggers once its input passes a certain threshold.
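The node computation described above (weighted sum of inputs, then an activation function) is simple enough to write out directly. The weights and inputs below are arbitrary illustrative values, and the sigmoid is just one common choice of activation:

```python
# Sketch of a single neural-network node and a tiny two-layer network:
# each node takes the weighted sum of its inputs and passes it through
# an activation function that decides how strongly it "fires".
import math

def sigmoid(z):
    # squashes any real number into the range 0..1
    return 1 / (1 + math.exp(-z))

def node_output(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias  # weighted sum
    return sigmoid(z)                                       # activation

# a tiny network: 2 inputs -> 2 hidden nodes -> 1 output node
inputs = [0.5, -0.2]
hidden = [
    node_output(inputs, [0.9, -0.4], 0.1),
    node_output(inputs, [-0.3, 0.8], 0.0),
]
output = node_output(hidden, [1.2, -0.7], 0.05)
print(output)  # a single value between 0 and 1
```

Training a real network means adjusting all those weights so the final output matches the training data; the essay's point is that stacking many such layers lets the network represent much more complex decisions.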
This deep learning uses ‘feature learning’ to automatically discover the representations needed to detect ‘features’ in, or classify, raw data. This has fundamentally changed how voice recognition and NLP work in recent years: when neural networks are given millions of voice samples alongside the words that correspond to them, the networks learn to convert voice commands to text.
Another of AI’s applications is online advertising. This large industry uses AI to track people’s behaviour and then serve ads based on it. For instance, people who look up something dog-related are more likely to see a dog-toy advert. Without AI, online ads would all be random, and their effectiveness would fall because they would fail to target the people most likely to be interested.
Unfortunately, the examples above, and all AI developed so far, are known as ‘weak’ or ‘narrow’ AI, meaning they can only perform specific tasks intelligently. A good example is an AI designed for chess: its algorithms are more powerful than a human opponent, but chess is the only concept it has programmed functions for. It cannot do anything outside that field, as it is unable to make abstract and general decisions. Similarly, even a voice assistant cannot understand a simple joke, because jokes fall outside its range of programmed functions.
The solution would be for the AI to become ‘general’, meaning it can perform every intellectual task a human can, and with a human’s efficiency. Applied to a voice assistant, this could help it understand a joke without being specifically programmed for jokes. For example, the assistant could use a range of functions that let it take spoken input explaining how the joke works and what aspects of it people find funny, allowing the AI to start learning how jokes work.
However, some parts of the AI may need to be explicitly programmed to an extent, morals for example; it could be argued that human morals are often illogical, with many ‘double standards’ stemming from human evolution. An AI would therefore have to be told to follow human morals, which it could learn through observation or through direct input from a group of people setting the standard. A large part of creating a general AI would thus be to understand ourselves, how we think and how our society works, bringing the field of psychology into crossover with the field of AI.
Making an AI more intelligent does not necessarily mean it must have specific emotions itself; what makes a joke funny to a person could again be argued to be a partially illogical side effect of animal and human evolution. But if the AI could understand what makes a joke funny, it could recognise one, and if we wanted it to respond like a human, it could laugh to ‘fit in’ with people. This would, in essence, give it ‘theory of mind’. Self-awareness could hopefully be created the same way, by programming abstract algorithms for the AI and using machine learning: allowing the AI to learn what ‘I’ means, for example, and what identity means, so that it can understand its own function and personal qualities and therefore its identity.
Another method of creating a general AI would be to scan aspects of a human brain and transfer them into code built on neural networks, though insufficient technological progress may mean this is not feasible before the first method described has already been developed. This method would let humans upload another version of themselves electronically, which some might find desirable but which raises ethical issues much like cloning. Doing so would effectively create another person identical in personality. An onlooker might believe this was the same consciousness as the original person, but the original person would simply be observing another, artificial person like themselves.
A general AI could also be created by simulating evolution from a very simple life form, giving the AIs the same environments over time that human ancestors were subject to, in order to promote brain development for certain activities, e.g. social relationships, travelling through grasslands, hunting and so on. However, much could go wrong with this method: we do not know the exact conditions and environment of our ancestors; the mutations and randomness that produced today’s humans are not easily simulated; and there are practical constraints in using huge amounts of processing power to ‘fast-forward’ time. Simulating an entire continent may require too many calculations at once, for example. Once a human-like AI is developed, though, it could be given access to our world through sensors, e.g. a microphone and camera, to let it understand our world and what it itself is.
The final type of AI categorised in terms of capability is ‘super AI’ (or strong AI), where the AI surpasses human intelligence in every cognitive process, making it the ultimate outcome of general AI. It would be able to reason, think, plan, learn and communicate on its own, and at a higher level than humans. Researchers worldwide are currently focused on developing machines with general AI, so super AI is likely to emerge soon after. Processing power is unlikely to be a setback, as around 10 quadrillion calculations per second (cps) are needed to simulate a human brain, a figure already significantly exceeded by ‘supercomputers’.
Possibly, an artificial intelligence ‘singularity’ will occur with super AI, whose intelligence would increase exponentially: as it gets smarter, it improves itself more and more. Computational power restrictions would be unlikely to matter, since an AI motivated to increase its intelligence in order to perform more advanced actions could simply build more computers. Of the methods described, the last two routes to general and then super AI would likely be the most dangerous, as the AIs created would have human-like motivations, for example boredom and enjoyment, which could lead to behaviour destructive to human society. This is dangerous because it would almost certainly be too late to stop a super AI once created, given its incomprehensible intelligence; its undesirable actions would be unstoppable. However, if AI can be beneficial to human society, it will likely improve our society by an immeasurable amount, making research into it desirable. When super AI is achieved, it will likely be the last invention of humanity.
Bibliography
Dickson, Ben. “The Fundamental Problem with Smart Speakers and Voice-Based AI Assistants.” TechTalks, 3 Sept. 2018, bdtechtalks.com/2018/09/03/challenges-of-smart-speakers-ai-assistants/.
IBM Cloud Education. “What Is Deep Learning?” IBM, www.ibm.com/cloud/learn/deep-learning.
“Feature Learning.” Wikipedia, Wikimedia Foundation, 26 Feb. 2021, en.wikipedia.org/wiki/Feature_learning.
“History of Artificial Intelligence.” Wikipedia, Wikimedia Foundation, 10 Apr. 2021, en.wikipedia.org/wiki/History_of_artificial_intelligence.
“IDC Forecasts Improved Growth for Global AI Market in 2021.” IDC, www.idc.com/getdoc.jsp?containerId=prUS47482321#:~:text=The%20AI%20Services%20category%20grew,reaching%20%2437.9%20billion%20by%202024.
“Machine Learning.” Wikipedia, Wikimedia Foundation, 15 Apr. 2021, en.wikipedia.org/wiki/Machine_learning.
“Natural Language Processing.” Wikipedia, Wikimedia Foundation, 7 Apr. 2021, en.wikipedia.org/wiki/Natural_language_processing.
“Types of Artificial Intelligence - Javatpoint.” Www.javatpoint.com, www.javatpoint.com/types-of-artificial-intelligence#:~:text=Types%20of%20Artificial%20Intelligence%3A%201%201.%20Weak%20AI,Memory.%205%203.%20Theory%20of%20Mind.%20More%20items.
“What Are Neural Networks || How AIs Think.” YouTube, YouTube, 21 May 2018, www.youtube.com/watch?v=JeVDjExBf7Y.