AI-ROBOSEARCH

The Safety and Risks of AI and Bio-AI – Felicia Fernanda, Ta Vu Kieu Nhi, & Christian Chung


TERMINATOR GENISYS T-800

Picture source: http://scifidaily.ru/2015/10/04/32145-prodolzheniya-terminatora-5-genezis-zamorozheny/


The Terminator films depicted the takeover of humanity by machines that gained self-awareness. These machines developed human characteristics and began to fight humanity for control of the Earth. If that were the reality on Earth today, what would be the likelihood of humans and machines co-existing safely?


Based on McCarthy’s (1998) Definition: 


The science and engineering of making intelligent machines that are able to display intelligent behaviour that characterises them and allows them to perceive their surrounding environment. This is sometimes seen as a form of sentience and sapience (Bostrom, 2011).

How Safe Can AI Be?


From the definition alone, it seems very likely that smart machines were built with good intentions. The initial purpose of AI was most probably to make life more efficient for the everyday user, and in time that became the central aim of the technology.

However, a number of risks of AI have been proposed, creating controversy around its further development and leading many researchers and well-known philosophers to be wary of AI.

Three Perspectives:


Hawking (2014), in an interview with the BBC, claimed that AI could be a decisive factor in the end of the human race. He went on to describe the threat of AI as something that would outpace the slow biological evolution of humans, taking over from us as the dominant race running the planet. Hence, "humans, who are limited by slow biological evolution, couldn't compete, and would be superseded" (Hawking, 2014), such that AI could possibly destroy humanity within a century (Cellan-Jones, 2014).

Musk (2014), in support of Hawking's view that AI is dangerous if not kept under tight control, offered a sum of $10 million to projects designed to keep the technology under control. Musk added that the rise of AI would be equivalent to "summoning the demon" and could turn humans into the pets of a super-intelligent race of computers that we helped to create. He also predicted that AI could be more threatening than nuclear weapons.

Kurzweil (2015), however, suggested an evolution from the bio-AI perspective: "hybrid humans" born from integrated technology for space colonisation, giving rise to a race of humans who possess AI technology, and perhaps ultimately a complete AI revolution, i.e. AI taking over from humans.

But, Is It Worth All The Worry? 


Both Hawking (2014) and Musk (2014) make good points about the risk of AI taking over from humans as the "superior race", especially as machines can improve their intelligence at a much faster rate than humans can. The technology they could build may far surpass anything we know and understand today. For example, not only could AI outsmart the financial markets by working through mathematical and statistical possibilities far faster than any human, but it could also develop weapons of mass destruction that we humans may fail to understand.

However, from the perspective of Bio-AI development, the view that AI will take over via "hybrid humans" that aim to replace us (Kurzweil, 2015) may not be so plausible, because technically humans already live with integrated technology, from bionic arms to artificial hearts fitted for medical reasons. On that basis, this argument may be refuted.

Extraordinary Inventions Carry Extraordinary Risks:


One cannot deny that AI is an extraordinary piece of technology: its "intelligence" suggests a personified "smartness" that allows it to "think" based on a fixed set of imperatives. In the short term, AI depends on its controllers, but in the long term it could become a fully self-sufficient, self-maintaining machine that does not rely on human coordination to function, which is why it is deemed "intelligent" (Yudkowsky et al., 2010).

However, There Are Benefits To This Kind of Technology:


A new version of Cortana is currently being developed as a super-intelligent virtual assistant. The intention is to make people's lives more efficient, so that less time is wasted searching for the information one needs at any given moment.

RoboSimian is also known as the "mechanical monkey" because of its specific purpose and design: it was built to venture into dangerous zones in disaster areas and aid search-and-rescue teams, so that humans can stay out of danger themselves.

Medical robots are being built to assist surgeons in operating theatres, allowing them to operate with greater precision and making surgery more accurate and efficient.


Andrew Ng's Perspective on Intelligence vs Sentience  


Ng (2014), chief scientist at the well-known Chinese search company Baidu, stated that the difference between intelligence and sentience is very significant. He argued that an increase in intelligence does not necessarily lead to an increase in sentience.

What's the Logical Approach, You May Ask?  


It can be said that the logical approach to this argument is to look at what we are made of. Our intelligence, usually associated with the brain, arises from a system of neurons shaped by our DNA over hundreds of millions of years of evolution. In that sense, the brain and its intelligence is essentially the most complicated and sophisticated piece of technology currently in existence. So far, no other intelligence has surpassed it, certainly not at the current stage of technology available to us.

Recently, a comparison was made between the largest artificial neural networks and the human brain, and the conclusion was that the former remain hundreds of times smaller than the latter (Ng, 2015). In this sense, one could suggest that machines are only really good at taking in and processing huge amounts of information in a short space of time; they may never reach the same level of consciousness and independent thought as humans, limiting aspects like creativity and innovation, which would be key factors in their survival and evolution if they were planning to take over.
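To put rough numbers on that comparison, here is a minimal back-of-the-envelope sketch. The figures below are commonly cited approximations, not values taken from Ng's talk, and estimates of the gap range from hundreds to tens of thousands of times depending on what exactly is counted:

```python
# Rough comparison of "connection counts" in brains vs artificial networks.
# Both figures are approximate, commonly cited estimates (assumptions).
brain_synapses = 1e14      # ~100 trillion synaptic connections in a human brain
largest_ann_2015 = 1e10    # ~10 billion parameters in the largest mid-2010s networks

ratio = brain_synapses / largest_ann_2015
print(f"Under these estimates, the brain has roughly {ratio:,.0f}x more connections.")
```

Whatever the exact figure, the point stands that the brain remains orders of magnitude larger than any artificial network of that era.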

So, Where Does That Leave Us? Are We Safe? 


The safety of AI is therefore limited by the tasks machines can perform. They may be able to build weapons and try to control human behaviour, but humans may equally be able to think their way around such "systems" and figure a way out, stopping any takeover from succeeding. Thus, there is still hope, and who is to say that AI won't be "friendly" (Yudkowsky et al., 2010)? This is an idea that will be explored in another article.
