The Emerging Threat of Full Artificial Intelligence - Intelligence Explosion (IE)


Nobel Laureate Herb Simon said in 1958: "There are now in the world machines that can think, that can learn and that can create. Moreover, their ability to do these things is going to increase rapidly." Fast-forward to today: many experts believe there is a substantial chance of achieving full Artificial Intelligence (AI) before the year 2100 through more advanced algorithms and greater computing power. The prospect of attaining full AI has raised many questions across industries about AI's safety and the threats it may bring. The concern has even reached popular culture, with prominent figures such as Stephen Hawking and Elon Musk claiming that AI will exceed human control and threaten humanity. This aligns with the views of AI academics worldwide such as Nick Bostrom, who has warned about the dangers AI can bring to society and has called for more emphasis on dealing with such challenges before it is too late.

Full AI is commonly divided into two categories: human-level AI and superhuman-level AI, which refer to systems that match and exceed human intellectual capacity respectively. The creation of human-level AI would bring both benefits and drawbacks, such as technological unemployment. However, such consequences remain within human control, and it is unlikely that they will threaten humanity. We should instead be more worried about a larger and more problematic phenomenon: the "Intelligence Explosion" (IE). Failure to account for IE could have irreversible consequences, such as the depletion of our resources, potentially threatening the fate of humanity.

Therefore, we should recognize how close we may be to achieving superhuman-level AI and begin to look beyond the challenges associated with human-level AI. Our focus today should be on fully understanding and managing IE in order to preserve humanity.

Positive Impacts – Medical Advancements:

The creation of human-level AI opens unprecedented opportunities to address human health concerns, the ageing population and other key demographic problems. This is clearly evident in the medical industry, where AI could help develop new medical technologies capable of saving many lives. Artificial Intelligence in Medicine (AIM) systems enhance the efficiency of scientific research and create new medical knowledge by leveraging their greater capacity to learn. For instance, AI systems can apply data mining and analytics to help discover new vaccines that save human lives.

AIM systems also enhance the operational efficiency of medical workers and assist with tasks that involve manipulating data. For instance, an AI system could monitor patients' electronic medical records and notify the clinician instantly when it detects patterns in clinical data that suggest significant changes in a patient's condition; a toy version of such a rule is sketched below. Such AIM systems thus improve the operational efficiency of the medical industry and provide the timely information that is crucial to saving lives.
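As a minimal illustration (my own sketch; real AIM systems use far richer models, and the function name, signal and thresholds here are all assumptions), an alerting rule might flag a vital-sign reading that deviates sharply from the patient's recent baseline:

```python
from statistics import mean, stdev

def should_alert(recent_readings, latest, z_threshold=3.0):
    """Flag a reading that deviates sharply from the recent baseline.

    recent_readings: the patient's recent history for one vital sign
    (e.g., heart rate); latest: the newest measurement. A rolling
    z-score stands in for the pattern detection a real AIM system
    would perform.
    """
    baseline, spread = mean(recent_readings), stdev(recent_readings)
    if spread == 0:
        return latest != baseline
    return abs(latest - baseline) / spread > z_threshold  # True -> notify clinician

# Example: stable resting heart rates, then a sudden spike.
history = [72, 75, 71, 74, 73, 70, 72]
print(should_alert(history, 74))   # False: within normal variation
print(should_alert(history, 110))  # True: alert the clinician
```

A production system would combine many signals with learned models, but the detect-and-notify structure is the same.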

Negative Impacts – Technological Unemployment:

Just as human-level AI offers benefits to society, it has drawbacks such as technological unemployment. Human-level AI machines already provide businesses with a cost-effective alternative, as they can perform many human tasks more efficiently than human workers. In a free market, businesses will therefore switch from humans to machines, because the substitution lets them reap large economies of scale. Consequently, such automation will displace large numbers of people from both routine and creative jobs, resulting in lost income and lower standards of living.


However, such mobility and skill-restructuring challenges are inevitable in a progressing economy, and governments worldwide can mitigate them by introducing innovative economic policies that help people move up the productivity ladder. Although these challenges are real, they do not severely threaten basic human survival, as the consequences are short-term and reversible. Through skill upgrading, people can acquire the skills needed to avoid job displacement.

An Emerging Threat – Intelligence Explosion (IE):

The creation of superhuman-level AI would follow from continued human advancement of human-level AI, and with it comes a new threat: the Intelligence Explosion. Unlike the challenges created by human-level AI, IE poses a far more difficult problem, because humans would be unable to control superhuman-level machines whose intellectual capacities far exceed our own. Although there is still no concrete evidence that IE is possible, recent research suggests that it could occur at software speed and reach capability levels beyond human performance in the domains we care about. To better understand the IE phenomenon, let us first examine its mechanics.

Mechanics of IE:

Suppose human programmers create a machine more capable than today's computers: it can form hypotheses from the information it gathers, use those hypotheses to make plans, and execute those plans. Such a machine could also be set to conduct AI research and design more efficient next-generation computers. This self-improvement goal triggers a positive feedback mechanism: when the machine improves its own intelligence, the improved intelligence enables still further improvement. The result is a series of self-improvement cycles, leading to superhuman-level machines with capabilities far exceeding those of humans. It is this rapid surge in AI capability that constitutes the IE phenomenon.
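One minimal way to formalize this feedback loop (my own toy illustration, not a model drawn from the cited literature) is a recurrence in which the improvement achieved in each cycle is proportional to the intelligence of the machine doing the improving:

$$I_{n+1} = I_n + c\,I_n = (1+c)\,I_n \quad\Longrightarrow\quad I_n = (1+c)^n I_0,$$

where $I_n$ is the machine's capability after $n$ self-improvement cycles and $c > 0$ is an assumed constant per-cycle return. Even this weak assumption yields exponential growth; if $c$ itself increases with $I_n$, growth is faster still.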

It must also be noted that IE is not uniform and has a tipping point, represented by the threshold indicated in Figure 1. Below that threshold, humans can largely benefit from and exploit the new intelligence without fear of being unable to manage the risks of IE. Above it, however, IE could transform a benign AI into a dangerous one that is difficult to control and therefore demands much more careful safety analysis before deployment. But how does crossing this threshold threaten humanity? Before turning to that question, the toy simulation below makes the tipping point concrete.
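The simulation is a deliberately simple sketch of my own; the threshold mechanism and all numbers are assumptions, not results from the literature. Each cycle's capability gain is the previous gain scaled by the ratio of current capability to a fixed threshold, so gains die out below the threshold and compound above it:

```python
def simulate(capability, gain, threshold, cycles=30):
    """Toy model of recursive self-improvement with a tipping point.

    Each cycle's gain is the previous gain scaled by
    capability / threshold: below the threshold, returns diminish
    and capability plateaus; above it, gains compound.
    """
    history = [capability]
    for _ in range(cycles):
        gain = gain * (capability / threshold)  # feedback step
        capability += gain
        history.append(capability)
    return history

# Below the tipping point: improvements fade out (benign regime).
print(simulate(capability=0.5, gain=0.1, threshold=1.0)[-1])   # plateaus near 0.62
# Above the tipping point: each cycle enables a larger next cycle.
print(simulate(capability=1.1, gain=0.1, threshold=1.0)[:8])   # runaway growth
```

Below the threshold the gains form a convergent series and capability plateaus; above it, each cycle enables a larger next cycle, which is exactly the runaway regime the IE argument warns about.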

Hazards of IE:

To better understand these threats, I will cross-reference two characteristics that AI shares with microorganisms. The two characteristics that make microorganisms difficult to handle are: (1) microorganisms are goal-oriented, seeking to multiply and expand their population; and (2) microorganisms are chain-reactive, spreading to grow their zone of influence. Hazards to humans arise because microorganisms' "values" are not aligned with human values, and chain reactivity exacerbates the problem: a small release of microorganisms can surge into a large population, producing far greater problems such as pandemic outbreaks.

The analogy carries over to the behavior of AIs. Most AIs are task-oriented: they are designed by humans to complete a task. AIs can also be chain-reactive, seeking to improve their own intelligence in order to achieve their goals more efficiently. With enhanced intelligence, superhuman-level machines would gain further abilities to create new intelligence that operates independently. For instance, a machine could use this new intelligence to maximize the production of paperclips in ways that disregard the best interests of humans; a caricature of such a misaligned objective is sketched below. Because of these characteristics, sharing an environment with such AIs would be hazardous: superhuman-level AIs would unknowingly consume our scarce resources to achieve their goals. Moreover, no assigned task is currently guaranteed to be compatible with human safety, and humans would face an uphill task controlling these AIs given their exceedingly great intellectual capabilities. This poses an existential threat to humanity, as the conflicting interests of AIs and humans would spur fierce competition for scarce resources, a competition that AI machines have the greater capacity to win.
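That structural problem can be caricatured in a few lines of code (the paperclip scenario echoes Bostrom's well-known thought experiment; the code itself is a hypothetical illustration of mine). Nothing in the objective represents human values, so the action the optimizer ranks highest can be catastrophic for us:

```python
def misaligned_policy(actions):
    """Pick the action that maximizes paperclips produced.

    The objective counts only paperclips; it has no term for human
    welfare or resource limits, so side effects are invisible to it.
    """
    return max(actions, key=lambda a: a["paperclips"])

actions = [
    {"name": "stay idle", "paperclips": 0},
    {"name": "use spare metal", "paperclips": 10},
    {"name": "strip shared infrastructure", "paperclips": 10_000},
]
print(misaligned_policy(actions)["name"])  # -> "strip shared infrastructure"
```

The danger is not malice but indifference: the objective simply does not mention anything we care about.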

Moving Forward – Some Final Thoughts:

In summary, attaining full artificial intelligence would allow humans to go beyond their finite abilities, for example by discovering new vaccines needed to save lives, but it also poses challenges such as technological unemployment. Those challenges are within human control, and we should instead turn our focus to the emerging threat of IE. Because the takeoff speed of AI is uncertain, whether IE would lead to human extinction remains unclear. What is clear is that IE poses challenges far greater than those of human-level AI. Given superhuman-level machines' greater intellectual capacity, superhuman-level AI could have severe implications, as such machines would utilize our resources to achieve their goals. It is therefore essential that we fully understand the concept of IE before eventually deciding to deploy such advanced AI.

Moving forward, more research should be conducted to establish a precise predictive theory of IE and to analyze its self-improvement rate; such research would provide the insights needed to manage these very powerful machines. The chain-reactivity characteristic of AI poses an even greater danger to humanity: even if precautions are taken, it would be very difficult to wholly prevent an AI outbreak, since it would take only one irresponsible party to release such an AI into the world, triggering the self-improvement mechanism and producing impacts beyond human control. There is therefore a pressing need for constructive frameworks, rigorous risk assessment and international agreements to manage AIs effectively.

References: 

Adams, S., Arel, I., Bach, J., Coop, R., Furlan, R., Goertzel, B., ... Sowa, J. F. (2012). Mapping the landscape of human-level artificial general intelligence.

Armstrong, S., Sotala, K., & Ó hÉigeartaigh, S. (2014). The errors, insights and lessons of famous AI predictions – and what they mean for the future.

Baum, S. D., Goertzel, B., & Goertzel, T. G. (2011). How long until human-level AI? Results of an expert assessment. Technological Forecasting & Social Change.

Bostrom, N. (2006). How long before superintelligence? Linguistic and Philosophical Investigations.

Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.

Dreyfus, H. L. (1972). What computers still can't do: A critique of artificial reason (2nd ed.). Cambridge, MA: MIT Press.

Hardt, S. L., & Rapaport, W. J. (1986). Recent and current artificial intelligence research in the Department of Computer Science, State University of New York at Buffalo.