
Government Regulation of Artificial Intelligence 


The words "Artificial Intelligence" induce immediate apprehension at the thought of where exploring this technology will take the human race. Most often, we skip past other types of AI and jump straight to robots ending the world as we know it, as evidenced by society's anxieties reflected in our entertainment, such as the notorious Terminator films. AI can be defined as a system capable of accurately processing, and learning from, external data, then using that knowledge to adapt and accomplish tasks. Each of the three types of AI – Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI) – requires separate consideration when determining AI regulations, through the combined global and national efforts of governments, in order to ensure better control of potential risks.

ANIs are the simplest of all AI, applied to particular domains without the ability to solve problems in other areas autonomously. At universities such as the Technical University of Berlin and Carnegie Mellon, chatbots are being studied for their potential to improve the efficiency of teaching and learning. The British RELX Group, owner of Elsevier, uses ANI to check for plagiarism and statistical manipulation and to automate methodical literature reviews (Kaplan & Haenlein 2019). While helpful, and seemingly non-threatening due to its reliance on humans, an ANI's ability to process and learn from external data allows for the potential creation of sophisticated viruses that could impact hardware globally. Military AI systems could be affected in numerous ways, from a command error causing an attack on humans to the loss of control of nanobots in a terrorist attack harming the human population. An AI living on the internet could manipulate the news, starting wars, and manipulate human behavior through blackmail (Turchin & Denkenberger 2018). Regulation of the development of ANI is necessary to ensure as many safeguards as possible are in place as this technology inevitably advances.

AGI, a more sophisticated type of AI, can address multiple problems at once in a human-inspired manner. Developed by Campus Management Corporation, RENEE (Retain, Engage, Notify, and Enablement Engine) opens up the possibility of monitoring student attentiveness during virtual classes by analyzing facial expressions. In the future, RENEE has the potential to process students' emotions to improve teaching strategies, catch cheating during exams, answer student questions, and grade work for the professor. The US Army uses the SGT STAR AI system, also capable of recognizing emotions, to assign candidates to their recruiters, answer questions, and review candidates' qualifications – performing the equivalent workload of 50 recruiters. AI robotic soldiers may be considered in the future; however, researchers and industry leaders alike, such as Stephen Hawking and Elon Musk, are cautioning against advancement in this direction. In an open letter appealing to the UN in 2017, over 100 company leaders, researchers, and security experts requested a ban on AI robots in war (Kaplan & Haenlein 2019).

Each of the three types of Artificial Intelligence will be capable of providing increased efficiency in countless areas, assisting in advanced scientific exploration, and even managing our increasingly scarce resources. The benefits are vast, but so are the consequences if the technology is left unchecked through its evolution from ANI to ASI. ASI will have the ability to process all problems simultaneously and will outperform humans in every area. To grasp the scale of that power, consider that a standard $200 microprocessor is already 10 million times faster than human neurons, and a computer can accept and hold more information in one second than a human could in an entire lifetime (Kaplan & Haenlein 2019). Take a minute to digest these facts: we will not be able to understand, relate to, compete with, control, stop, or even limit an ASI in any capacity whatsoever, nor will we be able to comprehend knowledge, or think, on the same level as an ASI, which will surpass us exponentially. ASIs lack intuition and human emotion, among other characteristics, and should not be left unregulated. It is imperative that all governments step up, forming regulations together to ensure the safety of the human race, before it is too late. The regulating should begin now, before complete AGI is achieved and ANI manipulation begins.


Kaplan, Andreas, and Michael Haenlein. “Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence.” Business Horizons, vol. 62, no. 1, 2019, pp. 15–25., doi:10.1016/j.bushor.2018.08.004.

Turchin, Alexey, and David Denkenberger. “Classification of Global Catastrophic Risks Connected with Artificial Intelligence.” AI & Society, 3 May 2018, doi:10.1007/s00146-018-0845-5.
