
The Need to Regulate AI

January 31, 2024
By 021127 BRONZE, Excelsior, Minnesota

I love books! Literally, if anyone gives me one of those invaluable bundles of paper, I will read it, especially if the genre is sci-fi or dystopian. Whether it’s a robot apocalypse or humans living on a totally different planet, these genres open up the world of technology and how it just keeps expanding. Technology shapes our lives, and one of the most talked-about technologies today is artificial intelligence. I mean, it’s pretty much what we see as the future: robots cooking first-class meals with perfect precision, drones delivering our groceries, and vehicles driving themselves. It seems like a dream, but it is slowly becoming reality. Many people are awed and amazed by the skill of this technology. But few are thinking about the dangers of AI.

For example, according to an article by Alexandra S. Levine, a Forbes staff writer, Krishna Sahay, a very famous TikToker, created a viral fake news segment. In it, he is interviewed as though he were the lone survivor of a recent school shooting. What makes this even more shocking is that the anchor appeared to be Anne-Marie Green, a well-known anchor who works for CBS News. The whole video was created using AI. Imagine the impact this could have on our society: parents, teachers, and students all having fear and anxiety instilled in them. And this is just one of many examples of AI-related problems.

The government needs to regulate AI because of its privacy and security risks, its bias, and the job displacement it could cause. In this article, we will first open our books to learn what AI is, next flip the page to go over my reasoning for regulation, and lastly close the cover with some solutions to consider.

Let’s first skim through what AI is. Most of us have heard the term in the news, sci-fi movies, and school; in fact, since my parents work in the IT industry, it’s a very familiar dinner topic for me. According to Britannica, artificial intelligence, or AI, refers to a computer or computer-controlled robot having abilities commonly associated with intelligent beings. Some of the earliest thinking about AI came from the British mathematician and computing trailblazer Alan Turing. Turing predicted that one day computers would have the same capabilities and skill as a human. To test this, he proposed a unique experiment: a hidden human and a hidden computer would both answer identical questions from a judge, and if, at the end, the judge could not tell which answers came from the human and which from the computer, the computer would pass. Today this is called the Turing test. Since then, scientists have been trying to create technologies using artificial intelligence. Among the tasks studied under the banner of AI are game playing, natural-language understanding, fault diagnosis, robotics, and supplying expert advice. Today AI is very popular and commonly used. Most people use it on an everyday basis and don’t even know it; for example, if you’ve ever asked Siri or Alexa something, you were using AI! But though AI seems appealing, there are many hazardous risks associated with it, beginning with privacy and security risks.
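
Since I’m learning Python, here is a toy sketch of what the Turing test setup described above might look like in code. It is only an illustration under my own assumptions: the machine_answer function is a made-up stand-in, not a real AI system.

```python
import random

# Toy sketch of the Turing test setup (illustrative only).
# "machine_answer" is a made-up placeholder, not a real AI system.

def human_answer(question):
    # The hidden human types their own reply.
    return input(f"(Hidden human) {question} ")

def machine_answer(question):
    # Placeholder reply; a real system would generate its own answer.
    canned = {"What is your favorite book?": "I enjoy reading science fiction."}
    return canned.get(question, "That's an interesting question.")

def run_round(question):
    answers = [("human", human_answer(question)),
               ("machine", machine_answer(question))]
    random.shuffle(answers)  # hide which answer came from which source
    for label, (_, text) in zip("AB", answers):
        print(f"Answer {label}: {text}")
    guess = input("Which answer came from the human, A or B? ").strip().upper()
    actual = "A" if answers[0][0] == "human" else "B"
    print("You guessed right!" if guess == actual else "The computer fooled you!")

run_round("What is your favorite book?")
```

In the real test, of course, the judge chats freely with both sides rather than reading one canned answer, but the idea is the same: if the judge can’t tell them apart, the machine passes.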

Have you ever searched for something on the internet, say Skittles, and later that day started seeing lots of ads for Skittles? Well, that’s AI in action. Many companies pass our personal information and data to data mining companies. According to The Economic Times, this can include names, addresses, financial information, medical records, and even Social Security numbers! Data mining companies analyze that information to figure out what our hobbies and interests are. And according to the Office of the Victorian Information Commissioner, these companies are now using AI, which can comb through our information far more quickly and thoroughly than trained people can. Thanks to AI, ads that once took days to target are personalized for customers in near real time. This puts our personal information at higher risk, since more of it is being efficiently monitored and effectively used. Another concern is AI’s power getting into the wrong hands. AI is a tool that many people have access to. According to an article in Forbes published in the summer of 2023, as AI gets more sophisticated, that accessibility allows hackers and spiteful actors to exploit system vulnerabilities and launch more advanced and impactful cyberattacks. AI is also being used for autonomous weaponry, which raises a huge concern about rogue states or non-state actors using the technology, especially when humans have almost no control over the critical decision-making process.

Typing passwords can be a pain, which is why many people use facial recognition technology. But this technology keeps repeating and aggravating racism, sexism, and religious discrimination. According to ProCon.org, facial recognition systems identify white men with ease but fail up to 34% more often when trying to identify the face of a Black woman. In addition, an article from Harvard University describes how AI surveillance cameras have been used disproportionately in neighborhoods with many people of color. For example, in 2016 Detroit launched a surveillance program called Project Green Light, which installed security cameras throughout the city. The police department could use the footage to match residents against criminal databases, driver’s license photos, and state ID photos; pretty much everyone who lived in Michigan was in the system. But the catch was that the cameras weren’t distributed evenly: the majority were placed in areas with many Black and Hispanic residents, and barely any in majority Asian and white areas. And this isn’t the only shocking example. Ninety-nine percent of the roughly 42,000 people in the NYPD’s gang database are Black and Latinx. Facial recognition draws on this kind of data to predict who might be a criminal, lengthen sentences, and deny bail, and to make matters worse, many of these records are false. Facial recognition is also being exploited to target Muslim people. In China, a Muslim minority group, the Uyghurs, has been facing huge inequities. According to an article in The Guardian, China has installed facial recognition cameras all around the regions where they live, and this system is contributing to the unjust arrests, torture, and horrific camps that the Uyghurs and other ethnic minorities are enduring.
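
To make a statistic like that 34% gap concrete, here is a tiny sketch, with made-up counts rather than real audit data, of how an error-rate gap between groups is computed in this kind of study:

```python
# Toy sketch with hypothetical numbers: compare each group's error rate
# on the same face-matching task, the way bias audits report their results.

results = {
    # group: (misidentified faces, faces tested) -- made-up counts
    "white men":   (8, 1000),
    "Black women": (348, 1000),
}

error_rates = {group: wrong / total for group, (wrong, total) in results.items()}
for group, rate in error_rates.items():
    print(f"{group}: {rate:.1%} error rate")

gap = error_rates["Black women"] - error_rates["white men"]
print(f"Gap: {gap:.1%} more errors for Black women")
```

Real audits test thousands of labeled photos per group, but the comparison itself is this simple, which is why such lopsided numbers are so hard to excuse.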

As AI’s capabilities gain more width and depth, concern about job displacement rises sharply. According to an article in The New York Times, earlier in 2023 Sam Altman, the chief executive of OpenAI himself, told a Senate hearing that “There will be an impact on jobs” and that “action by government” would be required. The speculation is that millions of jobs could be fully automated by AI; in fact, according to an estimate by Goldman Sachs, the number could be equivalent to 300 million full-time jobs globally. The scariest thing about AI is that it is unlike other automation. According to Harry Holzer, an economist at Georgetown University, if you lose your job because of AI, it is onerous to find something new: AI is a moving target that just keeps growing and taking over more tasks. And it will not just increase unemployment; it will also widen America’s already huge income and wealth inequality by allowing a few people to make billions while others make very little. A study by the Bank for International Settlements, which drew on data from over 86 countries, found that investment in AI is associated with higher economic inequality.

AI comes with big problems, risks, and challenges, but government regulation can help address safety and privacy concerns, ensure ethical standards, and prevent the misuse and abuse of AI systems. But what should these regulations look like? One step is to invest in the development of unbiased algorithms and diverse training data sets, which will encourage a fairer AI ecosystem. Another step that would mitigate security risks is creating security standards that are applied globally. The government should also establish strict data protection regulations and safe data practices. Finally, the government should proactively identify the job displacement AI will cause and help retrain affected workers for alternative career paths.

AI doesn’t have to be the evil we see in dystopian novels. It has a ton of potential, but it’s up to us to harness the best of it to serve humanity. When the government starts implementing regulations on AI, we will be headed toward a sustainable and successful future.


Bibliography:

“Artificial Intelligence Summary.” Encyclopædia Britannica, Encyclopedia Britannica, inc., www.britannica.com/summary/artificial-intelligence. Accessed 23 Jan. 2024. 

Levine, Alexandra S. “In a New Era of Deepfakes, AI Makes Real News Anchors Report Fake Stories.” Forbes, Forbes Magazine, 13 Oct. 2023, www.forbes.com/sites/alexandralevine/2023/10/12/in-a-new-era-of-deepfakes-ai-makes-real-news-anchors-report-fake-stories/?sh=6d71630157af.

“AI and Privacy: The Privacy Concerns Surrounding AI, Its Potential Impact on Personal Data.” The Economic Times, economictimes.indiatimes.com/news/how-to/ai-and-privacy-the-privacy-concerns-surrounding-ai-its-potential-impact-on-personal-data/articleshow/99738234.cms?from=mdr. Accessed 28 Jan. 2024.

“Artificial Intelligence: Understanding Privacy Obligations.” Office of the Victorian Information Commissioner, ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-understanding-privacy-obligations/. Accessed 28 Jan. 2024.

Marr, Bernard. “The 15 Biggest Risks of Artificial Intelligence.” Forbes, Forbes Magazine, 5 Oct. 2023, www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/?sh=6afbf5b72706.

“Is Artificial Intelligence Good for Society? Top 3 Pros and Cons.” ProCon.Org, 29 Nov. 2023, www.procon.org/headlines/artificial-intelligence-ai-top-3-pros-and-cons/.

SITNFlash. “Racial Discrimination in Face Recognition Technology.” Science in the News, 26 Oct. 2020, sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/.

“‘There’s Cameras Everywhere’: Testimonies Detail Far-Reaching Surveillance of Uyghurs in China.” The Guardian, Guardian News and Media, 30 Sept. 2021, www.theguardian.com/world/2021/sep/30/uyghur-tribunal-testimony-surveillance-china.

Goldberg, Emma. “A.I.’s Threat to Jobs Prompts Question of Who Protects Workers.” The New York Times, The New York Times, 23 May 2023, www.nytimes.com/2023/05/23/business/jobs-protections-artificial-intelligence.html.

Kelly, Jack. “Goldman Sachs Predicts 300 Million Jobs Will Be Lost or Degraded by Artificial Intelligence.” Forbes, Forbes Magazine, 4 Oct. 2023, www.forbes.com/sites/jackkelly/2023/03/31/goldman-sachs-predicts-300-million-jobs-will-be-lost-or-degraded-by-artificial-intelligence/?sh=34c9dcd7782b.

Cornelli, Giulio, et al. “Artificial Intelligence, Services Globalisation and Income Inequality.” The Bank for International Settlements, 25 Oct. 2023, www.bis.org/publ/work1135.htm.


The author's comments:

I’m a teen who is nerdy and into the arts at the same time. I love to sing and play cricket, and I’m a coder: I’m learning Python and have been certified by the Python Institute. Technological advances have always intrigued me, and AI especially has sparked great interest and curiosity in my sponge-like brain. That attraction is what inspired me to write an essay that explains why and how AI should be regulated by exploring the impacts it will have on our world.

