The Militarization of AI
AI is rapidly taking over every aspect of our lives; however, the militarization of AI is a development most people aren't yet aware of. Even those who are aware often cannot grasp the gravity of the threat that AI in the military poses. This article is a mini case study that aims to explore the most pivotal aspects of this issue in a concise yet informative manner.
Recent Events
- NATO's Artificial Intelligence Strategy (2021)
- International Conference on Robotics and Automation for Humanitarian Demining and Counter-Terrorism (2022)
- The Hague Conference on Responsible Use of Artificial Intelligence in the Military Domain (2023)
- UN Secretary-General's Report on Addressing the Challenges of Lethal Autonomous Weapons Systems (2023)
- "Explainable AI for Defense Applications" Workshop (2023), hosted by the U.S. Department of Defense
Key Actors:
- Governments of countries like the USA, China & Russia are at the forefront. Specific organizations include the US Department of Defense (DoD), which has established the Joint AI Center, and the Chinese Ministry of National Defense.
- Defense contractors like Lockheed Martin & BAE Systems are looking to expand their military portfolios by integrating AI.
- Several academic and research institutions such as MIT and Japan’s National Institute for New Generation Computing are publishing papers on ethical guidelines. The scientific community is very wary of these advancements.
- Internationally, the UN Office for Disarmament Affairs & the EU are leading the way on ethical AI in the military
- Finally, several tech corporations are looking to get a piece of the pie. Recently, OpenAI lifted its ban on the use of its generative AI for military purposes.
Advantages of General-Purpose Use ("Beneficial Uses"):
- AI can be utilized to inform strategic decisions such as troop deployment or resource allocation, which not only saves soldiers' lives but also prevents the waste of financial resources
- Indirectly, AI helps with supply chain optimization for the delivery of equipment and supplies (a toy sketch of this kind of allocation appears after this list)
- AI combined with virtual reality can be used to simulate combat scenarios and enhance soldier training experiences
- AI-based targeting systems can rapidly process vast amounts of data to prioritize and assign thousands of targets for piloted aircraft
- Missile detection and prevention systems
- Border patrol and border surveillance systems equipped with identification systems
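To make the resource-allocation idea above concrete, here is a minimal, purely illustrative Python sketch. The greedy heuristic, the names, and every number in it are my own assumptions rather than any real military system: it simply splits a limited transport capacity across competing supply requests, favoring urgent requests that are cheap to deliver.

```python
# A minimal, illustrative sketch (not any real military system): a greedy
# allocator that splits a limited transport capacity across supply requests,
# favoring high-priority requests with low delivery cost. All names and
# numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class SupplyRequest:
    destination: str
    units_needed: int
    priority: float       # higher = more urgent (hypothetical score)
    cost_per_unit: float  # fuel/time cost to deliver one unit

def allocate(requests: list[SupplyRequest], capacity: int) -> dict[str, int]:
    """Greedily assign capacity to the best priority-per-cost requests."""
    plan: dict[str, int] = {}
    # Sort by "value density": urgency gained per unit of delivery cost.
    for req in sorted(requests, key=lambda r: r.priority / r.cost_per_unit, reverse=True):
        if capacity <= 0:
            break
        shipped = min(req.units_needed, capacity)
        plan[req.destination] = shipped
        capacity -= shipped
    return plan

if __name__ == "__main__":
    demo = [
        SupplyRequest("forward base A", 120, priority=0.9, cost_per_unit=2.0),
        SupplyRequest("depot B",        300, priority=0.4, cost_per_unit=1.0),
        SupplyRequest("field hospital",  80, priority=1.0, cost_per_unit=3.0),
    ]
    print(allocate(demo, capacity=250))
```

Real logistics planners would use far richer models (multiple constraints, routing, uncertainty), but the sketch shows the kind of prioritization decision the bullet points describe.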
Bad Actors: However, the open-source and decentralized nature of AI means that bad actors can use it to their advantage. Terrorist groups, militias, rogue nations, ideology-based groups, and other malicious non-state actors also have access to AI development and can exploit it. The proliferation of AI is making it more accessible and cheaper, which only opens further avenues for exploitation. For example, autonomous systems could be used for swarming attacks, but swarms could also be used defensively.
Autonomous Warfare & Drone Strikes:
What are some examples?
Autonomous stationary sentry guns: The first of its kind to integrate surveillance, tracking, firing, and voice recognition is the SGR-A1, jointly developed by Samsung Techwin (now Hanwha Aerospace) and Korea University in a highly classified project to assist South Korean troops in the Korean Demilitarized Zone.
Autonomous killer robots: "Slaughterbots" are autonomous robotic systems able to select and attack targets without intervention by a human operator. Some even function without any human in the loop and are currently being tested in several countries; in such systems, the decision to deploy lethal force is delegated to a machine. This could be applied to battle tanks, fighter jets, and much more, and such autonomous weapons could be trained to operate in coordinated platoons to overwhelm enemy defenders.
Autonomous drones and swarms: In October 2016, the United States Strategic Capabilities Office launched 103 Perdix drones, which communicated using a "distributed brain" to assemble into a complex formation, travel across a battlefield, or regroup into a new formation. The swarm was built from a design by MIT engineering students using commercially available components. In theory, drone swarms could be scaled to tens of thousands of drones and become future weapons of mass destruction.
These drones can make decisions about the use of lethal force on their own and can even strike targets based on outputs from facial recognition and image recognition software. Using machine learning, they can decide on the spot which targets and locations to strike.
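To give a sense of how a "distributed brain" can produce formations, here is a toy Python sketch of boids-style flocking. It is not the Perdix algorithm (which is not public); every rule and constant is a hypothetical stand-in. Each drone steers toward the swarm's center of mass and away from crowded neighbors, and formation-like behavior emerges without a central controller.

```python
# A toy, purely illustrative sketch of swarm behavior (boids-style cohesion +
# separation). It is NOT the Perdix algorithm; all rules and constants are
# hypothetical stand-ins for how agents can hold a loose formation.
import random

def step(positions, cohesion=0.05, separation=0.15, min_dist=1.0):
    """Advance every agent one tick using simple steering rules."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Pull toward the swarm's center of mass.
        cx = sum(px for px, _ in positions) / len(positions) - x
        cy = sum(py for _, py in positions) / len(positions) - y
        # Push away from any neighbor that is too close.
        sx = sy = 0.0
        for j, (ox, oy) in enumerate(positions):
            if i != j and abs(ox - x) < min_dist and abs(oy - y) < min_dist:
                sx += x - ox
                sy += y - oy
        new_positions.append((x + cohesion * cx + separation * sx,
                              y + cohesion * cy + separation * sy))
    return new_positions

if __name__ == "__main__":
    swarm = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(20)]
    for _ in range(100):
        swarm = step(swarm)
    print("final spread:", max(x for x, _ in swarm) - min(x for x, _ in swarm))
```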
What are the benefits?
- They can engage in wars without risking the lives of military personnel
- They act instantaneously on receiving commands, avoiding the interference of human negligence, empathy, or hesitance
- They provide highly targeted strikes that can avoid injury to unintended targets
The Moral Dilemmas?
- Whether AI should be given control over life-and-death scenarios is highly debatable.
- Some argue that this inherently dehumanizes the people targeted and creates a culture that is tolerant of widespread killing.
- It raises the question of who should be blamed if an AI accidentally kills innocent bystanders. Moreover, one cannot control the end decision of an autonomous AI.
- The removal of human common sense, the ability to look at a situation and refrain from authorizing lethal force even in the face of indicators pointing to its use, is another drawback of these systems (a minimal sketch of the safeguard these systems remove follows this list).
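As a minimal illustration of the "human in the loop" safeguard discussed above, the hypothetical Python sketch below gates any lethal action behind explicit operator approval; fully autonomous systems remove exactly this gate. It depicts no real weapons interface, and all names and thresholds are assumptions.

```python
# A minimal, hypothetical sketch of a "human in the loop" gate: the software
# may propose an action, but it cannot proceed without explicit operator
# approval. An illustration of the safeguard critics say autonomy removes,
# not a depiction of any real system.
from dataclasses import dataclass

@dataclass
class Proposal:
    target_id: str
    confidence: float  # e.g. output of a recognition model (hypothetical)

def request_authorization(proposal: Proposal) -> bool:
    """Block until a human operator explicitly approves or rejects the action."""
    answer = input(f"Engage {proposal.target_id} (confidence {proposal.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def decide(proposal: Proposal) -> str:
    # A fully autonomous system would act on confidence alone; keeping this
    # gate preserves the human judgment and restraint discussed above.
    if proposal.confidence < 0.99:
        return "stand down: confidence too low"
    if not request_authorization(proposal):
        return "stand down: operator declined"
    return "authorized by human operator"

if __name__ == "__main__":
    print(decide(Proposal(target_id="demo-target", confidence=0.995)))
```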
Global Rat Race:
Integrating technology into military applications is a lucrative and profitable industry for several companies and agencies, and the advent of AI has the potential to accelerate the development of far more dangerous military applications. Every nation aims to gain a competitive advantage and a sense of superiority, and an AI-backed military is seen as an ideal way to do so. This competition can escalate tensions and increase conflicts; furthermore, increased proliferation can lead to the uncontrolled development of militarized AI.
The rapid development of AI and its technical complexity make it challenging for regulation to keep up. International cooperation is also difficult: effective regulation requires consensus among nations, which is hard to reach given divergent national interests. In addition, the dual-use nature of AI technology (i.e., its use for both civilian and military purposes) complicates regulation. Furthermore, selling this technology to military-heavy countries is a highly profitable trade avenue.
Futuristic Possibility:
With the rise of robotics and artificial general intelligence, army divisions made up entirely of AI-driven robots are a realistic possibility, whether humanoid soldiers similar to the Boston Dynamics robots or collections of autonomous robot dogs, tanks, and drones. Some argue this would actually be much more humane, while others feel it is just the start of another cold war, another race toward the militarization of AI. Integral missions today already use AI, so who is stopping entities from creating entire AI-powered armies?
Remember, this is just the beginning...
I am really passionate about ethical AI, and this post explores the double-edged nature of AI, specifically in the context of its role in militarization.