1. Introduction Letters
a. Secretary General
b. Committee Director
c. Committee Assistant Director
2. Introduction to the Topic: Autonomous Weapons and AI in Warfare
3. Powers of the Committee: The Disarmament and International Security Committee
4. History of the Topic
a. Emergence of Lethal Autonomous Weapons (LAWs)
b. Usage of Artificial Intelligence in Warfare
5. Current State of Affairs
a. Modern forms of LAWs and AI in Warfare
b. Usage of this technology in current context
6. Case Studies
a. Case Study 1: The Phalanx CIWS
b. Case Study 2: Kargu-2 Autonomous Loitering Munition
7. International Responses and Previous Actions
a. Conventions, Legislation, and Forums
b. Collective Operations
c. Sanctions and Retaliatory Measures
8. Potential Blocs & Positions in Committee
a. High Military Capability Nations (HMCNs)
b. Medium Military Capability Nations (MMCNs)
c. Low Military Capability Nations (LMCNs)
9. Possible Solutions
10. QARMAs
11. Position Paper Guidelines
12. Bibliography
LETTER FROM SECRETARY GENERAL
It is with great pleasure and excitement that I warmly welcome all of you to the second edition of RAIMUN. As a former delegate of the Raimondi Team and now Secretary General of RAIMUN 2024, I am truly grateful for this position and for being able to host such a large-scale conference. Our dedicated Secretariat and our talented team of staff and faculty have worked tirelessly to bring you the best conference possible. Growing up in a household with two lawyers as parents, our family conversations naturally evolved into lively debates,
particularly during dinners when we gathered to discuss current political events. These moments, filled with
spirited dialogue, remain the fondest and most cherished memories that I have with my family. It wasn't
until 2019 that I had the opportunity to apply the skills honed during these conversations.
At the age of twelve, I enrolled in my first Model United Nations (MUN) experience. Despite my initial
lack of familiarity with the intricacies of the MUN format, my enthusiasm and research helped me not only
navigate the conference successfully but also garner an award. But beyond the accolades, what truly
resonated was my growing ambition to further immerse myself in MUN, taking every opportunity available
to better my skills.
As Secretary-General, I aspire to provide each participant, regardless of their level of MUN experience,
with an enriching and transformative experience. My own journey in MUN has significantly shaped my
character and I take pride in discovering a space where my extroverted, witty, and dedicated personality
can be expressed. I also appreciate the opportunity to be surrounded by fellow future leaders and bright
minds.
I extend a warm invitation for each delegate to set a personal goal before the conference starts, whether it be forging new friendships, refining their negotiation and public-speaking skills, or simply enjoying the
experience. Embrace the challenge and encourage both yourself and fellow delegates to overcome any
obstacles that may arise during committee sessions. We have diligently worked to ensure transparency and
fairness are present in every aspect of the conference. Should you have any inquiries or concerns, do not
hesitate to reach out to me through the channels provided in the Handbook.
Kind Regards,
LETTER FROM THE COMMITTEE DIRECTOR
My name is Valerie Delgado, and I am beyond excited to be your committee director. I am an undergraduate
student at McGill University in Montreal, pursuing a Major in Political Science, and a double Minor in
Economics and Philosophy. I attended my first Model UN conference in late 2016, making this my 8th year
in MUN! Throughout these years, I have had the chance to organize, chair, and compete at over 50
conferences. What I cherish most deeply about Model UN is truly the diversity of perspectives that you can encounter, and the people you meet along the way. I really hope to make this committee a space where you get to learn and have fun.
While on my gap year in Peru, I was a member of the Peruvian Debate Society, with whom I attended two
Harvard conferences. Now at McGill, I am the Director of General Assemblies, ECOSOCs and Specialized
Bodies for the McGill Delegation, training and preparing our delegates for conferences across North
America. Additionally, I was recently elected as the Secretary-General of McMUN 2025 - the 2nd largest
conference in North America, and best in Canada! Aside from Model UN, I really enjoy playing sports, and
I love trying out new recipes! A fun fact about myself is that I have studied a concerning number of
languages. I am fluent in Spanish, English and Italian, I took German for four years, Russian for a semester,
and I am currently learning French.
As an alumna from Antonio Raimondi, I am thrilled to be joining the Dais team for RaiMUN 2024. I had
the pleasure of being the Undersecretary General for RaiMUN 2018 - the first, and until now only edition
of RaiMUN! - and I am so glad this amazing conference is coming back to the circuit. As the former Head
Delegate of the Raimondi Delegation, I have seen many of the now secretariat members grow as delegates
and individuals, and I am extremely proud of their achievements and the passion and dedication that they
are bringing to this team and conference.
Regarding our committee and my evaluation criteria, I want to encourage you to address this topic with
openness, creativity, and a lot of respect and consideration. I will value well-researched initiatives that are
feasible, relevant, and innovative. Do your best to be a positive leader: rather than speaking the loudest or
the most, be a delegate that moderates conversations and ensures everyone feels valued and comfortable
sharing their ideas. For me, the most important thing to evaluate aside from engaging speeches and
substantial research is your disposition to collaborate, and the team environment that you can create.
I wish you the best of luck preparing for the conference, and please do not hesitate to reach out to us if you
have any questions!
Best regards,
Valerie E. Delgado
Committee Director
valerie.delgado@mail.mcgill.ca
LETTER FROM COMMITTEE ASSISTANT DIRECTOR
Dear delegates,
My name is Alejandro Abuin Siret, and I am deeply excited to welcome you all to the DISEC committee
of this RaiMUN edition!
I cannot begin introducing myself without briefly mentioning my roots. Although I practically grew up in Lima, I was born in Cuba, a place where international relations and the matter of human rights depict an everything-but-ordinary landscape. Such circumstances triggered my interest in international politics and
my consequent love for Model UN, having attended my first conference a couple of years ago in high
school. Ever since, I have attributed a great part of my personal growth to the inherently demanding nature of this activity.
I am currently a member of the Peruvian Universities (PU) debate team, with whom I have just taken part in Harvard World Model United Nations 2024 (known as the Olympics of Model UN), held in Taipei City, Taiwan, this year. There, I obtained the highest commendation: a Diplomacy Award, fueling my motivation for such a fruitful activity. Adding to my international debating experience, I have also attended Harvard National Model United Nations, where I also achieved the highest commendation for the conference: a Best Delegate award.
Having just begun my gap year before college, I am deeply convinced about studying economics, with a special interest in the subjects of monetary policy and shadow economies. In my free time, I mostly end up hanging out with my friends, whom I can't spend more than two days without seeing! I also really enjoy baking.
With not much to add, I hope that you deeply research our topic, and above all, that you enjoy the journey!
Looking forward to meeting you at the conference, I wish you the best of luck in your preparation process,
during which you can feel free to contact us through the committee mail.
Kind regards,
Alejandro Abuin
Committee Co-Director
alejandroabuin.dgh@gmail.com
INTRODUCTION TO THE TOPIC: AUTONOMOUS WEAPONS AND A.I. IN WARFARE
Technology in warfare and military affairs has always been a preoccupation for humanity. Ever since the Industrial Revolution, fast-paced changes in technology have only become quicker and more efficient, and this natural fear and concern is not new: we can reference the alarm that atomic bombs caused among the global population in 1945 as a past example of collective fright. However, this fear has never been more alive than in the twenty-first century, with the implementation of two very advanced and useful, yet dangerous tools: AI and autonomous weapons. As representatives of the international community, delegates must engage in complex and thorough dialogue and develop frameworks adaptable to each nation and sector of the international community; the choices made today will have profound implications for the future of warfare and the very nature of armed conflict.
When unregulated, AI can pose a great risk when used in conflicts. The main concern at hand is that, when a weapon operates on its own, it lacks the human conscience to acknowledge when and where it must be activated. When something, even if made by humans, possesses a trait only humans are meant to have, whether it generates a negative or positive impact escapes human hands. Their capacity for targeting, conflict escalation, and indiscriminate attacks is what renders AI-powered and autonomous weapons the most dangerous type of armament.
Real-life examples of these risks are the development and deployment of loitering munitions, also known as suicide drones. An example of the dangerous use of suicide drones is the series of U.S. drone strikes throughout the Middle East, which caused an estimated 10,888-20,823 casualties in total. Suicide drones were just the beginning: as of 2024, AI and autonomous warfare have become faster, better, and stronger, and as DISEC, the Disarmament and International Security Committee, we cannot undo what has been done in the past, but we can prevent further damage in the future. As of now, there are around 110 global armed conflicts being monitored, and DISEC's role in their de-escalation is crucial, since it will not only bring together different stakeholder perspectives, but also establish a framework that truly reaches the middle ground, building consensus and using verification methods in its resolutions to ensure long-lasting and effective solutions to the world's most urgent matters.
HISTORY AND POWERS OF THE COMMITTEE
DISEC, also known as the Disarmament and International Security Committee or the First Committee, is one of the six main committees of the United Nations General Assembly, established in 1945, as the de-escalation of global conflicts became a pressing issue in the aftermath of World War II. Well into the twenty-first century, DISEC has changed throughout the 79 years of its existence alongside changes in warfare technologies, but something that has never changed is DISEC's main task: addressing emerging security threats through international cooperation. The second United Nations Secretary General and Nobel Peace Prize laureate, Dag Hammarskjöld, famously said that the United Nations "was not created to take mankind to heaven, but to save humanity from hell," and these powerful words align perfectly with the Disarmament committee's goal.
In 1993, the General Assembly reorganized its committee structure, and the First Committee assumed its current mandate as the Disarmament and International Security Committee. In this role, DISEC continued to focus on disarmament and international security, but with a particular emphasis on conventional weapons.
Since its inception, DISEC has been instrumental in promoting disarmament and non-proliferation
initiatives, including the Treaty on the Non-Proliferation of Nuclear Weapons, the Chemical Weapons
Convention, and the Biological Weapons Convention. The committee has also been involved in discussions
on the role of small arms and light weapons in promoting conflict and instability, and on the disarmament
of landmines and explosive remnants of war.
As the global security landscape is constantly evolving, DISEC continues working to address the multifaceted challenges that fall within the scope of the United Nations Charter and threaten the stability of the international community, and the powers that the committee possesses allow it to do so properly. Even though DISEC lacks the legally binding powers of the United Nations Security Council, it can issue recommendations, produce reports, coordinate with other UN bodies, and set the agenda for the entire UN system. Its influence stems greatly from the controversy and sensitivity of the topics discussed in disarmament committees, and it shapes the discourse that surrounds them.
HISTORY OF THE TOPIC
a. Emergence of Lethal Autonomous Weapons (LAWs)
Despite the lack of a clear, agreed-upon definition regarding LAWs, the term most typically refers to weapon systems that, once activated, can function without the need for human intervention1.
The first precursors of modern LAWs began emerging around the turn of the twenty-first century, with the IAI Harop loitering munition, developed by Israel Aerospace Industries, being invented around this time; even though not fully autonomous yet, it marked an important step. In the following years, LAWs started to reach new heights in technology, and to raise new fears in the population, with the drone strikes in Northern Pakistan and throughout the Middle East that were previously mentioned.
As of now, the use of LAWs during armed conflicts is no longer just a matter for the international community; the ethical and moral concerns that come with it are accessible to the whole population, although seldom discussed objectively and without bias. Nowadays the discourse is very much alive and present, especially because the State with the most advances in LAWs and AI in warfare is involved in a conflict that has a high probability of escalating into a global dispute; thus, now is the time to properly discuss the issue, analyzing it from diverse perspectives.
b. Usage of Artificial Intelligence in Warfare
The incorporation of artificial intelligence into warfare is a recent phenomenon. Aside from its
possible usage in autonomous weapons, machine learning systems are also beginning to be utilized
in warfare, particularly in cyber and information operations, and within military decision support
systems.
These latter two usages have gained less international attention, as their impacts are not as evident
and drastic as those of LAWs. However, AI can play a fundamental role in collecting, analyzing,
and combining data to then generate recommendations for military operations, or even predictions
about possible scenarios based on an assessment of patterns of behavior, and the identification of
people or objects within a territory.
On the one hand, utilizing AI to analyze drone footage or other intelligence streams can provide better information to military services and support human decision-making processes, perhaps facilitating compliance with international humanitarian law and minimizing the vulnerability of civilians. On the other hand, incorporating AI into military intelligence can be used to exploit the vulnerabilities of an opponent, as AI systems can, through machine learning, serve this exact purpose.
With the emergence of AI, a new form of warfare has also emerged: information warfare.
Information warfare has been part of most conflicts throughout history, yet AI systems have
enabled the production of fictitious content that can be mistaken for genuine information. Beyond identifying vulnerabilities within information and cyber systems, or analyzing field footage from conflict zones, AI can spread misinformation and disseminate certain narratives to an extent that we have not yet seen.
In the present day, however, this remains a largely latent threat. Artificial intelligence is currently aiding militaries in processing information faster and more efficiently. In Ukraine, for instance, it is being utilized to observe satellite images and drone footage to analyze what is occurring on the battlefield, informing human decisions, helping ensure that operations do not incur civilian casualties, and causing the least possible amount of damage.
CURRENT STATE OF AFFAIRS
a. Modern Forms of LAWs and AI in Warfare
With time, all ever-evolving technologies eventually improve until they reach their peak. We have long been hearing from different news outlets and forms of media that LAWs and AI in warfare belong to a distant future, but as of 2024, both of these systems are already present in the context of military affairs.
Some of the current forms of LAWs and AI are autonomous drones, AUVs (autonomous underwater vehicles), UGVs (unmanned ground vehicles), loitering munitions, algorithm-based missile systems, and AI technologies for data processing and strategic decision-making. These are just a few; varieties of every category of LAWs or AI exist in different parts of the world, but overall, these kinds of weapons are a far cry from the old, nearly primitive submarines of World War II.
With regards to artificial intelligence, systems such as algorithms can be used to minimize the risk of civilian casualties. Machine learning systems, facial recognition technologies, and object identification capacities can help these autonomous systems target only those involved in the conflict, along with military equipment, while evaluating the risk of civilian casualties in an area. Additionally, they can help military personnel measure risk more efficiently, for the same purpose, reducing the likelihood of casualties and enabling a safer form of confrontation.
However, the problems with the usage of AI in warfare systems are mainly two. Firstly, just as a device can be trained to identify military personnel, it can also be trained to target any specific group. Discriminate targeting is a double-edged sword, which can both protect and harm specific groups.
Currently, LAWs are used for offensive operations (the type of operations mostly covered by the media), such as targeted strikes; defensive operations, for example, the suppression of enemy offenses and force field operations; and force protection functions, for instance, sensors and cameras to rule out any security threats.
The most significant recent development has been the widespread use of generative AI and the improvement of machine learning systems, which can interact with real life, as well as with video, photo, and audio inputs, and conduct human-like analysis. Its current usage is mainly dedicated to enhancing warfare systems, introducing an intelligent approach and more independence to these systems, to reduce the dependence on human operation.
CASE STUDIES
a. Case Study 1: The Phalanx CIWS
The Phalanx CIWS is a clear example of the advantages that artificial intelligence can bring to warfare technology. Developed in the United States, it is a rapid-fire, computer-controlled, radar-guided gun system utilized to defeat threats both on land and at sea.
Utilized primarily by navies, this system has the capacity to shoot down anti-ship missiles, and it is installed on all US Navy surface combatant ships, as well as on those of 24 nations allied to the United States. On land, the system is used to identify and counter rockets, artillery, and mortars.
The main advantage of the Phalanx CIWS is that it combines multiple systems into a single one, being
equipped with advanced sensors, such as radar and electro-optical trackers to detect and track threats with
precision. This system integrates search, detection, threat evaluation, tracking, engagement, and targeting technologies.
Once the threat is detected and classified as hostile, the Phalanx CIWS autonomously targets the threat,
intercepting and destroying the object before it reaches its intended target. The entire process is extremely agile, occurring within a matter of seconds, which minimizes the risk of harm to the defending ship,
equipment, and crew.
The usage of autonomous lethal weapon systems like the Phalanx CIWS, which are intended to defend and protect rather than to harm (acting only when a detected threat is classified as hostile), can protect non-target zones during times of conflict, minimizing casualties and reinforcing compliance with humanitarian law. Overall, this system can enhance the safety of military and security operations and can be responsibly used to deter conflict and safeguard civilians and combatants.
The effectiveness of this system is truly outstanding, and it has become a crucial piece of equipment for the realization of military operations. For instance, in the case of US involvement in the Yemen conflict and its counter-operations, the Phalanx CIWS has protected American ships from Houthi missiles detected as near as one mile from the military ships. As observed in this case, the system can facilitate counter-terrorism operations and rescue missions, and safeguard the delivery of humanitarian aid - alleviating rather than exacerbating the effects of armed conflict, and functioning in a defensive capacity.
b. Case Study 2: Kargu-2 Autonomous Loitering Munition
The Kargu-2 Autonomous Loitering Munition is an unmanned aerial vehicle (UAV) form of LAW developed by the Turkish defense company STM Defense Technologies Engineering and Trade Inc. This UAV belongs to the category of loitering munitions, which are designed to loiter in a target area until a suitable target is detected. Once a target is detected and confirmed, the Kargu-2 typically operates in kamikaze style, crashing into the target to detonate an explosive load.
The system is equipped with devices similar to those of the Phalanx CIWS, including electro-optical sensors and infrared cameras, and artificial intelligence algorithms to detect, classify, and engage targets, as well as systems that hinder its detection and facilitate the transportation and detonation of explosives. The Kargu-2 can perform fully autonomous navigation through its flight control system and can be used against both mobile and static targets, meaning it requires no human control or assistance at any point of the operation process. Additionally, its portability and ease of deployment render it a particularly useful and concerning vehicle, as it can be launched from ground vehicles, ships, or multiple forms of aircraft. This means the system can be used in most forms of armed conflict, with very few limitations.
Due to its high capacities and offensive nature, the usage of this type of system raises both legal and ethical concerns. A particular case of concern dates back to March 2021, when a UN Panel of Experts on Libya reported the possible usage of this system by the interim Government of National Accord Affiliated Forces (GNA-AF) in Libya, with the GNA-AF utilizing the device with the support of the Turkish military to target the Haftar Affiliated Forces (the rival branch, for simplicity).
This case raised concerns about the capability of these devices to operate in compliance with humanitarian law regarding hostilities and the protection of civilians. We have selected the Kargu-2 as our case study because of its high level of precision. Within the category of autonomous loitering munitions, the Kargu-2 is one of the few devices equipped with a machine learning algorithm designed specifically to identify legitimate military targets, targeting only military personnel. Other systems, such as the Israeli Harpy loitering munition, have less precise algorithms, and can end up detonating in civilian areas and creating large collateral damage.
Even in the case of the Kargu-2, however, it is uncertain whether the system can truly, and in all cases, differentiate between civilians and combatants. In the context of a non-international armed conflict, as was the case in Libya in the mentioned report, organized armed groups might wear civilian clothing, while civilians might carry small weapons for personal defense in high-risk areas, which undermines the accuracy of object classification systems. In the case of lethal autonomous weapons utilized for offensive operations, the ethical concerns are much greater, as the provisions to ensure correct target detection and safeguard civilians can only go so far.
INTERNATIONAL RESPONSES AND PREVIOUS ACTIONS
a. Conventions, Legislation, and Forums
The most widely known convention regarding weapons (and warfare in general) is the set of Geneva Conventions, established in 1949. According to the Geneva Conventions, weapons that cause unnecessary harm and weapons that involve indiscriminate attacks are prohibited; those are merely some key aspects of the treaties that compose the Geneva Conventions, but they are the ones most relevant to this topic. Around these principles, a discourse was born with two clear sides on the issue: nations opposed to LAWs and AI, and nations that support looser regulations towards these. As the committee meant to regulate the proliferation of conventional weapons, DISEC has seen this topic on its agenda time and time again. On October 12th, 2023, the First Committee approved two resolutions, both of which included the pressing need for the topic to be discussed and for all points of view to be considered. Overall, the international community is preoccupied with the backlash and consequences that LAWs and AI in warfare might bring to the already shaken global security scene.
Perspectives on the issue differ across the globe: the most prominent military powers, like the United States of America, the Russian Federation, Israel, and China, lean away from a radical prohibition of all LAWs, while most neutral countries and bodies agree on prohibiting them. Even the current Secretary General of the United Nations has been insisting on a ban on LAWs since 2018. Groups like the European Parliament call for a complete prohibition of autonomous weapons and AI in warfare, but the U.S. Department of Defense disagrees with a complete ban on the mentioned military technologies, asserting that norms and regulations are necessary to ensure proper human judgment in operations that involve AI and LAWs, but that the development of these is not inherently immoral.
However, there is no single clear piece of legislation as of May 2024, and most organizations (including the United Nations and the First Committee) have been calling for binding legislation that would, for once, put an end to all the ethical dilemmas and loopholes that military powers keep taking advantage of.
Conventions and forums have been held in Geneva, at UN headquarters, in parliaments, and at NGO events, yet they have never truly reached the point of negotiation between the nations that abstain from voting for, or vote against, resolutions to ban LAWs and AI in warfare, and the nations that consider LAWs and AI to be where the line is drawn between the sovereignty of military advances and violations of humanitarian law. All in all, this simply demonstrates how important international dialogue and cooperation are to building solutions to these pressing problems.
b. Collective Operations
Some examples of operations conducted by different organizations are the NATO Artificial Intelligence Strategy, which aimed at protecting against the misuse of AI (including in warfare), and the engagement of the International Committee of the Red Cross in assessing the humanitarian impact of LAWs (with the Red Crescent Movement), through its effort to inform the general public on the topic with several research papers and articles, and its spaces for collaboration with different stakeholder representatives to share experiences and encourage further discussion on the benefits and downsides of military technologies. Other non-profit organizations, such as Human Rights Watch, have also contributed to this collective effort.
c. Sanctions and Retaliatory Measures
Sanctions can be imposed in the interest of national security, and there are several kinds of sanctions the United Nations can impose: diplomatic sanctions, travel bans, freezing of assets, commodity interventions, and financial sanctions. For example, the Islamic Republic of Iran has been subject to various sanctions from 1979 up to the present day because of its support for terrorist groups, its violations of the Nuclear Non-Proliferation Treaty, and accounts of human rights violations. A sanction could be issued against a country on account of misusing AI or LAWs in warfare, yet no true legislation or international law exists on the subject matter, so the only retaliatory measures that these military powers have received are the rising political tensions in the international negotiation and discussion scene.
POTENTIAL BLOCS & POSITIONS IN COMMITTEE
Each category may share commonalities in technical indicators, but there may be differences in policy that delegates should clearly communicate at the beginning of the conference. When articulating their country's policy and crafting solutions, delegates should take into account their respective bloc's general position.
a. High Military Capability Nations (HMCNs): include the countries that have, to date, deployed LAWs (China, Israel, South Korea, Russia, the United Kingdom, and the United States), as well as countries close to obtaining them. They are characterized by abundant defense spending, existing projects in the integration of AI and ML into their forces, and a general reluctance to outline specific policy on autonomous weapons. Because of the advantages that AI gives to military functions, as well as a lack of moral clarity surrounding certain applications of the technology, HMCNs are more likely to seek international consensus on very baseline principles that create some guidelines for future cases but leave regulation mostly interpretable in individual cases. Additionally, High Military Capability Nations are likely to be in joint development projects with other High Military Capability Nations or Medium Military Capability Nations.
b. Medium Military Capability Nations (MMCNs): although often engaged in joint
development contracts with HMCNs to access their military expertise, export their
domestically produced weapons, and strengthen regional cooperation against external
aggressors, they are differentiated from HMCNs by their decreased level of defense
spending, consistent and demonstrated interest in maintaining or incrementally improving
military capabilities, and mixed policies on LAWS regulation. MMCNs have a less marked
common position; they can either be in favor of a complete and total ban, or they may be
more inclined to favor a vaguer international standard by following the footsteps of
HMCNs. Therefore, delegates representing MMCNs are encouraged to deeply research their respective policies in order to better grasp their inclinations.
c. Low Military Capability Nations (LMCNs): are generally characterized by a vehement
opposition to the development of lethal autonomous weapons resulting from their relatively
low defense budgets accompanied by little or no existing artificial intelligence
infrastructure. Because these countries lack the financial and human capital, and regional
relationships to build strong AI infrastructure, LMCNs are more likely to be in favor of a
ban on LAWS. This is because they generally would like to slow down the pace of
technological advancements in certain unattainable areas, and they have less influence over
international regulations on weapons development. Alone, they do not have high leverage or bargaining power; however, groups of LMCNs, which often happen to be less developed nations, are prone to unite under a single call to ban the development of LAWS, which yields them better results than individualized approaches.
POSSIBLE SOLUTIONS
Regardless of the level of laxity or strictness that delegates may advocate, all nations should be interested in developing clear common ground on guidelines for the ethics behind the development of Lethal Autonomous Weapons (whether that means a ban or any sort of treaty), as the development of such accords represents the core of our debate, given the heavy lack of international regulation of LAWs. For example, the United Nations Secretary-General has expressed the need for a legally binding document that all states should follow and that prohibits all forms of LAWs by 2026, according to the 2023 New Agenda for Peace.
Although such a total ban may seem unattainable when one considers that binding decision-making power rests with only one UN body, the Security Council, it is still possible, and will be deeply appreciated by the dais, for delegates to instrument diplomacy and inter-bloc dialogue to agree on broader, more widely acceptable common ground.
QARMAS
1. What position should the UN adopt with regards to LAWs regulation? Should we aim for
a total ban, or for a minimum common regulation? If the latter, how strict should it be?
2. How can the UN create legislation that is continuously applicable to emerging cases,
especially in light of the pace of technological development?
3. How can the practical details of carrying out the resolution be tackled? For example, if a
weapon is deemed to be in violation of established regulations, how should we tackle
non-compliance from parties involved in creating/using these newly prohibited weapons?
4. Should the gap between high and low military capability countries be bridged to enhance
the safety of LMCNs? If so, how?
5. Should the regulation for LAWs be different in the cases of defensive and offensive
systems?
BIBLIOGRAPHY
6. Jones, T. (2021, November 22). Real-Life Technologies that Prove Autonomous Weapons are
Already Here. Future of Life Institute. https://futureoflife.org/aws/real-life-technologies-that-
prove-autonomous-weapons-are-already-here/
7. 11 Principles on Lethal Autonomous Weapons Systems (LAWS). France Diplomacy, Ministry for
Europe and Foreign Affairs. https://www.diplomatie.gouv.fr/en/french-foreign-policy/france-and-
the-united-nations/multilateralism-a-principle-of-action-for-france/alliance-for-
multilateralism/article/11-principles-on-lethal-autonomous-weapons-systems-laws
8. Minor. (2023, May 2). Laws for LAWS: Towards a treaty to regulate lethal autonomous weapons.
https://reliefweb.int/report/world/laws-laws-towards-treaty-regulate-lethal-autonomous-weapons-
february-2023
9. Military Applications of AI in 2024- Sentient Digital, Inc. (2024, March 15). Sentient Digital, Inc.
https://sdi.ai/blog/the-most-useful-military-applications-of-ai/
10. Autonomous Ground Vehicles - AGVs | Autonomous Robotics. (2023, October 20). Unmanned
Systems Technology. https://www.unmannedsystemstechnology.com/expo/autonomous-ground-
vehicles/
11. Israel using Gaza to test weapons powered by AI software in its air, naval systems.
https://www.aa.com.tr/en/middle-east/israel-using-gaza-to-test-weapons-powered-by-ai-software-
in-its-air-naval-
systems/3141309#:~:text=In%20Sept.,as%20a%20%E2%80%9Csuicide%20drone.%E2%80%9
D
12. UVision. (2024, March 14). Loitering Munitions. https://uvisionuav.com/loitering-
munitions/
13. Autonomous Weapon Systems | Law and the Future of War - Law School - University of
Queensland. https://law.uq.edu.au/research/future-war/autonomous-weapon-systems
14. Shaw, M. (2024, April 19). Geneva Conventions | International Humanitarian Law, Protections &
History. Encyclopedia Britannica. https://www.britannica.com/event/Geneva-Conventions
21. Wareham, M. (2023, March 28). Stopping Killer Robots. Human Rights Watch.
https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-
autonomous-weapons-and
22. U.S. Endorses Responsible AI Measures for Global Militaries. U.S. Department of Defense.
https://www.defense.gov/News/News-Stories/Article/Article/3597093/us-endorses-responsible-
ai-measures-for-global-
militaries/#:~:text=These%20guidelines%20include%20ensuring%20that,applications%20underg
o%20senior%2Dlevel%20review.
23. The Prohibition of Lethal Autonomous Weapon Systems. (2023, November 21). European Greens.
https://europeangreens.eu/resolutions/prohibitionlethal-autonomous-weapon-
systems/#:~:text=Its%20member%20parties%20to%20take,international%20law%20and%20ethi
cal%20principles.
24. International Security and Lethal Autonomous Weapons. EEAS.
https://www.eeas.europa.eu/eeas/international-security-and-lethal-autonomous-weapons_en
25. Gill, A. (2018, April 6). A UN forum in Geneva gets to grips with concerns about autonomous
weapons. https://www.linkedin.com/pulse/un-forum-geneva-gets-grips-concerns-autonomous-
weapons-amandeep-gill
26. McElroy, D. (2024, February 12). Cheap drones are killing the UN sanctions regime. The National.
https://www.thenationalnews.com/opinion/comment/2024/02/12/cheap-drones-are-killing-the-un-
sanctions-regime/
27. Lewis, J. The Case for Regulating Fully Autonomous Weapons.
https://www.yalelawjournal.org/comment/the-case-for-regulating-fully-autonomous-weapons
28. Taddeo, & Blanchard. (2022, August 23). A Comparative Analysis of the Definitions of
Autonomous Weapons Systems. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9399191/
29. Congressional Research Service. (2024, February 1). Defense Primer: U.S. Policy on Lethal
Autonomous Weapon Systems. Defense Primer: U.S. Policy on Lethal Autonomous Weapon
Systems
30. International Committee of the Red Cross. (2023, October 6). What you need to know about
Artificial Intelligence in Armed Conflict. https://www.icrc.org/en/document/what-you-need-
know-about-artificial-intelligence-armed-conflict
31. PBS News. (2023, July 9). How militaries are using artificial intelligence on and off the
battlefield. https://www.pbs.org/newshour/show/how-militaries-are-using-artificial-intelligence-
on-and-off-the-battlefield
32. Raytheon. Phalanx Weapon System. https://www.rtx.com/raytheon/what-we-do/sea/phalanx-
close-in-weapon-system
33. Lieber Institute. (2021, June 10). The Kargu-2 Autonomous Attack Drone: Legal & Ethical
Dimensions. https://lieber.westpoint.edu/kargu-2-autonomous-attack-drone-legal-ethical/
34. STM. Kargu: Combat Proven Rotary Wing Loitering Munition System.
https://www.stm.com.tr/en/kargu-autonomous-tactical-multi-rotor-attack-uav
35. Lendon, B. (2024, February 2). A Houthi missile was just seconds from hitting a US warship.
CNN. https://edition.cnn.com/2024/02/02/middleeast/phalanx-gun-last-line-of-defense-us-navy-intl-
hnk-ml/index.html
36. International Committee of the Red Cross. (2023, October 24). Algorithms of war: The use of
artificial intelligence in decision making in armed conflict. https://blogs.icrc.org/law-and-
policy/2023/10/24/algorithms-of-war-use-of-artificial-intelligence-decision-making-armed-
conflict/
37. Bellingham, J. (2009). Autonomous Underwater Vehicle.
https://www.sciencedirect.com/topics/earth-and-planetary-sciences/autonomous-underwater-
vehicle
38. Sentient Digital Inc. The Most Useful Military Applications of AI in 2024 and beyond.
https://sdi.ai/blog/the-most-useful-military-applications-of-ai/