
## Introduction

Israel's utilization of artificial intelligence (AI) in warfare has become a pivotal aspect of its military
strategy, particularly in the context of ongoing conflicts with adversaries, including those aligned with
the so-called "Axis of Resistance." This term generally refers to a coalition of groups and states
opposed to Israeli interests, including Hezbollah in Lebanon, various Palestinian factions, and Iran.
This research paper explores the integration of AI technologies in Israel's military operations,
examining its implications for warfare dynamics, strategic advantages, and ethical considerations.

## The Evolution of AI in Warfare

### Historical Context

The integration of AI into military operations is not a novel concept; however, its application has
accelerated significantly in recent years. Israel has historically been at the forefront of military
innovation, driven by necessity due to its geopolitical environment. The adoption of AI technologies
can be traced back to earlier military doctrines that emphasized network-centric warfare (NCW),
which relies on advanced information technologies to enhance situational awareness and
operational effectiveness[1][2].

### Current Developments

Recent advancements in AI methodologies have transformed the nature of warfare. These developments include automated decision-making systems, enhanced surveillance capabilities, and
precision targeting algorithms. For instance, Israel has implemented AI in its Iron Dome missile
defense system, which utilizes sophisticated algorithms to identify and intercept incoming threats
effectively[3]. Furthermore, AI is employed in drone operations, enabling real-time data analysis and
autonomous flight capabilities.

## Israel's Conflict with the Axis of Resistance

### Overview of the Axis of Resistance

The Axis of Resistance comprises Iran, Hezbollah, Hamas, and other militant groups that oppose
Israeli policies and actions. This coalition presents a multifaceted threat to Israel, employing
asymmetric warfare tactics that challenge conventional military responses. The integration of AI into
Israel's defense strategy is crucial for countering these threats effectively.

### AI Applications Against Adversaries

1. **Intelligence Gathering**: AI technologies are instrumental in enhancing intelligence capabilities. Israel employs machine learning algorithms to analyze vast amounts of data from various sources, including social media and communications intercepts. This capability enables the identification of potential threats and the forecasting of enemy movements (see the sketch after this list).

2. **Cyber Warfare**: Cyber operations have become a critical front in modern warfare. Israel's
cyber capabilities are bolstered by AI-driven tools that facilitate offensive and defensive operations
against adversaries' digital infrastructures. The use of AI allows for rapid adaptation to evolving cyber
threats[4].

3. **Combat Operations**: In combat scenarios, AI assists in optimizing resource allocation and mission planning. For example, during conflicts with Hamas, Israel has utilized AI to coordinate airstrikes more effectively by predicting civilian movement patterns and minimizing collateral damage[8].
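
As a heavily simplified illustration of the machine-learning analysis described in item 1 above, the sketch below trains a generic text classifier and scores new items for analyst review. The corpus, labels, and messages are invented placeholders, and nothing here depicts any actual system:

```python
# Minimal, illustrative sketch only: a generic text-classification
# pipeline of the kind described in item 1. All data and labels are
# invented placeholders (1 = flag for analyst review).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["routine weather update", "meet at the usual place at dawn",
         "shipment arrives tonight", "family dinner this weekend"]
labels = [0, 1, 1, 0]

# TF-IDF features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# New items are scored, not auto-actioned: the output is a ranked
# queue for human analysts, matching the decision-support framing.
for msg in ["package moved to the crossing", "happy birthday grandma"]:
    score = model.predict_proba([msg])[0, 1]
    print(f"{score:.2f}  {msg}")
```

The point of the sketch is the division of labor: the model only orders the queue, while interpretation remains with a human analyst.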

## Statements from Israeli Officials

Israeli officials have publicly acknowledged the significance of AI in enhancing national security. For
instance, former Prime Minister Naftali Bennett stated that "AI will play a central role in our military
strategy," emphasizing its potential to provide Israel with a decisive edge over its adversaries[6].
Additionally, IDF Chief of Staff Aviv Kochavi highlighted the importance of integrating technology into
military operations, noting that "the future battlefield will be shaped by artificial intelligence" [7].

## Ethical Considerations

The deployment of AI in warfare raises profound ethical questions regarding accountability and
decision-making. Critics argue that reliance on autonomous systems could lead to unintended
consequences and civilian casualties. Professor Stuart Russell has voiced concerns about the moral
implications of using AI in combat scenarios, advocating for strict regulations on autonomous
weapon systems[2].

### Balancing Innovation with Ethics


Israel faces the challenge of balancing technological innovation with ethical considerations. The IDF
has established guidelines for the use of AI in military operations to ensure compliance with
international humanitarian law. However, as technology continues to evolve rapidly, maintaining this
balance will require ongoing dialogue among military leaders, policymakers, and ethicists.

## Future Implications

### Strategic Advantages

The continued integration of AI into Israel's military framework is likely to yield significant strategic
advantages. Enhanced situational awareness through predictive analytics can lead to more informed
decision-making processes during conflicts. Moreover, AI's ability to process large datasets can
improve operational efficiency and effectiveness.

### Potential Risks

Despite these advantages, there are inherent risks associated with the militarization of AI
technologies. The potential for escalation in conflicts due to miscalculations or system failures poses
a significant concern. Furthermore, as adversaries also adopt similar technologies, there is a risk of
an arms race in autonomous weapon systems.

## Conclusion

Israel's use of artificial intelligence in warfare represents a transformative shift in military strategy
amidst complex geopolitical challenges posed by the Axis of Resistance. While AI offers substantial
benefits in terms of operational efficiency and strategic advantage, it also necessitates careful
consideration of ethical implications and potential risks associated with its deployment. As Israel
continues to navigate these challenges, ongoing discourse among military leaders, policymakers, and
ethicists will be essential in shaping the future landscape of warfare.

This research paper has explored key aspects surrounding Israel's integration of AI into its military
operations while highlighting significant statements from officials and addressing ethical
considerations that accompany such advancements.

Citations:

[1] [Link]
[2] [Link]
[3] [Link]
[4] [Link]
[5] [Link]
[6] [Link]
[7] [Link]
[8] [Link]

Introduction

Israel's integration of artificial intelligence (AI) into its military operations represents a significant
evolution in warfare strategies, particularly in the context of its ongoing conflicts with the Axis of
Resistance, which includes Iran, Hezbollah, and various Palestinian factions. This paper delves into
the specific AI mechanisms employed by Israel, its action plans and policies regarding AI in warfare,
and the criticisms surrounding collateral damage associated with these technologies. By examining
these elements in detail, we can better understand the implications of AI on Israel's military
effectiveness and ethical responsibilities.

The Mechanisms of AI in Israeli Warfare

AI Technologies Employed

1. Machine Learning Algorithms: Israel employs machine learning (ML) algorithms for various
military applications, including predictive analytics for threat assessment and operational
planning. These algorithms analyze vast datasets from intelligence sources, enabling real-
time decision-making.

2. Autonomous Systems: The Israel Defense Forces (IDF) utilize autonomous drones and ground vehicles equipped with AI capabilities. For instance, the Harop drone acts as a loitering munition that can autonomously identify and strike targets based on pre-programmed criteria.

3. Computer Vision: AI-driven computer vision systems enhance surveillance capabilities. These systems can process images from drones and satellites to identify potential threats or targets with high accuracy. The IDF's use of computer vision is particularly evident in urban warfare scenarios where distinguishing between combatants and civilians is critical (a generic sketch using public tooling follows this list).

4. Natural Language Processing (NLP): NLP technologies are used to analyze communications
intercepted from adversaries. By processing large volumes of text data, these systems can
identify patterns and predict enemy actions.
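
To make the computer-vision item concrete, here is a minimal sketch using public, off-the-shelf tooling (torchvision's pretrained Faster R-CNN detector). The file name frame.jpg and the 0.8 confidence threshold are placeholder assumptions, and the snippet reflects generic practice rather than any actual IDF system:

```python
# Generic object-detection inference with public tooling; purely
# illustrative, not a depiction of any military system.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

img = read_image("frame.jpg")          # placeholder image path
batch = [weights.transforms()(img)]    # model-matched preprocessing
with torch.no_grad():
    det = model(batch)[0]              # boxes, labels, scores

# Gate on confidence; anything below the threshold goes to a human.
keep = det["scores"] > 0.8
print(det["labels"][keep], det["scores"][keep])
```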

Specific Applications in Military Operations

- Iron Dome System: This missile defense system employs AI algorithms to calculate trajectories and intercept incoming threats effectively. The system can distinguish between projectiles that pose a risk to populated areas and those that do not, thereby optimizing interceptive responses (see the sketch after this list).
- Cyber Warfare: Israel has developed sophisticated cyber capabilities that leverage AI to conduct offensive operations against adversaries' digital infrastructures. Cyber units within the IDF utilize AI for threat detection, intrusion prevention, and automated response strategies.
- Targeting Systems: Israel's targeting systems incorporate AI to enhance precision strikes while minimizing collateral damage. These systems analyze real-time data to assess target viability and civilian presence before executing strikes.
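
The trajectory logic in the Iron Dome bullet can be illustrated with a deliberately simplified sketch. The real system's algorithms are not public, so the drag-free ballistics, coordinates, and zone table below are illustrative assumptions only:

```python
# Toy interception-decision sketch: extrapolate a tracked projectile's
# impact point (no drag, flat earth) and fire only if it threatens a
# defended zone. All numbers are invented; real systems are far more
# complex and their logic is not public.
import math

G = 9.81  # gravitational acceleration, m/s^2

def predicted_impact_x(x0, z0, vx, vz):
    """Solve z0 + vz*t - 0.5*G*t^2 = 0 for the impact time, then
    return the horizontal coordinate where the projectile lands."""
    t_impact = (vz + math.sqrt(vz**2 + 2 * G * z0)) / G
    return x0 + vx * t_impact

def should_intercept(impact_x, defended_zones):
    """Intercept only when the predicted impact point falls inside a
    defended zone; otherwise let the projectile fall in open ground."""
    return any(lo <= impact_x <= hi for lo, hi in defended_zones)

zones = [(9_000, 12_000)]  # hypothetical populated strip, in meters
x_hit = predicted_impact_x(x0=0, z0=1_500, vx=250, vz=80)
print(f"impact at {x_hit:.0f} m, intercept: {should_intercept(x_hit, zones)}")
```

This is the essence of selective interception: spend an interceptor only when the predicted impact point matters.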

Israel's Action Plans and Policies Regarding AI

National Defense Strategy

Israel's national defense strategy emphasizes technological superiority as a cornerstone of its military
doctrine. The government has invested significantly in research and development of AI technologies
through partnerships with private tech firms and academic institutions.

1. Defense Innovation Authority (DIA): Established to promote innovation within the defense
sector, the DIA plays a crucial role in fostering collaboration between military entities and
technology startups focused on AI applications.

2. AI National Program: In 2020, Israel launched an initiative aimed at enhancing its capabilities
in AI across various sectors, including defense. This program focuses on integrating AI into
military training, operational planning, and battlefield management.

3. International Collaboration: Israel actively collaborates with allied nations, particularly the
United States, on developing advanced military technologies that incorporate AI. Joint
exercises often involve testing new AI-driven systems to improve interoperability.

Ethical Guidelines and Oversight

In response to concerns over collateral damage arising from AI use in warfare, Israel has established
ethical guidelines governing the deployment of autonomous weapons systems:

- Rules of Engagement: The IDF adheres to strict rules of engagement that require verification of targets before engagement to minimize civilian casualties.

- Accountability Measures: There is an emphasis on human oversight in decision-making processes involving lethal force. Senior military officials have stated that while AI can assist in targeting decisions, final authorization remains with human operators.

Addressing Concerns Over Collateral Damage

Criticism of Collateral Damage

Israel faces significant criticism regarding its military operations' impact on civilian populations,
particularly during conflicts in Gaza where advanced technologies have been deployed:

- Human rights organizations have accused Israel of using disproportionate force that leads to unnecessary civilian casualties.

- Reports indicate that even with advanced targeting systems, miscalculations can occur due to reliance on automated systems without adequate human oversight.

Israeli Defense Responses


In response to these criticisms, Israeli officials have articulated several key points:

1. Operational Necessity: Israeli leaders argue that the use of advanced technologies is
essential for maintaining national security against persistent threats from militant groups
that employ asymmetric warfare tactics.

2. Minimization Efforts: The IDF has implemented extensive measures aimed at minimizing
collateral damage, such as pre-strike warnings issued via leaflets or phone calls to civilians in
targeted areas.

3. Transparency Initiatives: In an effort to address international concerns, Israel has initiated transparency measures regarding its military operations. This includes releasing data on strikes conducted during conflicts and their outcomes.

4. Technological Accountability: Israeli officials emphasize that while technology enhances operational effectiveness, it does not absolve commanders from responsibility for their decisions during combat operations.

Conclusion

Israel's use of artificial intelligence in warfare represents a complex interplay between technological
advancement and ethical considerations amid ongoing conflict with the Axis of Resistance. While AI
mechanisms enhance operational efficiency and strategic capabilities, they also raise significant
concerns regarding collateral damage and accountability. As Israel continues to navigate these
challenges, maintaining a balance between technological innovation and ethical responsibility will be
crucial for its military strategy moving forward. The ongoing dialogue surrounding these issues will
shape not only Israel's future military engagements but also broader discussions about the role of AI
in modern warfare globally.

On the one hand, it can be said that the introduction of AI-based tools to the military sphere can
have value, in terms of improving existing capabilities. On the other hand, the introduction of novel
tools that are not regulated by international law raises considerable legal and moral questions and further exacerbates the complexities of warfare – ‘the province of uncertainty’. In an attempt to
contribute to the growing literature evaluating these two sides of the spectrum, and all that lies in
between them, this blog will look into the experience in Israel, as a test case, to reflect on the proper
way to move forward.

As can be learned from the experience of Israel, the global and widespread integration of AI, accelerated by generative AI tools, has reached the military domain. In particular, the IDF employs AI
applications in its: (1) Proactive Forecasting, Threat Alert, and Defensive Systems; and (2) Intelligence
Analysis, Targeting, and Munitions. This trend increased during the Israel-Hamas war of 2023-2024,
inviting the consideration of the role of international law in this phenomenon.

Proactive Forecasting, Threat Alert, and Defensive Systems

AI-based tools can detect, alert, and occasionally preempt catastrophic scenarios and contribute to
effective crisis management. As such, just like NATO, the IDF harnesses AI technologies to improve
disaster response (e.g., through analysis of aerial images to identify risks and victims). One notable
system in use is the Alchemist System, which seems to possess both defensive and offensive
capabilities. This system integrates data onto a unified platform and possesses the capacity to
identify targets and promptly inform combatants of threats, like suspicious movements. The system was already deployed in 2021, during Operation “Guardian of the Walls.”

Furthermore, the Iron Dome is an Israeli missile defense system known for its life-saving capabilities
in safeguarding critical infrastructure against the threat of rockets launched into the territory of
Israel. In the 2023-2024 Israel-Hamas war, this system kept casualties low notwithstanding rocket launches from Gaza, Lebanon and other areas (like Syria and even Yemen), and in the face of a wide range of threats, like drones and other small, low-flying objects.

Another defense system known as “Edge 360” has been developed by Axon Vision. This AI-based
system, installed within armored vehicles currently operational in Gaza, detects potential threats
from every angle and promptly alerts the vehicle operator. Finally, the IDF is also using AI in the
service of border control. The October 7 attack raised several red flags concerning this system, and
highlighted the fact that technological tools can, at the end of the day, only complement human
capacity.

Intelligence Analysis, Targeting, and Munitions

Integrating AI-based tools to analyze high volumes of data is essential to deal with the overwhelming influx of data that characterizes the modern battlefield. One of the decision-support systems (DSS) the IDF has used is the “Fire Factory,” which can analyze extensive datasets, including historical data about previously authorized strike targets, enabling the calculation of required ammunition quantities, the proposal of optimal timelines, and the prioritization and allocation of targets. Operationally, it is an amalgam of phase 2 (target development) and phase 3 (capabilities analysis) of the targeting cycle.
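
For illustration only, the prioritization-and-allocation step attributed to such decision-support systems can be sketched as a generic weighted-scoring queue. The weights, fields, and site names below are invented, and the output is framed as a ranked list for human review rather than an automated action:

```python
# Generic ranked-queue sketch with invented weights; purely
# illustrative of weighted prioritization, not any real system.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Candidate:
    neg_priority: float              # negated so heapq pops highest first
    name: str = field(compare=False)

def score(value, urgency, collateral_risk):
    """Toy weighted score: value and urgency raise priority;
    estimated collateral risk lowers it."""
    return 0.5 * value + 0.3 * urgency - 0.2 * collateral_risk

queue: list[Candidate] = []
for name, v, u, r in [("site A", 0.9, 0.4, 0.7),
                      ("site B", 0.6, 0.9, 0.1),
                      ("site C", 0.8, 0.2, 0.9)]:
    heapq.heappush(queue, Candidate(-score(v, u, r), name))

# The ranked list is handed to human reviewers, not acted on directly.
while queue:
    c = heapq.heappop(queue)
    print(f"{c.name}: priority {-c.neg_priority:.2f}")
```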

Another system that has stirred recent controversy is the Gospel, which helps the IDF military
intelligence division improve recommendations and identify key targets. Already in 2021, during
operation “Guardian of the Walls,” the system generated 200 military target options for strategic
engagement during the ongoing military operation. The system executes this process within seconds,
a task that would have previously required the labor of numerous analysts over several weeks.

A third notable system is the “Fire Weaver” system, a novel tool developed by a private company
– Rafael. This networked sensor-to-shooter system links intelligence-gathering sensors to field-
deployed weapons, facilitating target identification and engagement capabilities. The Fire Weaver
system focuses on processing data and selecting optimal shooters for different targets based on
factors such as location, line of sight, effectiveness, and available ammunition. It is aimed at
improving the capacity to work simultaneously with various players operating in concert, in order to
promote precision, minimize collateral damage, and mitigate the risk of friendly fire.
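
At its core, the shooter-selection step just described is an instance of the classic assignment problem from operations research. The sketch below, with entirely invented cost numbers, shows the generic technique; it is not a description of Fire Weaver's actual logic:

```python
# Generic assignment-problem sketch: pair each shooter with a target
# at minimal total cost. The cost matrix is invented; in the text's
# framing it would fold in range, line of sight, effectiveness, and
# available ammunition.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([
    [2.0, 9.0, 5.0],
    [7.0, 1.0, 8.0],
    [6.0, 4.0, 3.0],
])

rows, cols = linear_sum_assignment(cost)  # Hungarian-style optimum
for shooter, target in zip(rows, cols):
    print(f"shooter {shooter} -> target {target} (cost {cost[shooter, target]})")
```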

Finally, a recent report by +972 Magazine stated that the IDF deployed an AI system named “Lavender,” which allegedly played an important role, especially during the early stages of the 2023-2024 Israel-Hamas conflict. This system is designed to mark suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad as potential targets. The report suggested that human verification in this context was allegedly restricted to discerning the gender of the target, with an average duration of 20 seconds per target before an attack, and further stated that the system was mistaken in approximately 10% of cases. If accurate, this is alarming, and indeed in a recent post, Agenjo suggested that such a scenario raises concerns regarding potential violations of human dignity through depersonalizing targeted individuals and by circumventing human involvement in the targeting process.
Still, it should be noted that the process described in the +972 article is a very preliminary one in the chain of creating, and authorizing, a military target. This is because the decision made by an intelligence officer is later delivered to a target room, in which legal advisers, operational advisors, engineers, and more senior intelligence officers revise the suggested target before approving it (and, at times, reject it). The use of “Lavender” is hence limited to the intelligence-gathering phase, after which the suggested insight still needs to be verified in the target room by, inter alia, legal advisers, who will evaluate whether a target should be attacked based on considerations of distinction, proportionality and other applicable IHL rules.

Challenges Associated with AI on the Battlefield

The UN General Assembly has recently expressed its concerns regarding the emergence of new
technological applications in the military domain, particularly those associated with artificial
intelligence, which pose serious challenges “from humanitarian, legal, security, technological and
ethical perspective”. A pivotal concern arises regarding the appropriate level of human involvement required in decision-making processes (in/on/off the loop). Human involvement matters for three crucial purposes: improving accuracy and precision in decision-making, enhancing legitimacy, and ensuring accountability. This is especially important in the context of AI systems deployed for targeting, such as Lavender, the Gospel, Fire Weaver, and Fire Factory.

It seems that, as of today, there is no legal justification for a situation in which an AI system
autonomously targets individuals without human involvement at all, as the threshold placed by IHL is
that of the reasonable military commander (namely, a human commander evaluated by standards that do not apply to a computerized AI-based system). Accordingly, the ICRC noted that
preserving human control and judgment is essential. As per Israel’s approach, currently the IDF
commander is the one holding the ultimate decision-making authority when it comes to targeting.
Notwithstanding, there has been recent criticism regarding the extent and effectiveness of human
involvement in the target selection process by systems like the Gospel and Lavender (see for
example here and here).

In this context, it is worth noting the customary principle of precautions. This principle mandates the
positive obligation for those planning an attack ‘to do everything feasible to verify‘ the military
nature of individuals or objectives. This principle also entails the duty of constant care, which
requires that in the conduct of military operations, constant care shall be taken to spare civilians and
civilian objects.

Indeed, the rapid pace of generation of targets by an AI system, coupled with the commander’s
limited time for a comprehensive review, raises concern that this situation may fall short of the
obligation to exhaust all ‘feasible’ means to avert harm to civilians and may not align with the duty of
constant care. It was suggested in a recent blog published by Opinio Juris that if military personnel
are unable to “delve deep into the target”, it is difficult to see how these systems contribute to
compliance with the principle of precautions and the duty of constant care. The blog further claimed
that in general the speed and scale of production of targets by such AI systems coupled with the
complexity of data processing may make human judgment impossible or meaningless. However, it should be recalled that the IDF’s formal stance on this matter seems to address this concern. As noted, the IDF’s utilization of targeting AI systems like the Gospel and Lavender is confined to the intelligence-gathering phase, in the early stages of the “life cycle” of a target. Later stages include corroboration of, and oversight over, the intelligence-gathering and evaluation stages, including review by legal advisers, who verify not only the factual assertions made but also the appropriateness of an attack in terms of distinction, proportionality, precautions, and other relevant rules of international law. Indeed, the IDF clarified that the selection of a target for attack by the Gospel will undergo an additional, separate examination and approval by several other authorities (operational, legal, and intelligence), out of a desire to ensure meaningful human involvement in targeting decision-making processes.

Another concern relates to the explainability issue, or the “black-box” phenomenon. The inability of AI-based systems, an inherent limitation of the technology, to provide a clear and comprehensible explanation of their decision-making processes can hinder investigations of military incidents, and as such can undermine accountability and the ability to minimize the risk of recurrent mistakes. In this context, the IDF clarified that the Gospel system provides the intelligence researcher with accessible and understandable information upon which its recommendations are based, enabling independent human examination of the intelligence material. Another notable, related challenge is the phenomenon called “automation bias”: the tendency to over-rely on, or over-trust, AI output. While IDF commanders can choose to disregard recommendations from the Gospel, it is nevertheless challenging to avoid automation bias, especially during intense hostilities, which require decision-making at an accelerated pace and under constant pressure to act.
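
One concrete mitigation for both the explainability gap and automation bias is to attach per-feature contributions to every recommendation, so that the human reviewer has something specific to interrogate. The sketch below shows this for a simple linear model; the feature names, data, and labels are invented for illustration:

```python
# Illustrative explainability sketch: for a linear model,
# coefficient * feature value is an exact per-feature contribution
# to the decision score, which a reviewer can inspect and challenge.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["sensor_a_score", "sensor_b_score", "recency"]
X = np.array([[0.9, 0.1, 0.5], [0.2, 0.8, 0.1],
              [0.7, 0.6, 0.9], [0.1, 0.2, 0.3]])
y = np.array([1, 0, 1, 0])
clf = LogisticRegression().fit(X, y)

def explain(sample):
    """Print each feature's signed contribution, largest first."""
    contrib = clf.coef_[0] * sample
    for name, c in sorted(zip(feature_names, contrib),
                          key=lambda t: -abs(t[1])):
        print(f"{name:>15}: {c:+.2f}")
    print(f"{'bias':>15}: {clf.intercept_[0]:+.2f}")

explain(np.array([0.8, 0.3, 0.7]))
```

Surfacing the reasons, rather than only the recommendation, is one way to keep the human check from degenerating into rubber-stamping.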

The Way Forward – Review of New and Emerging Technologies is a Must

States are limited in their choice of weapons and means or methods of warfare. In order to verify
that new capacities are in line with international law, States are required under Article 36 of the First
Additional Protocol to the Geneva Conventions (AP I) to evaluate new weapons, means or methods
of warfare prior to their deployment in practice. Of relevance for our discussion, “Means of warfare”
is a broad term, extending to military equipment, systems, platforms, and other associated
appliances used to facilitate military operations. It seems that tools deployed for offensive actions, like the Gospel, Fire Factory, Fire Weaver, and Lavender, constitute new means of warfare that ought to be subject to a legal review under Article 36.

While Israel is not a party to AP I, and there is a discussion as to the customary status of Article 36, it
is important to recall that General Comment 36 of the Human Rights Committee took the approach
that ensuring the protection of the right to life invites prophylactic impact assessment measures,
including a legality review for new weapons and means of warfare. Such a review should be
conducted in three stages.

First, it is important to determine whether the use of a particular means of warfare is prohibited or restricted by a treaty or under customary international law. Regarding military AI tools, the State of Israel has not ratified a treaty specifically prohibiting the use of AI technology in general or in military applications, as none exists at present. Furthermore, it seems that there is currently no customary prohibition on the deployment of AI in military contexts (other than general principles of IHL, like distinction, and specific rules, like those safeguarding protected sites).

Second, there is a need to determine whether employment of the system might infringe on general
prohibitions under international law (like protection of the environment). AI tools deployed by the IDF, like the Gospel and Fire Factory, do not directly infringe such prohibitions, as they merely support decision-making processes.

Third, the State must consider the means of warfare in light of the ‘Martens Clause’, which
underscores the need to consider ‘the principles of humanity’ or ‘the dictates of public conscience’.
In its advisory opinion on the Legality of the Threat or Use of Nuclear Weapons, the International Court of Justice affirmed the Martens Clause’s importance as an “effective means of addressing the rapid evolution of military technology”.

We suggest prohibiting scenarios in which AI systems make autonomous decisions without any human input. This is required because the legal standard of behavior in IHL refers to a human decision maker, and also in light of developments in other fields, most notably human rights (where discussion of the right to a human decision maker is on the rise). In particular, we advocate insistence on meaningful human involvement in the utilization of AI systems by the IDF, which includes, among other measures, a separate approval by another authority in the chain of command, along with additional examinations in the target room by legal advisers and additional experts. This mechanism should apply in every case, without exception. Also, as discussed above, it is important to provide explanations to the people operating the systems, in order to avoid a “black-box” situation in which it is impossible to supervise the systems, properly question their suggestions (when needed), and mitigate concerns regarding possible bias.

Concluding Thoughts

There is room for prudence when deploying new military capabilities. First, it is critical to evaluate the legality of new technologies through impact assessment measures – for example, under Article 36 of AP I. Second, designers and operators must be conscious of inherent risks like the lack of explainability and biases. This does not mean that we suggest attributing criminal accountability to designers, as the ultimate decision maker is the military commander; rather, we believe that the better designers understand how the system’s limitations might impair its operation, the better they will be able to address these concerns upfront and, hopefully, alleviate them. Training of the system’s operators, and of those who rely on it, is also key, and it must include technical, ethical and legal aspects.

In a world that is becoming more divided in ideals and values, the possibility to find a common
ground on the road ahead will be pivotal not only for the battlefield of the future, but also for the
maintenance of international peace and security. We are living through a volatile time that includes quantum leaps in technology, and the need to rise above narrow interests and considerations is as important as it has ever been.

We believe that initiatives like REAIM and the “Political Declaration on the Responsible Military Use
of Artificial Intelligence and Autonomy” should lead the way ahead – to a path of cooperation, of
common ground, and of a future in which States will focus on the welfare of humanity rather than on
dystopian technological-driven violent clashes that will be undertaken without a proper legal
framework or benchmark to regulate them. In order to move toward the creation of a framework for the regulation of AI on the battlefield, there is a need to engage not only States but all affected stakeholders. The multi-stakeholder approach is rooted in the understanding that the
involvement of all relevant players in a meaningful and transparent way is required to achieve
progress, and ensure legitimacy of norms and institutions. We must recall that development of
technology is a complex sphere of operation, in which there is a division of roles and responsibilities
among different stakeholders, including States, academia, civil society and the private sector. Any
step ahead must be inclusive, transparent and allow for meaningful participation of all of those
stakeholders.

Israeli bombardments on Gaza are becoming more frequent, thanks to innovations in artificial
intelligence (AI) and a military that bends to the dictates of increasingly right-wing governments. The
army boasts that intelligence units can now pinpoint targets — a process that used to take years — in
just a month. Even as the death tolls across the occupied territories climb, visions of this
humanitarian crisis rarely puncture the Jewish-Israeli public sphere, fortified by military censors,
missile defense systems, and plain indifference. Instead, regional violence is parsed through the
redemptive parlance of technological innovation.

In the days after a ceasefire is agreed, generals make their media rounds to talk up innovations in
automation unveiled in the last assault. Swarms of killer drones directed by supercomputing
algorithms, which can shoot and kill with minimal human intervention, are celebrated the same way
Silicon Valley CEOs praise chatbots. As the world reckons with runaway developments in AI, each war
waged by Israel’s automated military arsenal in Gaza illustrates the human cost of these systems.

War has always been an occasion for militaries to trade in weaponry. But as Israel’s asymmetrical
bombardments on Gaza have become annual events, the army has started branding itself as
something of a pioneer, exploring the uncharted territory of automated warfare. The IDF proclaimed
it waged the “world’s first AI war” in 2021 — the 11-day offensive on Gaza codenamed “Operation Guardian of the Walls” which, according to B’Tselem, killed 261 and injured 2,200 Palestinians. Drones wiped out entire families, damaged schools and medical clinics, and brought down high-rise buildings that were home to families, businesses, and media offices far from any military targets.

As 72,000 Palestinians were displaced and thousands more mourned the dead, Israeli
generals boasted that they had revolutionized warfare. “AI was a force multiplier for the IDF,” officials
bragged, detailing how robotic drone swarms amassed surveillance data, pinpointed targets, and
dropped bombs with minimal human intervention.

The pattern repeated a little over a year later. In August 2022, the IDF launched a five-day offensive
on Gaza named “Operation Breaking Dawn,” which took the lives of 49 Palestinians, including 17
civilians. Missiles exploded in the streets of the Jabalia refugee camp, killing seven civilians driven
from their homes because of power outages. Drones also struck a nearby cemetery, taking the lives
of children playing in a coveted strip of open space.

In the wake of the destruction, the army launched another manicured PR campaign, breaking a
decades-long ban on openly discussing the use of AI-powered drones in military operations. Brig.
Gen. Omri Dor, commander of the Palmachim airbase, told the Times of Israel that drones equipped
with AI gave the army “surgical precision” in the assault, allowing troops to minimize “collateral
damage or harm to another person.”

Like all marketing, however, such announcements are an exercise in self-aggrandizement. For
starters, Israel did not wage the world’s “first AI war” in 2021. Drones, missile defense systems, and
cyberwarfare have been used for decades worldwide, and the United States, rather than the Israeli
army, is often hailed as the real pioneer.

It was therefore a problem of market saturation that motivated Israel’s army to turn assaults on Gaza
into coordinated advertising campaigns. In 2021, AI experts sounded the alarm over Turkish-
manufactured killer drones that could swarm and kill targets without human intervention. China
came under fire for exporting automated weapons systems — from robotic submarines to stealth
drones — to Pakistan and Saudi Arabia.

The ubiquity of AI warfare does not mean this technology should be deployed without safeguards and
limitations. Algorithms may indeed make many aspects of warfare more efficient, from guiding
missiles to sifting through data to monitoring border crossings. Yet experts list a litany of dangers
posed by these systems: from digital dehumanization that reduces human beings into lines of code
for a machine to determine who should live or die, to a lowered cost and threshold for warfare that
replaces ground troops with algorithms. Much of the weaponry on the market is riddled with
glitches, said to misidentify targets or pre-programmed to kill certain demographic groups with more
frequency. Even if they reduce the number of civilians killed in a single bombardment, as
their advocates claim, automated weapons systems risk making battle more frequent and easier to
sustain, allowing warfare to drag on with no end in sight.

Since 2021, when Israel began publicly promoting the use of AI in military operations, over 300
Palestinians have been killed in Israel’s annual assaults and thousands more have been injured and
displaced; vital infrastructure like sewage systems and electricity grids have been irrevocably
damaged in the regular assaults. Automation may have prevented Israel from sending in ground
troops and causing loss of life on its side — if it could muster the forces and political support — but
mostly, the technology has simply made the bombs and bullets fall more often.

Political pundits often discuss the dangers posed by automated weapon systems in the future tense.
But the human cost is already present across Palestine. “We have long witnessed evidence of Israel’s
use of the OPT, especially Gaza, as a laboratory for testing and deploying experimental weapons
technologies,” Omar Shakir, Israel and Palestine Director for Human Rights Watch, told +972.

Shakir emphasized that such weapons used across the West Bank and Gaza, from drones to
biometrics to AI-powered gun turrets, “serve to automate Israel’s unlawful use of force and its
apartheid against Palestinians.” Given Israel’s centrality in global weapons markets, Shakir believes
that “it is only a matter of time before the weapon systems deployed today by Israel end up in the
farthest-flung corners of the globe.”


The Gospel is actually one of several AI programs being used by Israeli intelligence, according to Tal
Mimran, a lecturer at Hebrew University in Jerusalem who has worked for the Israeli government on
targeting during previous military operations. Other AI systems aggregate vast quantities of
intelligence data and classify it. The final system is the Gospel, which makes a targeting
recommendation to a human analyst. Those targets could be anything from individual fighters, to
equipment like rocket launchers, or facilities such as Hamas command posts.

Israel's latest military operation in Gaza began in response to the October 7 attack that killed roughly
1,200 people, according to the Israeli government. The military says that it is trying to eliminate the
threat from Hamas and rescue hostages. It says Hamas has complicated the fight by using civilians as
human shields and operating in tunnels under civilian areas.

The post states that the targeting division is able to send these targets to the air force and navy, and
directly to ground forces via an app known as "Pillar of Fire," which commanders carry on military-
issued smartphones and other devices.

While Israel's use of the Gospel to generate a full set of targets may be unique, the nation is hardly
alone in using AI to assist in intelligence analysis. The U.S. is actively working with many different
kinds of AI to try and identify targets in the field. One suite of AI tools, known as Project Maven, is
run through the National Geospatial-Intelligence Agency, which collects massive quantities of
satellite imagery — far more than a human analyst could search.

Intelligence personnel focused on creating dual “maps” of the Gaza Strip: one above ground and one underground. In Unit 9900, analysts anchored the relevant areas using artificial intelligence technologies, applications that enable visual analysis through drones, photographic sorties and dedicated sensors, tracking changes in the field in real time and over the years, including any progress in Hamas’s tunnel project.

During the operation, this technology was also used to identify TMS targets (launching positions, MTLs, etc.) and destroy them in real time. Already in the first days of fighting, hundreds of such targets were destroyed, half of them thanks to artificial intelligence and to working methods perfected on the fly.

After planning the attack, intelligence personnel provided a relevant real-time picture and carried out massive counterstrikes together with the forces in the field, partly with the help of artificial intelligence systems developed by Unit 8200:

- “The Alchemist,” a system that provides a dynamic, continuously updated visual picture of the border, delivered directly to the force in the field, which receives it on a tablet. In many cases, according to the wing, the system led to the thwarting of attempts to strike the home front.

- “The Gospel,” a system that provides target researchers in a command or division with recommendations for quality targets during combat.

"Cracking the 'Metro', our ability to map the underground in order to take away from the terrorist
organization its central dimension is a strategic change," says a senior official at AMN, "years of work,
thinking outside the box and the fusion of all the strength of the intelligence division, together with
the factors in the field, led to the break-in and the solution of the underground mystery."

"Hamas saw the tunnels as the ultimate means of survival and movement for its operatives,"
declares Major Y., head of the underground section in the research division, "in their view, all the
advantages of the IDF are offset by the underground. In the damage they suffered in the metro, they
saw how much the intelligence capabilities And the IDF's attack capabilities are high-quality and deep
in this arena as well."

As noted, the operation brought to the fore a unique and developing type of intelligence: the most advanced artificial intelligence capabilities. These included innovative applications, digital attack cells, drones, alerts, connections between Unit 8200’s capabilities and the Shin Bet, and a host of other developments.

"For the first time, artificial intelligence was a central component and force multiplier in the fight
against the enemy," reveals a senior officer in the wing. All of these were made possible on the basis
of the "Information and Knowledge Center", an advanced technological platform for artificial
intelligence, which includes 90% of all IDF information on the enemy in one place.

During the operation, for the first time, a multidisciplinary, multi-branch base of military units operated with the Air Force, producing targets in real time. Besides the attack targets known in advance, the base discovered about 200 quality targets during the operation, such as missile launch pits aimed at Tel Aviv and Jerusalem. Fifty percent of the targets that were exposed were attacked during the operation.

"We are coming out of the campaign with Hamas weakened and damaged, it will take them many
years to improve their rocket arrays," explains Captain E., head of the targets section in the
Palestinian arena of the research division, "among the most notable achievements - the elimination
of the two senior Hamas engineers. We damaged their strengthening system in an extraordinary
way."
