Dark Side of AI
1. Introduction
4. AI-powered attack scenarios and what they look like at each stage
    i. Reconnaissance
    vi. Exfiltration
We no longer live in an era where cyberattacks rely solely on manual effort and the limited scope of cyberspace, nor are they the exclusive domain of skilled, specialized individuals. With the advent of AI, the threat landscape has taken a huge leap forward, offering an unprecedented arsenal of tools and techniques that can intelligently automate attacks. The integration of AI into the traditional cyberthreat space allows anyone with AI resources and basic technical skills to execute a successful cyberattack.
According to an IBM report, the average cost of a data breach hit an all-time high of USD 4.45 million in 2023. And with the growing involvement of artificial intelligence in the cyberattack space, these numbers won't be coming down anytime soon.
With remote work continuing across many organizations even after the pandemic, the attack surface has expanded widely. Adversaries no longer need to be part of a well-recognized threat group; even lesser-known groups or individuals can effectively breach an organization's network by leveraging a vulnerability in a remote application discovered through a botnet. Likewise, AI can aid every stage of a cyberattack, from reconnaissance to exfiltration.
Let's explore the methods and risks of AI-powered attacks in the evolving cyberspace and deep dive into
the attack stages.
The advent of the internet in the 1990s opened the doors to a host of cyberthreats. A significant development was the polymorphic virus: code that mutates as it spreads through computing systems while preserving its original algorithm. To combat these threats, new ways to secure communications were devised and encryption standards were set. Secure Sockets Layer (SSL) was developed to secure internet connections by encrypting data between two parties.
With continuous digital developments, the 2010s saw adversaries pull ahead of cybersecurity efforts, costing businesses and governments huge sums of money. Notable high-profile incidents included the Global Payments data breach in 2012, the Yahoo data breaches of 2013-14, and the WannaCry ransomware outbreak in 2017. Another major event was the shutdown of New Zealand's stock exchange caused by repeated DDoS attacks in 2020.
During this time, on the security front, vendors started developing new approaches, like multi-factor authentication and network behavior analysis, to scan for behavioral anomalies.
However, as AI and ML technologies continue to advance, their role in the cybersecurity space has become increasingly significant. With AI, the landscape is seeing a new breed of attacks with evolving capabilities, enabling threat actors to automate malicious activities, tailor their strategies, and exploit vulnerabilities with greater efficiency. Consequently, the once-underestimated role of AI in cyberattacks has emerged as a significant concern for the security industry.
Most commonly, AI is used in the form of text-based generative AI, through which adversaries can explore endless attack methods and automate models to evade defenses. Notable AI-powered cyberattacks in 2018 included the TaskRabbit breach, the Nokia breach, and the WordPress data breach. On the defensive front, the industry is proactively working toward innovative defense strategies, such as SIEM solutions, to safeguard organizational networks.
Before delving into defensive strategies, it's crucial to comprehend the various methods by which AI
attacks can infiltrate networks and the severity of the risks they pose. Therefore, let's examine the current
cyberspace landscape, recent developments in AI due to the introduction of generative AI, and the
implications of these developments.
However sophisticated AI has become, machines still cannot launch attacks entirely on their own. Even so, AI-assisted attacks have far more potential to devastate victims than traditional methods because of the unique advantages that AI and machine learning offer, advantages that manual efforts alone cannot replicate.
The AI algorithms in these tools use vast training data to evade the defense mechanisms organizations have in place, which is what makes AI-driven malware so adaptable. The increase in zero-day attacks can even be pinned to the rise of AI systems, since they significantly reduce the time defenders have to deploy patches and countermeasures for zero-day vulnerabilities.
In botnet attacks like the WordPress data breach in 2018, around 20,000 sites were compromised, allowing the malware variant to spread to as many sites as possible. AI algorithms in these botnets can help optimize the command-and-control infrastructure, making the malware more resilient and harder to trace.
When generative AI entered the picture, it showcased the scope of the AI revolution and its impact on cyberattacks. While its primary purpose is to assist users with information, its immense potential is also being exploited by threat actors to streamline their attack strategies and craft targeted social engineering schemes. These models can also scour the internet's varied content—e-books, articles, websites, and posts, including personal information obtained without consent—which can be used to profile and target victims.
Among the many things we can do with generative AI, exploitation is unfortunately one of them. If AI can write code, it can write malware pseudocode, too. Even though these models refuse unethical and illegal requests, clever prompt engineering can trick generative AI into breaking down any attack scenario under the guise of developing a proactive defense against it. Similarly, splitting an unethical request into smaller parts can lead the model to believe there is nothing suspicious in it, resulting in fulfillment of the same request it once denied.
Consider this clip from the RSA Conference 2023, where Stephen Sims, an experienced vulnerability researcher and exploit developer, shares an alarming demonstration of how he used ChatGPT to generate ransomware code. From writing encryption pseudocode to verifying Bitcoin addresses for ransom payments and decrypting data, ChatGPT fulfilled all the requested tasks when they were broken down into separate parts.
The weaponization of machine learning and artificial intelligence is pervasive throughout the stages of an attack, from reconnaissance through exfiltration, as outlined in the MITRE ATT&CK framework. How will defenders combat this exponential growth of adversarial AI? Will we rely solely on firewalls and perimeter solutions? Unfortunately, no. Defenders must adopt a comprehensive approach, including robust incident management and a sturdy security posture, to anticipate and mitigate emerging adversarial attacks.
(i) Reconnaissance
In the initial phase of the MITRE ATT&CK framework, the planning and reconnaissance stage, adversaries
now rely on AI to automate and enhance the entire process. AI can now carry out the time-consuming
tasks of profiling targets, scanning for vulnerabilities, and framing the entire attack.
AI's capability to understand, uncover, and recognize patterns within vast datasets allows for
comprehensive analysis and extensive target profiling. This pattern recognition, facilitated by neural
networks, enables the identification of links and correlations that may elude human analysts. AI
uncovers hidden connections and vulnerabilities, helping attackers accurately identify potential attack
vectors.
AI-powered bots and crawlers can quickly scour the internet, gathering publicly available information
from diverse sources such as social media, business websites, forums, and leaked databases.
Long short-term memory (LSTM) models, like DeepPhish, can produce effective synthetic phishing URLs compared to the randomly generated phishing URLs of the past. Sources claim these models improve success dramatically, with the success rate of one attack rising from 4.91% to 36.28%.
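To illustrate the idea behind DeepPhish-style URL generation, here is a deliberately simplified sketch. It substitutes a character-level Markov chain for the LSTM, and the training URLs and seed character are invented for illustration; a real model would learn far richer patterns from thousands of live phishing URLs.

```python
import random
from collections import defaultdict

# Hypothetical training set of "effective" phishing URLs. The model learns
# which character tends to follow which, then samples new lookalike URLs.
training_urls = [
    "secure-login-paypai.com/verify",
    "secure-update-paypa1.com/account",
    "login-secure-paypal.net/confirm",
]

# Build character transition table: char -> list of observed next chars.
transitions = defaultdict(list)
for url in training_urls:
    for a, b in zip(url, url[1:]):
        transitions[a].append(b)

def generate(seed="s", length=30):
    """Sample a synthetic URL by walking the learned transitions."""
    out = [seed]
    for _ in range(length):
        nxt = transitions.get(out[-1])
        if not nxt:  # dead end: no observed successor for this character
            break
        out.append(random.choice(nxt))
    return "".join(out)

print(generate())
```

Even this toy stand-in shows why generated URLs outperform random strings: the output inherits the character statistics of URLs that already fooled users.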
AI algorithms can also now examine huge datasets of leaked passwords and user behaviors in place of
brute-force techniques, which are historically time- and resource-intensive. With intelligent password
cracking models like PassGAN, the success rate of AI-powered brute-force attacks versus traditional
attacks has also drastically improved.
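As a rough illustration of why corpus-driven guessing beats blind brute force, the sketch below ranks candidates by their frequency in a made-up leaked corpus and applies common mangling rules. PassGAN itself learns such patterns with a generative adversarial network rather than hand-written rules; this is only a stand-in for the concept.

```python
from collections import Counter

# Hypothetical leaked-password corpus. Frequency ranking means the most
# commonly reused passwords are tried first, unlike exhaustive brute force.
leaked = ["password", "password", "summer2023", "dragon", "password1"]

base = [word for word, _ in Counter(leaked).most_common()]

def candidates(words):
    """Expand each base word with common human mangling patterns."""
    for w in words:
        yield w
        yield w.capitalize()
        yield w + "123"            # common numeric suffix
        yield w.replace("a", "@")  # common character substitution

guesses = list(candidates(base))
print(guesses[:5])
```

The payoff is ordering: a cracker working through this list reaches statistically likely passwords in its first few guesses instead of after billions of random attempts.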
AI can identify user patterns that indicate privileged accounts or high-level access, after which those specific accounts can be targeted.
AI-powered tools can automatically scan a target system or network for access control vulnerabilities. In
contrast to manual techniques, these tools are more effective at finding configuration errors, giving
attackers instant access to possible weak spots.
With the help of deep reinforcement learning, some AI models can even automate privilege escalation.
There are AI models like the one Hu and Tan proposed that use a generative adversarial network (GAN)
technique to generate undetectable adversarial malware to bypass black-box detection systems.
Once persistence is established, AI-powered tools can automatically scan and map the network, identifying connected devices, services, and vulnerabilities. By analyzing network traffic and system configurations, these tools can quickly discover potential entry points and vulnerable assets.
Also, just like initial access methods, ML algorithms can analyze leaked or stolen password databases,
identify patterns, and accelerate the process of cracking passwords to help attackers gain unauthorized
access to additional accounts.
AI algorithms can be applied at several points to keep the command-and-control (C2) stage of an attack running smoothly. ML can be employed to generate malicious traffic or behavior that mimics legitimate patterns, obfuscating communication channels. AI enables attackers to automate responses and adapt their strategies in real time, and it can facilitate more robust and extensive encryption in the C2 channel, making it harder for defenders to trace.
For example, a study called DeepDGA shows how adversaries can use domain generation algorithms (DGAs) to generate stealthy DNS queries and carry out attacks in the C2 stage.
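The classic seeded DGA that work like DeepDGA builds on can be sketched in a few lines. The seed string and domain count here are illustrative assumptions; DeepDGA's contribution is using a GAN so the generated domains look benign to detectors, rather than the obviously random strings a hash produces.

```python
import hashlib
from datetime import date

def daily_domains(seed: str, day: date, count: int = 5):
    """Derive the day's rendezvous domains from a shared seed.

    Malware and its C2 server run the same function, so both arrive at
    the same domain list without any static, blockable configuration.
    """
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

print(daily_domains("examplebotnet", date(2023, 7, 1)))
```

Because the list changes every day, defenders cannot simply blacklist domains; they must detect the algorithmic pattern itself, which is exactly what GAN-generated, benign-looking domains are designed to defeat.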
Another study shows that an AI-driven C2 virus can anticipate when it will be unlocked on various types of nodes based on the target's current properties. As a result, a multi-layered, AI-driven attack can remotely and automatically provide access to other components of the computer infrastructure.
(vi) Exfiltration
Exfiltration is the method of stealing confidential data from an organization.
With the aid of adversarial AI, attackers no longer need extensive knowledge of data exfiltration
techniques to carry out such attacks. The process of writing exfiltration codes has become more
accessible, as AI can assist in generating them. In this article, the author explores data exfiltration from a
network with help from ChatGPT, showing how AI can play a role in these activities.
In terms of scaling up attacks, AI can help attackers focus on extracting the most valuable information
efficiently based on previous reconnaissance research. It can also help in splitting the exfiltration traffic
across multiple channels or utilizing covert communication channels to mask data transfers.
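The channel-splitting idea can be sketched as follows. The channel names are hypothetical placeholders, and a real implementation would transmit each chunk over its covert channel; this sketch only plans the split to show why no single stream looks large enough to trip volume-based DLP thresholds.

```python
def split_across_channels(data: bytes, channels, chunk_size=16):
    """Break data into small chunks and round-robin them across channels."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    plan = {c: [] for c in channels}
    for i, chunk in enumerate(chunks):
        plan[channels[i % len(channels)]].append(chunk)
    return plan

# 100 bytes of "stolen" data spread over three illustrative covert channels.
plan = split_across_channels(b"A" * 100, ["dns_tunnel", "https_beacon", "cloud_api"])
print({channel: len(chunks) for channel, chunks in plan.items()})
```

Each channel carries only a few small fragments, so per-channel monitoring sees little; only correlation across channels reveals the full transfer.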
What potential do AI-driven attacks hold in the future?
The road to AI's development for cyber adversaries seems to have no end, as they continually innovate and adapt their tactics to breach digital defenses. Cyber adversaries constantly devise new and diverse tactics and techniques; coupled with AI, this is setting the stage for an exponential rise in cyberattacks.
During the RSA Conference 2023, a session titled "The Five Most Dangerous New Attack Techniques"
shed light on the imminent threats that AI poses, exploring emerging attack methods that exploit the
transformative potential of AI in the cyber landscape. Let's look at some of these emerging attack
methods below.
With the ongoing economic crisis, many organizations are facing cuts in their IT budgets. This will only
result in higher success rates for adversaries who are further integrating and developing AI and ML in
cyberattacks, intensifying the impact of cybercrime.
The economic impact of cyberattacks will be higher costs for remediation, recovery, and regulatory compliance. With more mandates going into effect across regions, such as the CCPA, the GLBA, and the NYDFS Cybersecurity Regulation in the US, along with the DPDP Act in India, organizations face hefty non-compliance penalties, further escalating the cost of breaches. The overall economic impact of cyberattacks demands attention and strategic measures to mitigate financial losses and protect organizational stability.
Looking at the optimistic side of this scenario, defenders are also developing and upgrading themselves
to cope with the rapidly evolving cyberthreat landscape. Looking ahead, the future holds both
challenges and opportunities. As defenders, we must adapt to the changing battlefield and equip
ourselves with the latest tools and knowledge to counter AI-generated threats.
This is where Log360 comes in, a comprehensive SIEM solution that identifies the indicators of
compromise (IoCs) at each stage of an AI-powered attack and helps you mitigate the scale of the attack.
A SIEM solution like Log360 is exactly what you need at this critical juncture.
In the detection stage

Following the principle of "detect first and then respond," Log360's advanced analytics, anomaly detection, and behavioral analysis help you maintain constant vigilance and stay one step ahead of AI-driven adversaries.

With Log360's correlation engine, multiple log events from different sources can be monitored simultaneously with predefined correlation rules to detect suspicious patterns or sequences of events. Real-time alert engines keep you vigilant with timely notifications, helping you take prompt action.
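A minimal sketch can show the kind of predefined correlation rule such an engine evaluates. The event fields, thresholds, and rule name below are assumptions for illustration, not Log360's actual rule format: the rule flags a source that produces several failed logons followed by a success within a short window.

```python
from collections import defaultdict

def correlate(events, threshold=3, window=60):
    """Flag sources with >= threshold failures then a success within `window` seconds."""
    failures = defaultdict(list)
    alerts = []
    for e in events:  # events assumed sorted by timestamp
        src, ts = e["src"], e["ts"]
        if e["type"] == "logon_failure":
            failures[src].append(ts)
        elif e["type"] == "logon_success":
            recent = [t for t in failures[src] if ts - t <= window]
            if len(recent) >= threshold:
                alerts.append({"src": src, "ts": ts,
                               "rule": "brute-force-then-success"})
    return alerts

# Three failures from one source, then a success 11 seconds later.
events = [
    {"src": "10.0.0.5", "ts": t, "type": "logon_failure"} for t in (1, 5, 9)
] + [{"src": "10.0.0.5", "ts": 20, "type": "logon_success"}]
print(correlate(events))
```

The point of correlation is visible here: no single event is suspicious on its own; only the sequence, evaluated across sources and time, triggers the alert.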
After scanning the IT infrastructure for potential vulnerabilities, the UEBA feature generates a risk score,
enabling security teams to prioritize their response efforts by providing personalized risk scores to both
individuals and assets. The risk score is based on various factors such as the criticality of the affected
asset, the exploitability of the vulnerability, and the potential impact on business operations.
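As a rough sketch of how a score could combine those factors, the weights and the 0-100 scale below are illustrative assumptions, not Log360's actual formula:

```python
def risk_score(asset_criticality, exploitability, business_impact,
               weights=(0.4, 0.35, 0.25)):
    """Combine normalized [0, 1] factors into a weighted 0-100 risk score.

    The three factors mirror those named above: criticality of the
    affected asset, exploitability of the vulnerability, and potential
    business impact. Weights are hypothetical.
    """
    factors = (asset_criticality, exploitability, business_impact)
    return round(100 * sum(w * f for w, f in zip(weights, factors)), 1)

# A critical, easily exploitable server with high business impact:
print(risk_score(0.9, 0.8, 0.7))
```

Whatever the exact formula, the design goal is the same: collapse several dimensions into one comparable number so response teams can sort entities by risk rather than triaging alerts in arrival order.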
By mapping the logged activities to the MITRE ATT&CK Matrix, Log360 provides a clear view of potential
attack vectors and the techniques adversaries might utilize to compromise systems. Users can create
custom alert profiles for different attack techniques to ensure that security teams are promptly notified
of any deviations from regular behavior.
When a potentially compromised computer is detected, Log360 can disable it. Such automated responses can drastically reduce the window of opportunity for an attacker to cause harm.
Time is of the essence when mitigating cyberthreats. Log360 offers instant notifications, ensuring that
security personnel are immediately made aware of any concerning activities, allowing them to act
promptly.
Each alert doesn't just sound an alarm but also initiates a ticket, automatically directing it to the right
security personnel based on its nature. This ensures that the right expertise is applied to each threat,
streamlining the mitigation process.
Compliance and health checks

With Log360's audit-ready report templates, you can meet your compliance needs for a wide range of policies, including the PCI DSS, SOX, HIPAA, FISMA, the GLBA, ISO 27001, the GDPR, and more.

You can simplify compliance reporting using intuitive dashboards that display metrics showing how your network is meeting compliance standards. You can fetch these reports with a single click and export them as needed. With more regulations on the way, you can also create custom compliance reports to address both external mandates and internal compliance needs effectively.
The rise of AI-powered cyberattacks has ushered in a new era of sophisticated and relentless threats.
Adversarial AI has become a formidable ally for cyberattackers, enabling them to automate, scale, and
innovate their malicious activities with unprecedented efficiency. Even though some of the adversarial
AI models we discussed have not been seen in wide use yet, attackers are slowly getting there.
The cybersecurity landscape is complex and ever-changing, and advances in this space are undeniably being used on both sides. As we move forward, a proactive, adaptive approach that leverages AI through advanced security solutions will be crucial to staying ahead of ever-evolving AI-powered cyberthreats. We can navigate the challenges posed by AI and shape a resilient, robust cyber ecosystem, where innovation and defense unite to create a safer digital realm for generations to come.
Our Products
AD360 | ADAudit Plus | EventLog Analyzer | DataSecurity Plus
ManageEngine Log360 is a unified SIEM solution with integrated DLP and CASB capabilities that detects,
prioritizes, investigates, and responds to security threats. It combines threat intelligence, machine
learning-based anomaly detection, and rule-based attack detection techniques to detect sophisticated
attacks, and offers an incident management console for effectively remediating detected threats. Log360
provides holistic security visibility across on-premises, cloud, and hybrid networks with its intuitive and
advanced security analytics and monitoring capabilities. For more information about Log360, visit
[Link]/log-management/.