12th International Conference on Cyber Conflict: 20/20 Vision: The Next Decade, 2020
Malign influence campaigns leveraging cyber capabilities have caused significant political disruption in the United States and elsewhere, but the next generation of campaigns could be considerably more damaging as a result of the widespread use of machine learning. Current methods for successfully waging these campaigns depend on labour-intensive human interaction with targets. The introduction of machine learning, and potentially artificial intelligence (AI), will vastly enhance capabilities for automatically reaching mass audiences with tailored and plausible content, rendering malicious actors even more powerful. Tools for applying machine learning to information operations are developing at an extraordinarily rapid pace and are quickly becoming more available and affordable to a much wider variety of users. Until early 2018, it was assumed that cyber criminals would not adopt AI methods any time soon, because those methods rely on vast datasets, correspondingly vast computational power, or both, and demand highly specialised skills and knowledge. In 2019, however, these assumptions proved invalid: datasets and computing power were democratised, and freely available tools obviated the need for special skills. It is reasonable to assume that this process will continue, transforming the landscape of deception, disinformation and influence online. This article assesses the state of AI-enhanced cyber and information operations in late 2019 and investigates whether this may represent the beginning of substantial and dangerous trends over the next decade.
New Explorations, 2024
The objective of this article is to analyze how the disinformation industry, understood as the set of organized and systematic practices aimed at disseminating false information to manipulate public perception, has eroded trust in information, especially with the help of socio-digital networks and artificial intelligence (AI). Technological advances amplify the speed and sophistication with which disinformation spreads, making it difficult to identify and counteract false information that adequate digital literacy could otherwise expose. Using algorithms and big-data analysis, AI is employed to personalize political messages, segment audiences and predict electoral trends, seeking not only to persuade voters but also to create an immersive and emotionally attractive narrative. To illustrate this, the article presents cases in which audiences were deceived by false information presented in a realistic way. Thanks to the formidable development of AI and the advent of synthetic humans, we are witnessing the profound transformation of the entertainment industry and, soon, of political marketing.
IAEME, 2024
The rapid advancements in artificial intelligence (AI) have revolutionized industries, enhanced productivity, and enabled innovative solutions. However, these same technologies have been weaponized by cybercriminals to execute highly sophisticated scams. This paradigm shift in cybercrime introduces new threats such as voice cloning, AI-generated phishing, and deepfake scams, which exploit the power of AI to deceive and manipulate at an unprecedented scale.
Data & Policy, 2021
Artificial intelligence (AI) systems are playing an overarching role in the disinformation phenomenon our world is currently facing. Such systems exacerbate the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by enabling malicious stakeholders to disseminate disinformation to targeted audiences at scale. This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are being developed to detect and moderate disinformation online. Such systems do not escape ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending co-regulation, the European Union (EU) is now heading toward descending co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audits of very large online platforms' recommender systems and content moderation. While with this proposal the Commission focusses on the regulation of content considered problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the advertising-based business model of the web, and that adapting this model would reduce the problem considerably. We also observe that while AI systems are inappropriate for moderating disinformation content online, and even for detecting such content, they may be more appropriate for countering the manipulation of the digital ecosystem.

Policy Significance Statement: This study aims at identifying the right approach to tackling the disinformation problem online with due consideration for ethical values, fundamental rights and freedoms, and democracy. While moderating content as such, and using AI systems to that end, may be particularly problematic regarding freedom of expression and information, we recommend countering the malicious use of technologies online to manipulate individuals. As addressing the main cause of the effective manipulation of individuals online is paramount, the business model of the web, more than content moderation, should be on the radar screen of public regulation. Furthermore, we support a vibrant, independent, and pluralistic media landscape with investigative journalists following ethical rules.
Daily Observer, 2020
Artificial intelligence (AI) and machine learning are growing at an unprecedented speed. AI is active in many aspects of our society; it is at the heart of every internet search and every app. One of the recent advances that has made AI more interesting is machine learning, which involves the development and evaluation of algorithms that enable a computer to extract functions from a dataset. Deep learning is the subfield of AI that focuses on creating large neural network models capable of making accurate data-driven decisions. Many AI developments can improve our lives, but some will have unintended consequences that threaten important aspects of human lives. In recent times, the threat from the malicious use of AI (MUAI) has gained prominence. MUAI is acquiring particular importance in the targeted psychological destabilization of political systems and the system of international relations. This factor sets new requirements for ensuring international psychological security (IPS).
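To make the definition above concrete, here is a minimal sketch of machine learning as "extracting a function from a dataset": a small neural network regressor is fitted to noisy samples of a known function. This example is purely illustrative and not from the paper; the library (scikit-learn), the architecture, and every parameter are assumptions chosen for brevity.

```python
# Illustrative sketch only (not from the paper): machine learning as
# extracting a function from a dataset. A small neural network (a
# minimal "deep learning" model) learns y = sin(x) from noisy samples.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))            # dataset: inputs
y = np.sin(X).ravel() + rng.normal(0, 0.1, 500)  # noisy targets

# Two hidden layers of 32 units; hyperparameters are arbitrary choices.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0)
model.fit(X, y)                # "extracts" an approximation of sin(x)

print(model.predict([[1.5]]))  # should be close to sin(1.5) ≈ 0.997
```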
IEEE Access, 2022
The capabilities of Artificial Intelligence (AI) evolve rapidly and affect almost all sectors of society. AI has been increasingly integrated into criminal and harmful activities, expanding existing vulnerabilities and introducing new threats. This article reviews the relevant literature, reports, and representative incidents, which allows us to construct a typology of the malicious use and abuse of systems with AI capabilities. The main objective is to clarify the types of activities and corresponding risks. Our starting point is to identify the vulnerabilities of AI models and outline how malicious actors can abuse them. Subsequently, we explore AI-enabled and AI-enhanced attacks. While we present a comprehensive overview, we do not aim for a conclusive and exhaustive classification. Rather, we provide an overview of the risks of malicious AI application that contributes to the growing body of knowledge on the issue. Specifically, we suggest four types of malicious abuse of AI (integrity attacks, unintended AI outcomes, algorithmic trading, membership inference attacks) and four types of malicious use of AI (social engineering, misinformation/fake news, hacking, autonomous weapon systems). Mapping these threats enables informed reflection on governance strategies, policies, and activities that can be developed or improved to minimize risks and avoid harmful consequences. Enhanced collaboration among governments, industries, and civil society actors is vital to increase preparedness and resilience against the malicious use and abuse of AI.
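One entry in the typology above, the membership inference attack, lends itself to a compact illustration: an attacker who can query a model guesses that examples on which the model's loss is unusually low were part of its training set. The toy sketch below is an assumption-laden illustration of that loss-threshold heuristic, not the article's method; the dataset, model, and threshold are all arbitrary.

```python
# Toy membership inference sketch (illustrative only). Heuristic: models
# are typically more confident on training ("member") examples, so a low
# per-example loss is weak evidence of membership.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5,
                                              random_state=0)

victim = LogisticRegression(max_iter=1000).fit(X_mem, y_mem)

def per_example_loss(model, X, y):
    """Cross-entropy loss of the model on each individual example."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

# The attacker flags an example as a "member" when its loss falls below
# a chosen threshold (0.5 here is an arbitrary illustrative value).
threshold = 0.5
train_flagged = (per_example_loss(victim, X_mem, y_mem) < threshold).mean()
holdout_flagged = (per_example_loss(victim, X_non, y_non) < threshold).mean()
print(f"flagged as members: train {train_flagged:.2f} "
      f"vs held-out {holdout_flagged:.2f}")
```

A noticeable gap between the two rates is what makes the attack, and the corresponding privacy leak, possible.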
AI's Dual Role in Driving Online Terrorist Content and Counter Strategies: Is NATO Prepared for AI-Enhanced Extremism?, 2024
Artificial intelligence (AI) is revolutionising global security and defence, particularly in the domain of online Countering Violent Extremism (CVE) efforts. The proliferation of AI tools has dramatically lowered the barriers to creating and disseminating sophisticated terrorist content online, outpacing traditional detection and mitigation strategies. NATO faces a critical challenge as its existing methods for combating online radicalisation and terrorist propaganda become increasingly ineffective against AI-enhanced extremist content. The essay proposes a multidimensional strategy for NATO, encompassing the development of AI-driven early detection technologies, fostering international collaboration, updating policies, promoting regulation, and maintaining ongoing research and development. By adopting these measures, NATO can position itself at the forefront of AI-driven counterterrorism efforts, effectively countering those who exploit these technologies to promote disorder. The success of NATO's counterterrorism practices in the AI age will ultimately depend on its ability to adapt swiftly, collaborate effectively, and innovate continuously in response to evolving digital threats.
Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.
World Journal of Advanced Research and Reviews, 2025
The purpose of this study was to investigate how artificial intelligence (AI) influences and improves computational propaganda and misinformation efforts. The growing complexity of AI-driven technologies, such as deepfakes, bots, and algorithmic manipulation, which have turned conventional propaganda strategies into more widespread and damaging media manipulation techniques, served as the researcher's inspiration. The study used a mixed-methods approach, combining quantitative data analysis from academic studies and digital forensic investigations with qualitative case studies of misinformation efforts. The results brought to light important tactics, including the platform-specific use of X (formerly Twitter) to propagate false information, emotional exploitation through fear-based messaging, and purposeful amplification through bot networks. According to this research, AI technologies amplified controversial content by exploiting algorithmic biases, generating echo chambers and eroding confidence in democratic processes. The study also emphasized the ethical and sociopolitical issues presented by deepfake technologies and their ability to manipulate susceptible populations' emotions. To counteract AI-generated misinformation, the study suggested promoting digital literacy and creating more potent detection methods, such as digital watermarking. Future studies should concentrate on the long-term psychological effects of AI-driven misinformation on democratic participation, public trust, and regulatory reactions in various countries. Furthermore, investigating how new AI technologies are influencing other media, such as video games and virtual reality, may help us better comprehend their effects on society as a whole.
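The digital watermarking the study recommends can take several forms; one widely discussed variant for AI-generated text is "green-list" watermarking (Kirchenbauer et al., 2023), in which the generator is biased toward a pseudorandom, context-keyed subset of words and a detector tests whether that subset is over-represented. The sketch below illustrates only the detection side, heavily simplified; every function and constant here is an assumption for illustration, not a description of any deployed system.

```python
# Heavily simplified "green-list" watermark detector for text
# (illustrative only, loosely inspired by Kirchenbauer et al. 2023).
# A watermarking generator would prefer words whose hash, seeded by the
# preceding word, falls in a "green" half of the vocabulary; ordinary
# text lands on the green list ~50% of the time, watermarked text more.
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Pseudorandomly assign `word` to the green list, keyed on context."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0   # green list = half of all words

def watermark_z_score(text: str) -> float:
    """z-score of the observed green fraction against the 0.5 chance rate."""
    words = text.lower().split()
    n = len(words) - 1          # number of (previous word, word) pairs
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

# A large positive z-score suggests text generated with the matching
# watermark; a score near zero suggests ordinary, unwatermarked text.
print(watermark_z_score("a short example sentence to score for the signal"))
```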
International Journal of Scientific Research and Modern Technology (IJSRMT), Volume 3, Issue 11, 2024
Amid rapid technological advancements, deepfakes and digital misinformation have emerged as both a powerful tool and a formidable challenge. Deepfakes—realistic yet fabricated media generated through artificial intelligence—threaten media credibility, public perception, and democratic integrity. This study explores the intersection of AI technology with these concerns, highlighting AI's role both as a driver of innovation and as a defense mechanism. By conducting an in-depth review of literature, analyzing current technologies, and examining case studies, this research evaluates AI-based strategies for identifying and addressing misinformation. Additionally, it considers the ethical and policy implications, calling for greater transparency, accountability, and media literacy. Through examining present AI techniques and predicting future trends, this paper underscores the importance of collaborative efforts among tech companies, government agencies, and the public to uphold truth and integrity in the digital age.