2023, Surveillance & Society
https://doi.org/10.24908/ss.v21i3.16086…
The shift to novel forms of artificial intelligence (such as machine learning) has also marked a shift to industrial-scale surveillance practices, because these AI tools are extremely data-hungry. This piece examines the linkage between the two phenomena and charts the ethical consequences. It calls for a much more measured way of weighing the benefits of developing AI tools against their cost: the proliferation of both the means and products of surveillance.
04-01, 2021
Artificial intelligence applications used by law enforcement agencies are the principal object of investigation in this paper. A brief presentation and description of the various tools based on artificial intelligence, grouped by their scope, is attempted, while at the same time the obvious and less obvious implications of adopting such methods are discussed: the setbacks created by so-called algorithmic bias, the risks to fundamental human rights involved in mass surveillance, and the privacy and data protection issues that arise from the handling of AI applications by individuals active in law enforcement. The article also discusses a potential solution to such concerns, namely the adoption of a set of rules and measures on ethical and legal governance, and, at the same time, it attempts to offer some guidance on the implementation of regulatory provisions that would help establish a sense of trust and security for individuals who would otherwise question the expediency of the wider use of AI applications by government bodies involved in law enforcement.
Information Polity
Emerging ICTs form a fundamental component of the new generation of security and surveillance technologies, in that they allow for the collection, analysis and interpretation of large quantities of data, in unprecedented and previously unforeseen ways. In the context of this Special Issue, emergent ICTs relate to a wide range of technological tools and applications that are an amalgamation of enhanced capabilities to generate and process data, including by new sensory devices embedded in the Internet of Things (IoT), and rapid advances in data science that allow for the utilisation of artificial intelligence (AI) for enhanced biometrics, interpretation of emotions and predictive policing, among other purposes. The digitisation of everyday life is increasingly blurring the boundaries between the use of ICTs to provide everyday services and their use for surveillance and security purposes. The enormous amounts of data generated, accompanied by enhanced analytical capability, create not only a desire to use data for commercial purposes, but also complementary temptations to exploit them in the context of security. Revelations about mass surveillance programmes in a number of countries and the apparent lack of democratic oversight point to the overwhelming temptation to use data in this way, arguably to the detriment of individual autonomy, dignity and human rights in general. Delivering security in a digitised world is complex, involving traditional and new security concerns, pressure from commercial interests, democratic and political control issues, intricate unaccountable data flows, as well as new digital ethical issues around transparency, accountability, fairness and trust. The pervasiveness of ICTs and the dependence of modern societies on the uninterrupted availability of ICT infrastructures and services have made ICTs themselves a core security concern. 
This relates to the security of critical infrastructure and cybersecurity in general, as well as the market dominance of a few big commercial interests that, it is argued, threaten the autonomy, liberty and privacy of individuals and the (digital) sovereignty of nations, whatever that may mean. New ICTs have become deeply ingrained in all facets of society, including contemporary democratic and public-policy processes. Public policy is increasingly reliant on core technological platforms and data flows, suggesting a shift in power from political interests to commercial interests that benefit from the monetisation of data analytics. ICTs can be seen to play a critical role in politics and public policy, for example as tools to influence elections through the distribution of 'fake news' or where governments seek to limit freedom of expression and information by automatic censorship. Moreover, the rise of populist governments and political instability weakens regulatory oversight and opens up spaces for the use of ICT in potentially unethical ways. This Special Issue explores the ethical and legal challenges of existing and emerging ICTs used in the context of security and surveillance from the vantage point of several disciplines and interpretive paradigms. The contributions discuss issues and gaps in current regulatory frameworks and planned policy measures designed to address the challenges associated with the promotion of digital technologies in society. They address the need to develop ethically compliant practices and data processes. Individual papers tackle the complex intertwined relations between security, ethics and human rights; the significance of commercial interests in democratic and policy processes; and assessments of innovative new policies or practices, including those that are technology dependent, or those that seek to support human rights, democratic values and societal development.
The call for papers in March 2021 yielded 25 abstracts; 11 articles were selected for publication after the peer-review and editorial selection processes. These published contributions reflect the broad range of aspects addressed in the Call for Papers. Although the boundaries between contributions are fluid, they can be categorised by focus. Regulation is discussed by several authors: the articles by Orru, by Gremsl and Hödl, by Nesterova and by Clarke critically discuss mainly ethical and legal aspects of current and forthcoming regulations in the European Union, while De Hert and Bouchagiar analyse the European Union's approach to facial, visual
2020
Consideration of the intersection of human and technology issues, and the influence each exerts on the other, is the key issue discussed in this paper. Artificial intelligence (AI) technologies have provided many situations to contemplate, with numerous impacts on society, both positive and negative. These new and innovative functions will bring with them additional questions and many overt and covert ethical considerations. Along with the rise of big data, many believe that we have exceeded our worst fears about handing over control and manipulation of our private information. This leads us to the question of whether technologies are empowering us or subjugating us. Technologies are becoming more and more capable of performing tasks previously assigned to humans. In many cases this is a good thing, eliminating routine human tasks. Even in the preliminary phases of understanding AI, there is an infinite number of questions and concerns. With these concerns, it is imperative that we consider al...
Zenodo (CERN European Organization for Nuclear Research), 2023
Artificial Intelligence (AI) has the potential to revolutionize various aspects of our lives, but it also raises significant ethical concerns. This paper examines the impact of AI on selected human rights, such as the right to privacy and freedom from discrimination, and discusses the issues related to the codification and regulation of AI from global and regional perspectives. AI has the potential to enhance human capabilities and improve decision-making processes, but it also raises serious concerns about privacy, bias, and accountability. AI algorithms can perpetuate existing societal stereotypes and discrimination, leading to significant violations of human rights, including the right to equality and non-discrimination. Furthermore, the use of autonomous weapons and drones has raised significant ethical concerns related to human rights. These weapons can potentially cause harm to innocent civilians and violate the right to life. There are ongoing debates about the development and use of these technologies and the need for international regulations to ensure their ethical use. Additionally, with the increasing use of automation and AI in various industries, there are concerns that many jobs may become obsolete, leading to significant job loss and violating the right to work and a dignified livelihood. The paper also highlights the need for future work in AI ethics, including the development of AI systems that are transparent, explainable, and fair. The paper concludes that while AI has the potential to significantly benefit society, its development and deployment must be guided by ethical principles to prevent negative impacts on human rights.
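The claim that AI systems can perpetuate discrimination can be made concrete with a standard disparity check. The sketch below is purely illustrative: the decision lists are hypothetical toy data, not results from the paper, and the metric shown (demographic parity difference, i.e. the gap in selection rates between two groups) is only one of several common fairness measures.

```python
# Hypothetical hiring-model decisions for two demographic groups (1 = selected).
# A demographic-parity check compares selection rates across groups.
decisions_group_a = [1, 1, 1, 0, 1, 0, 1, 1]
decisions_group_b = [1, 0, 0, 0, 1, 0, 0, 0]

def selection_rate(decisions):
    """Fraction of candidates the model selects."""
    return sum(decisions) / len(decisions)

rate_a = selection_rate(decisions_group_a)  # 0.75
rate_b = selection_rate(decisions_group_b)  # 0.25
disparity = rate_a - rate_b                 # 0.50

print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f}; disparity {disparity:.2f}")
```

A non-zero disparity does not by itself prove unlawful discrimination, but audits of deployed systems typically start from exactly this kind of measurement before examining the training data and features responsible for the gap.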
Zenodo (CERN European Organization for Nuclear Research), 2023
Artificial intelligence (AI) has left science-fiction movies to become part of reality, present in our daily routine. We increasingly use services controlled by this technology, which interacts ever more strongly with our lives. The use of AI-equipped tools and software compels society to reflect not only on the pertinence of its use but, above all, on the ethical limits that must be respected so that fundamental rights are preserved. The use of AI for facial identification, whether to grant access to services or to locate fugitives from justice, as well as the use of autonomous cars and weapons controlled by smart software, demands an open and frank debate about what we will (and should) allow such equipment to do, not only replacing humans in diagnosing situations but effectively taking decisions and acting as protagonists of social reality. It is no longer possible to dispense with such technological means. Thus, more than ever, the social and scientific debate must be oriented towards imposing limits and reservations on the development of these means. It is necessary to ask: are we willing to give up our intimacy and privacy in order to lead a more technologically active life? In this sense, dialogue between researchers and thinkers from different states becomes more necessary each day, because despite the natural differences between countries and their respective societies, the challenges are common and tend to demand shared solutions.
Journal of Democracy, 2019
In democratic societies, concern about the consequences of our growing reliance upon artificial intelligence (AI) is rising. The term AI, coined by John McCarthy in 1956, is elusive in its precise meaning but today broadly refers to machines that can go beyond their explicit programming by making choices in ways that mirror human reasoning. In other words, AI automates decisions that people used to make. While AI promises many benefits, there are also risks associated with the swift advancement and adoption of the technology. Perhaps the darkest concerns relate to misuse of AI by authoritarian regimes. Even in free societies, however, and even when the intended application is for clearly good purposes, there is significant potential for unintended harms such as reduced privacy, lost accountability, and embedded bias. In digitally connected democracies, talk of what could go wrong with AI now touches on everything from massive job loss caused by automation to machines that make discriminatory hiring decisions, and even to threats posed by "killer robots." These concerns have darkened public attitudes and made this a key moment to either build or destroy public trust in AI. How did we get to this point? In the connected half of the world, the shift to the "data-driven" society has been quick and quiet, so quick and quiet that we have barely begun to come to grips with what our growing reliance on machine-made decisions in so many areas of life will mean for human agency, democratic accountability, and the enjoyment of human rights. Many governments have been formulating national AI strategies to keep from being left behind by the AI revolution, but few have been grap...
Journal for the History of Knowledge
Concerns with errors, mistakes, and inaccuracies have shaped political debates about what technologies do, where and how certain technologies can be used, and for which purposes. However, error has received scant attention in the emerging field of ignorance studies. In this article, we analyze how errors have been mobilized in scientific and public controversies over surveillance technologies. In juxtaposing nineteenth-century debates about the errors of biometric technologies for policing and surveillance to current criticisms of facial recognition systems, we trace a transformation of error and its political life. We argue that the modern preoccupation with error and the intellectual habits inculcated to eliminate or tame it have been transformed with machine learning. Machine learning algorithms do not eliminate or tame error, but they optimize it. Therefore, despite reports by digital rights activists, civil liberties organizations, and academics highlighting algorithmic bias and error, facial recognition systems have continued to be rolled out. Drawing on a landmark legal case around facial recognition in the UK, we show how optimizing error also remakes the conditions for a critique of surveillance. This article is part of a special issue entitled "Histories of Ignorance," edited by Lukas M.
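The article's claim that machine learning optimizes error rather than eliminating it can be illustrated with a minimal sketch. The score samples below are hypothetical, not drawn from any real facial recognition system: because the match and non-match score distributions overlap, no decision threshold drives both false positives and false negatives to zero; tuning the threshold only redistributes the errors.

```python
# Toy illustration (hypothetical data): a face-matching system assigns each
# comparison a similarity score; "matches" are scores for true matches,
# "non_matches" for different-person comparisons. The distributions overlap.
matches = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4]
non_matches = [0.7, 0.5, 0.45, 0.3, 0.2, 0.1]

def error_rates(threshold):
    """False negative rate (missed matches) and false positive rate."""
    fnr = sum(s < threshold for s in matches) / len(matches)
    fpr = sum(s >= threshold for s in non_matches) / len(non_matches)
    return fnr, fpr

# "Optimizing" error: pick the threshold minimizing a combined cost,
# rather than one that eliminates either error type (none exists here).
cost, threshold, fnr, fpr = min(
    (fnr + fpr, t, fnr, fpr)
    for t in matches + non_matches
    for fnr, fpr in [error_rates(t)]
)

print(f"threshold={threshold}, FNR={fnr:.2f}, FPR={fpr:.2f}, cost={cost:.2f}")
# Every candidate threshold leaves some residual error; the optimum merely
# balances false accepts against false rejects.
```

This is the structural point behind the article's argument: an optimized system can be defended as "as accurate as possible" even while its residual, unevenly distributed errors continue to fall on real people.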
Shanlax International Journal of Arts, Science and Humanities, 2024
Artificial Intelligence (AI) is a multidisciplinary area that integrates elements from several domains; it is sometimes also referred to as deep learning or machine learning. It can simply be described as making machines capable of thinking and acting like humans. The process involves developing specific algorithms to solve complex tasks that are difficult for humans; intelligent behaviour is generated through such algorithms. This paper discusses how AI impacts human activities by compromising individuals' privacy and freedom. Similarly, it discusses AI regulatory mechanisms that have been adopted at national and global levels. Furthermore, it provides key recommendations for regulating the excess interference of AI technology in human affairs.