2021, ArXiv
The 3rd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field's ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Unique to this report is "The Abuse and Misogynoir Playbook," written by Dr. Katlyn Turner (Research Scientist, Space Enabled Research Group, MIT), Dr. Danielle Wood...
ArXiv, 2020
The 2nd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: AI and society, bias and algorithmic justice, disinformation, humans and AI, labor impacts, privacy, risk, and future of AI ethics. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. These experts include: Danit Gal (Tech Advisor, United Nations), Amba Kak (Director of Global Policy and Programs, NYU's AI Now Institute), Rumman Cho...
ArXiv, 2021
The 4th edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since January 2021. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, with a particular focus on four key themes: Ethical AI, Fairness & Justice, Humans & Tech, and Privacy. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Opening the report is a long-form piece by Edward Higgs (Professor of History, University of Essex) titled "AI and the Face: A Historian's View." In it, Higgs examines the unscientif...
ArXiv, 2020
These past few months have been especially challenging, and the deployment of technology in ways hitherto untested at an unrivalled pace has left the internet and technology watchers aghast. Artificial intelligence has become the byword for technological progress and is being used in everything from helping us combat the COVID-19 pandemic to nudging our attention in different directions as we all spend ever larger amounts of time online. It has never been more important that we keep a sharp eye on the development of this field and how it is shaping our society and interactions with each other. With this inaugural edition of the State of AI Ethics we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our goal is to help you navigate this ever-evolving field swiftly and allow you and your organization to make informed decisions. This pulse-check for the state of discourse, research, and dev...
Cornell University - arXiv, 2021
Go Wide: Article Summaries (summarized by Abhishek Gupta)
Ethical AI isn't the same as trustworthy AI, and that matters (Original VentureBeat article by Kimberly Nevala)
Google showed us the danger of letting corporations lead AI research (Original QZ article by Nicolás Rivero)
If not AI ethicists like Timnit Gebru, who will hold Big Tech accountable? (Original Brookings article by Alex Engler)
AI research survey finds machine learning needs a culture change (Original VentureBeat article by Khari Johnson)
2023
Artificial Intelligence (AI) is a rapidly advancing technology that permeates human life at various levels. It evokes hopes for a better, easier, and more exciting life, while also instilling fears about a future without humans. AI has become part of our daily lives, supporting fields such as medicine, customer service, finance, and justice systems; providing entertainment; and driving innovation across diverse fields of knowledge. Some even argue that we have entered the "AI era." However, AI is not solely a matter of technological progress. We already witness its positive and negative impact on individuals and societies. Hence, it is crucial to examine the primary challenges posed by AI, which is the subject of AI ethics. In this paper, I present the key challenges that have emerged in the literature and require ethical reflection. These include the issues of data privacy and security, the problem of AI biases resulting from social, technical, or socio-technical factors, and the challenges associated with using AI to predict human behavior (particularly in the context of the justice system). I also discuss existing approaches to AI ethics within the framework of technological regulations and policymaking, presenting concrete ways in which ethics can be implemented in practice. Drawing on other scientific and technological fields, such as gene editing and the automobile and aviation industries, I highlight lessons from how those fields function and apply them to how AI is being introduced into societies. In the final part of the paper, I analyze two case studies to illustrate the ethical challenges related to recruitment algorithms and risk assessment tools in the criminal justice system. The objective of this work is to contribute to the sustainable development of AI by promoting human-centered, societal, and ethical approaches to its advancement. Such an approach seeks to maximize the benefits derived from AI while simultaneously mitigating its diverse negative consequences.
ResearchGate, 2025
Artificial Intelligence (AI) is transforming modern society, offering significant advancements while raising profound ethical concerns. This paper examines key ethical issues, including algorithmic bias, privacy violations, accountability in autonomous systems, and economic disruptions due to automation. By analysing existing literature, case studies, and regulatory frameworks, we highlight critical risks such as algorithmic discrimination, data exploitation, and the socioeconomic impact of AI-driven job displacement. Furthermore, global regulatory efforts, including the European Union's AI Act, the UK's AI Strategy, and the fragmented policies in the United States, are assessed. The study argues that mitigating AI's ethical risks requires transparent algorithms, interdisciplinary governance approaches, and proactive policy interventions. Future considerations include AI's role in warfare, misinformation, environmental sustainability, healthcare, and human rights.
2022 ACM Conference on Fairness, Accountability, and Transparency
How has recent AI Ethics literature addressed topics such as fairness and justice in the context of continued social and structural power asymmetries? We trace both the historical roots and current landmark work that have been shaping the field and categorize these works under three broad umbrellas: (i) those grounded in Western canonical philosophy, (ii) mathematical and statistical methods, and (iii) those emerging from critical data/algorithm/information studies. We also survey the field and explore emerging trends by examining the rapidly growing body of literature that falls under the broad umbrella of AI Ethics. To that end, we read and annotated peer-reviewed papers published over the past four years in two premier conferences: FAccT and AIES. We organize the literature based on an annotation scheme we developed according to three main dimensions: whether the paper deals with concrete applications, use-cases, and/or people's lived experience; to what extent it addresses harmed, threatened, or otherwise marginalized groups; and if so, whether it explicitly names such groups. We note that although the goals of the majority of FAccT and AIES papers were often commendable, their consideration of the negative impacts of AI on traditionally marginalized groups remained shallow. Taken together, our conceptual analysis and the data from annotated papers indicate that the field would benefit from an increased focus...
AI and Ethics
Bias, unfairness and lack of transparency and accountability in Artificial Intelligence (AI) systems, and the potential for the misuse of predictive models for decision-making have raised concerns about the ethical impact and unintended consequences of new technologies for society across every sector where data-driven innovation is taking place. This paper reviews the landscape of suggested ethical frameworks with a focus on those which go beyond high-level statements of principles and offer practical tools for application of these principles in the production and deployment of systems. This work provides an assessment of these practical frameworks with the lens of known best practices for impact assessment and audit of technology. We review other historical uses of risk assessments and audits and create a typology that allows us to compare current AI ethics tools to Best Practices found in previous methodologies from technology, environment, privacy, finance and engineering. We ana...
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Wiener, 1960; Samuel, 1960). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles (the 'what' of AI ethics: beneficence, non-maleficence, autonomy, justice, and explicability) rather than on practices, the 'how.' Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Therefore, our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers 'apply ethics' at each stage of the pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.
AI & Ethics, 2022
As the awareness of AI's power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics have been released in both the public and private sector in the last several years. However, these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and principles seek to be "operationalized," the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.