ORBIT Journal
Policymakers struggle to assess the ethical, legal and human rights impacts of IT systems in research, industry, and at home. At the same time, research needs to be useful for industry, academia, and society in order to have an impact on policy. Right now, three European projects (PANELFIT, SHERPA and SIENNA) are working together with stakeholders to improve ethical, human rights and legal frameworks for information and communication technologies (ICT), big data analytics, artificial intelligence (AI) and robotics. Stakeholder involvement is key, and the outputs will support the European Union’s vision of Responsible Research and Innovation (RRI) as a means to foster the design of inclusive research and innovation. Here, we provide a short introduction to the projects and outline plans for collaboration with the aim to maximise our joint policy impact.
Essex Human Rights, Big Data and Technology Project, 2018
Big data and artificial intelligence (AI) greatly affect the enjoyment of all fundamental rights and freedoms enshrined in the Universal Declaration of Human Rights (UDHR). These new technologies offer significant opportunities for the advancement of human rights across many areas of life, including by facilitating more personalised education and assisting people in later life to live a dignified life at home. At the same time, however, the use of big data and AI has the potential to undermine or to violate human rights protections. For example, the use of these technologies can affect a range of sectors and areas of life, such as education, work, social care, health and law enforcement, and can negatively impact groups in positions of vulnerability, such as refugees, asylum-seekers and older persons. The use of big data and AI can also threaten the right to equality, the prohibition of discrimination and the right to privacy. These rights can act as gatekeepers for the enjoyment of other fundamental rights and freedoms, and interferences in this regard may hinder the development of individuals’ identity and agency, potentially undermining the basis of participatory democracy. Inspired by the UDHR, this report recommends that in order to effectively respond to the potential and challenges of big data and AI, states and businesses should apply a human-rights based approach (HRBA) to existing and future applications of these technologies. An HRBA provides a common language to frame harms, offering clear parameters as to what is and is not permitted under international human rights law, both for state and non-state actors. Specific human rights principles such as accessibility, affordability, avoidance of harm, and intellectual freedom can also contribute to addressing issues of marginalisation, discrimination and the digital divide. At the heart of the development and use of big data and AI should be the right to benefit from scientific progress (Article 27 UDHR). 
This can help to ensure that the emergence of new technology serves societal goals.
IEEE Security & Privacy
Emerging combinations of artificial intelligence, big data, and the applications these enable are receiving significant attention concerning privacy and other ethical issues. We need a way to comprehensively understand these issues and find mechanisms of addressing them that involve stakeholders, including civil society, to ensure that these technologies' benefits outweigh their disadvantages.
Heliyon, 2023
Industry is adopting artificial intelligence (AI) at a rapid pace and a growing number of countries have declared national AI strategies. However, several spectacular AI failures have led to ethical concerns about responsibility in AI development and use, which gave rise to the emerging field of responsible AI (RAI). The field of responsible innovation (RI) has a longer history and has evolved toward a framework for the entire research, development, and innovation life cycle. However, this research demonstrates that the uptake of RI by RAI has been slow. RAI has been developing independently, with three times the number of publications of RI. The objective and knowledge contribution of this research was to understand how RAI has been developing independently from RI and to show, in a causal loop diagram, how RI could be leveraged toward the progression of RAI. It is concluded that stakeholder engagement of citizens from diverse cultures across the Global North and South is a policy leverage point for moving RI adoption by RAI toward global best practice. A role-specific recommendation for policy makers is made to deploy modes of engaging with the Global South with more urgency to avoid the risk of harming vulnerable populations. As an additional methodological contribution, this study employs a novel method, systematic science mapping, which combines systematic literature reviews with science mapping. This new method enabled the discovery of an emerging 'axis of adoption' of RI by RAI around the thematic areas of ethics, governance, stakeholder engagement, and sustainability. 828 Scopus articles were mapped for RI and 2489 articles were mapped for RAI. The research presented here is by any measure the largest systematic literature review of both fields to date and the only cross-disciplinary review from a methodological perspective.
Research Ethics
There has been considerable debate around the ethical issues raised by data-driven technologies such as artificial intelligence. Ethical principles for the field have focused on the need to ensure that such technologies are used for good rather than harm, that they enshrine principles of social justice and fairness, that they protect privacy, respect human autonomy and are open to scrutiny. While development of such principles is well advanced, there is as yet little consensus on the mechanisms appropriate for ethical governance in this field. This paper examines the prospects for the university ethics committee to undertake effective review of research conducted on data-driven technologies in the university context. Challenges identified include: the relatively narrow focus of university-based ethical review on the human subjects research process and lack of capacity to anticipate downstream impacts; the difficulties of accommodating the complex interplay of academic and commercial...
2019
This report is an inventory of the state of knowledge of ethical, social, and legal challenges related to artificial intelligence, conducted within the Swedish Vinnova-funded project “Hållbar AI – AI Ethics and Sustainability”, led by Anna Felländer. Based on a review and mapping of reports and studies, a quantitative and bibliometric analysis, and in-depth analyses of the healthcare sector, the telecom sector, and digital platforms, the report proposes three recommendations. Sustainable AI requires: 1. a broad focus on AI governance and regulation issues, 2. promoting multi-disciplinary collaboration, and 3. building trust in AI applications and applied machine learning, which is a matter of key importance and requires further study of the relationship between transparency and accountability.
IEEE Technology and Society Magazine, 2021
Monitoring of Public Opinion: Economic and Social Changes, 2021
Artificial Intelligence (AI) regulatory and other governance mechanisms have only started to emerge and consolidate. Therefore, AI regulation, legislation, frameworks, and guidelines are presently fragmented, isolated, or co-exist in an opaque space between national governments, international bodies, corporations, practitioners, think-tanks, and civil society organisations. This article proposes a research design set up to address this problem by directly collaborating with targeted actors to identify principles for AI that are trustworthy, accountable, safe, fair, and non-discriminatory, and which put human rights and the social good at the centre of their approach. It proposes 21 interlinked substudies, focusing on the ethical judgements, empirical statements, and practical guidelines which manufacture ethicopolitical visions and AI policies across four domains: seven tech corporations, seven governments, and seven civil society actors, together with the analysis of online public debates....
2018
Recent technological advances to augment human intelligence (aka Intelligence Amplification or IA) can potentially allow us to make our cities and citizenry smarter than ever. However, their corruptive and disruptive impact on health suggests the information technology (IT) industry must establish an ethical framework to ensure our future generations get the most from life. To mitigate risks, a number of organizations have introduced various codes of ethics. Despite this positive move, most codes focus on enabling public access to data and professional integrity to the exclusion of all else. While both domains are important, we argue that they do not nurture the kind of intelligences humanity needs to thrive and prosper. To address these blind spots, this paper draws on recent evidence that three human factors (chronobiology, collaboration, creativity) are vital to humanity's future, and that harnessing them will ensure our IT professionals design more life-supporting systems. The 3 "Laws" presented as Legislation and Ethical Guidelines for Intelligence Technologies (LEGIT) aim to stimulate critical debate on the subject and nudge the sector to take practical and meaningful action.
2019
Smart Information Systems (SIS), which are a combination of big data analytics and Artificial Intelligence (AI), constitute an integral part of our lives. From Google search, Amazon’s Alexa, surgery robots, digital libraries, location-based devices, affective computing, and human-machine symbiosis, almost everybody in high-income regions is affected by SIS on a daily basis. Meanwhile, human rights and ethics discussions about SIS are taking place while the technologies are already omnipresent. The UK House of Lords, UNESCO, the European Commission and the Pope are only a few examples of those working on the human rights and ethics aspects of SIS.
International Journal of Science and Research Archive, 2024
In an era where artificial intelligence (AI) increasingly intersects with every facet of human life, the imperative for ethical AI has never been more pronounced. This paper delves into the complex interplay between technological advancements in AI and the overarching human values that guide societal norms. The background of the study establishes the urgency of addressing ethical challenges inherent in AI, such as privacy, bias, and accountability, within the broader context of regulatory and policy frameworks. Aiming to critically evaluate the integration and effectiveness of ethical principles in AI applications, the paper navigates through a qualitative analysis, employing theoretical frameworks to dissect the ethical dimensions of AI. The scope encompasses a diverse range of topics, including global trends in ethical AI development, the impact of AI on human rights and personal freedoms, and the analysis of bias and fairness in AI algorithms. Real-world case studies provide insights into the successes and failures of ethical AI implementation, while the role of public perception and trust in AI adoption is scrutinized. The main conclusions reveal a dynamic global landscape of ethical AI, emphasizing the need for robust ethical frameworks and proactive strategies to mitigate biases and ensure equitable outcomes. Recommendations advocate for clear ethical guidelines, integration of ethics in AI development, transparency, accountability, multi-stakeholder collaboration, public engagement, and continuous ethical evaluation. The study concludes that balancing technological innovation with ethical constraints is crucial for the responsible development of AI. It underscores the importance of ethical vigilance, ensuring AI aligns with societal values and individual rights.
Heliyon, 2022
In European Yearbook on Human Rights 2020 by Philip Czech, Lisa Heschl, Karin Lukas, Manfred Nowak and Gerd Oberleitner (eds.), 2020
Nature Machine Intelligence, 2021
Expert Systems, 2023
Communications of the ACM, 2017
International Journal of Information Management, 2022
Journal of Artificial Intelligence Research
AoIR Selected Papers of Internet Research, 2020
International Law Research, 2024
AI and Ethics, 2023
Frontiers in Computer Science
International Scientific Journal for Research, 2023
An Introduction to Ethics in Robotics and AI