The original version of this article unfortunately contained a mistake. At the end of the reference list, on the last page, the added data has been removed. In footnote 11, we have enclosed the reference (O'Shea 2018) in parentheses. The original article has been corrected.
There has been an increased focus within the AI ethics literature on questions of power, reflected in the ideal of accountability supported by many Responsible AI guidelines. While this recent debate points towards the power asymmetry between those who shape AI systems and those affected by them, the literature lacks normative grounding and conceptual clarity on how these power dynamics take shape. In this paper, I develop a workable conceptualization of said power dynamics according to Cristiano Castelfranchi's conceptual framework of power and argue that end-users depend on a system's developers and users, because end-users rely on these systems to satisfy their goals, constituting a power asymmetry between developers, users and end-users. I ground my analysis in the neo-republican moral wrong of domination, drawing attention to legitimacy concerns of the power-dependence relation following from the current lack of accountability mechanisms. I illustrate my claims on the ba...
Zenodo (CERN European Organization for Nuclear Research), Jul 13, 2021
The SIENNA project (Stakeholder-informed ethics for new technologies with high socioeconomic and human rights impact) has received funding from the European Union's H2020 research and innovation programme under grant agreement No 741716.
New technologies are the source of uncertainties about the applicability of moral and morally connotated concepts. These uncertainties sometimes call for conceptual engineering, but it is not often recognized when this is the case. We take this to be a missed opportunity, as recognizing that different researchers are working on the same kind of project can help solve methodological questions that one is likely to encounter. In this paper, we present three case studies where philosophers of technology implicitly engage in conceptual engineering (without naming it as such). We subsequently reflect on the case studies to find out how they illustrate conceptual engineering as an appropriate method to deal with pressing concerns in the philosophy of technology. We have two main goals. We first want to contribute to the literature on conceptual engineering by presenting concrete examples of conceptual engineering in the philosophy of technology. This is especially relevant, because the...
This SIENNA deliverable offers a broad ethical analysis of artificial intelligence (AI) and robotics technologies. Its primary aims have been to comprehensively identify and analyse the present and potential future ethical issues in relation to: (1) the AI and robotics subfields, techniques, approaches and methods; (2) their physical technological products and procedures that are designed for practical applications; and (3) the particular uses and applications of these products and procedures. In conducting the ethical analysis, we strove to provide ample clarification, details about nuances, and contextualisation of the ethical issues that were identified, while avoiding making moral judgments or proposing solutions to these issues. A secondary aim of this report has been to convey the results of SIENNA's "country studies" of the national academic and popular media debate on the ethical issues in AI and robotics in twelve different EU and non-EU countries, ...