International Journal of Law and Information Technology
The European Union’s General Data Protection Regulation requires organizations to perform a Data Protection Impact Assessment (DPIA) to consider the fundamental rights risks of their artificial intelligence (AI) systems. However, assessing these risks can be challenging, as fundamental rights are often considered abstract in nature. So far, guidance regarding DPIAs has largely focussed on data protection, leaving broader fundamental rights aspects less elaborated. This is problematic because potential negative societal consequences of AI systems may remain unaddressed and damage public trust in organizations using AI. To address this, we introduce a practical, four-phased framework assisting organizations with performing fundamental rights impact assessments. This involves organizations (i) defining the system’s purposes and tasks, and the responsibilities of the parties involved in the AI system; (ii) assessing the risks regarding the system’s development; (iii) justifying why the risks of potential i…
Data intermediaries serve as a mediator between those who wish to make their data available and those who seek to leverage that data. The intermediary works to govern the data in specific ways, and provides some degree of confidence regarding how the data will be used. This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review. Definition: A data intermediary serves as a mediator between those who wish to make their data available and those who seek to leverage that data. The intermediary works to govern the data in specific ways, and provides some degree of confidence regarding how the data will be used. Data intermediaries form part of a data processing ecosystem. This includes the intermediary, often an organisation (of some form), as well as two other key categories of stakeholder: data suppliers, i.e. those individuals, communities, or enterprises that make their data available, and third parties, i.e. those interested in using (processing) supplier data. Context and description: The concept has emerged in the context of 'big data', and the increasing interest in data analytics and machine learning (Hardjono & Pentland, 2019; Stalla-Bourdillon et al., 2020; Micheli et al., 2021). Deep concerns, however, exist regarding opaque data practices, surveillance practices, and the systemic power and information asymmetries inherent to current data processing ecosystems (Edelman, 2018), where organisations, rather than the people to whom the data pertains, reap the value and benefit of data and its processing (Zuboff, 2015; Beer, 2017; Kitchin, 2017). Data intermediaries respond by offering an alternative approach to data processing, attempting to help rebalance the relationships between those producing or holding rights over data and those seeking to use that data.
The data intermediary is a nascent, still-emerging concept, with the terminology still in flux. An intermediary's role, operation, and the actions it will undertake, as well as its governance and incentive structures, are highly context-sensitive. That is, how data intermediaries form and operate largely depends on their purposes, the nature of the suppliers and third parties they engage with, the intermediary's relationships with those suppliers and third parties, the data used, the means used to operate the intermediary (and whether these require technical expertise), and so forth (see Terminologies below). 1. Note this is the terminology that we use; in this space, the terminology tends to vary.
Personal Information Management Systems (PIMS) seek to empower users by equipping them with mechanisms for mediating, monitoring and controlling how their data is accessed, used, or shared. This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.
Personal Data Stores ('PDSs') entail users having a (physical or virtual) device within which they themselves can, in theory, capture, aggregate, and control the access to and the transfer of their personal data. Their aim is to empower users in relation to their personal data, strengthening their opportunities for data protection and privacy, and/or to facilitate trade and monetisation. As PDS technologies develop, it is important to consider their role in relation to issues of data protection. The General Data Protection Regulation requires that the processing of user data be predicated on one of its defined lawful bases, and the Regulation does not favour any one basis over another. We explore how PDS architectures relate to these lawful bases, and observe that they tend to favour the bases that require direct user involvement. This paper considers issues that the envisaged architectural choices surrounding the lawful grounds may entail.
Driven by the promise of increased efficiencies and cost savings, the public sector has shown much interest in Automated Decision-Making (ADM) technologies. But the rule of law, human rights, and principles of good government, fundamental in the public sector, are given insufficient priority. This article argues that public sector ADM must recentre these considerations. While attention has been paid to the technology itself, greater focus should be on public-sector oversight, responsibility, and the broader contexts and implications of ADM's deployment and use. We highlight some of the transparency issues that prevent effective investigation of public-sector ADM and indicate the relevant legal frameworks and their limitations. We explore ways forward, from both regulatory and sociotechnical systems perspectives, highlighting the need for mechanisms that facilitate reviewability, to enable better governance and oversight of the adoption and use of ADM for public administration. Effectively managing the potential risks by prioritising public sector values would give the sector confidence to leverage new technologies while maintaining public trust.
Personal information management systems (PIMS), also known as personal data stores (PDSs), represent an emerging class of technology that seeks to empower individuals regarding their data. Presented as an alternative to current 'centralised' data processing approaches, whereby user data is (rather opaquely) collected and processed by organisations, PDSs provide users with technical mechanisms for aggregating and managing their own data, determining when and with whom their data is shared, and the computation that may occur over that data. Though arguments for decentralisation may be appealing, there are questions regarding the extent to which PDSs actually address data processing concerns. This paper explores these questions from the perspective of PDS users. Specifically, we focus on data protection, including how PDSs relate to rights and the legal bases for processing, as well as how PDSs affect the information asymmetries and surveillance practices inherent online. We show that, despite the purported benefits of PDSs, many of the systemic issues of online/data ecosystems remain.
Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of personal data, and repealing Directive 95/46/EC (General Data Protection Regulation) (2016) OJ L119/1. Art 4(1) GDPR defines personal data. Heleen Janssen et al., Decentralized data processing
Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers - UbiComp '18, 2018
Data protection regulations generally afford individuals certain rights over their personal data, including the rights to access, rectify, and delete the data held on them. Exercising such rights naturally requires those with data management obligations (service providers) to be able to match an individual with their data. However, many mobile apps collect personal data without requiring user registration or collecting details of a user's identity (email address, name, phone number, and so forth). As a result, a user's ability to exercise their rights will be hindered without means for an individual to link themselves with this 'nameless' data. Current approaches often involve those seeking to exercise their legal rights having to give the app's provider more personal information, or even to register for a service, both of which seem contrary to the spirit of data protection law. This paper explores these concerns, and indicates simple means for facilitating data subject rights through both application and mobile platform (OS) design.
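The abstract stops short of the design details, but the general idea of letting a user later claim 'nameless' data can be sketched as a per-install random token scheme. The following is a minimal, hypothetical illustration (the function names and the hash-indexing design are assumptions, not the paper's actual proposal): the app keeps a random token on the device, while the provider indexes collected records only under a digest of that token.

```python
import hashlib
import secrets

def issue_rights_token() -> str:
    # Hypothetical: generated once at install time and stored only on the
    # user's device; no identity attributes are collected.
    return secrets.token_hex(16)

def provider_key(token: str) -> str:
    # The provider indexes 'nameless' records under this digest rather
    # than under any real-world identifier.
    return hashlib.sha256(token.encode()).hexdigest()

# To exercise a data subject right, the user presents the raw token;
# the provider recomputes the digest to locate the matching records,
# without the user having to hand over additional personal information.
token = issue_rights_token()
record_index = provider_key(token)
```

A design along these lines would rely on OS support for keeping the token safe and portable (e.g. across reinstalls), which is where the paper's point about mobile platform design comes in.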
Though discussions of data protection have focused on larger, more established organisations, startups also warrant attention. This is particularly so for tech startups, who are often innovating at the 'cutting edge', pushing the boundaries of technologies that typically lack established data protection best practices. Initial decisions taken by startups could well have long-term impacts, and their actions may inform (for better or for worse) how particular technologies and the applications they support are implemented, deployed, and perceived for years to come. Ensuring that the innovations and practices of tech startups are sound, appropriate, and acceptable should therefore be a high priority. This paper explores the attitudes and preparedness of tech startups regarding issues of data protection. We interviewed a series of UK-based emerging tech startups as the EU's General Data Protection Regulation (GDPR) came into effect, which revealed areas in which there is a disconnect between the approaches of the startups and the nature and requirements of the GDPR. We discuss the misconceptions and associated risks facing innovative tech startups and offer a number of considerations for the firms and supervisory authorities alike. In light of our discussions, and given what is at stake, we argue that more needs to be done to help ensure that emerging technologies and the practices of the companies that operate them better align with regulatory obligations. We conclude that tech startups warrant increased attention.
Companies and other private institutions see great and promising profits in the use of automated decision-making ('ADM') for commercial, financial, or work-efficiency purposes. Meanwhile, ADM based on a data subject's personal data may (severely) impact their fundamental rights and freedoms. The General Data Protection Regulation (GDPR) provides a regulatory framework that applies whenever a controller considers and deploys ADM onto individuals on the basis of their personal data. In the design stage of the intended ADM, Article 35(3)(a) obliges a controller to carry out a Data Protection Impact Assessment (DPIA), part of which is an assessment of the ADM's impact on individual rights and freedoms. Article 22 GDPR determines under what conditions ADM is allowed and endows data subjects with increased protection. Research among companies of various sizes has shown that there is (legal) uncertainty about the interpretation of the GDPR (including the provisions relevant to ADM). The author's first objective is to identify ways forward by offering practical handles for executing a DPIA that includes a sliding-scale assessment of impacts on data subjects' fundamental rights. This assessment is based on four benchmarks that should help assess the gravity of potential impacts: (i) determining the impact on the fundamental right(s) at stake; (ii) establishing the context in which the ADM is used; (iii) establishing who benefits from the use of personal data in the ADM; and (iv) establishing who is in control over the data flows in the ADM. From these benchmarks, an overall fundamental rights impact assessment of the ADM should arise. A second objective is to indicate potential factors and measures that a controller should consider in its risk management after the assessment.
The proposed approach should help foster fair, compliant, and trustworthy ADM, and contains directions for future research.
Emerging technologies permeate and potentially disrupt a wide spectrum of our social, economic, and political relations. Various state institutions, including education, law enforcement, and healthcare, increasingly rely on technical components, such as automated decision-making systems, e-government systems, and other digital tools, to provide cheap, efficient public services, and supposedly fair, transparent, disinterested, and accountable public administration. The increased interest in various blockchain-based solutions, from central bank digital currencies, via tokenized educational credentials and distributed ledger-based land registries, to self-sovereign identities, is the latest, still mostly unwritten chapter in a long history of standardized, objectified, automated, technocratic, and technologized public administration. The rapid, (often) unplanned, and uncontrolled technologization of public services (as happened in the hasty adoption of distance-learning and teleconferenci…
In the present article, the authors provide a general overview of the academic and legal debate on the regulation of access to and use of genetic information by non-medical actors. Their aim is to give some insight into the academic views on the need to introduce specific genetics legislation, and on the balance that might be struck between the various interests concerned. Furthermore, by analyzing relevant legislation and policy measures in the US and in Europe, they identify the issues that are deemed relevant in considering and, eventually, introducing regulative measures with respect to genetic information.
Key Points: Companies expect great and promising benefits from automated decision-making with personal data; however, scientific research indicates that legal uncertainty exists among private controllers regarding the interpretation of provisions relevant to automated decision-making under the General Data Protection Regulation (GDPR). Article 35 GDPR obliges private controllers to execute a Data Protection Impact Assessment (DPIA) prior to deploying automated decisions on humans. Assessing potential fundamental rights impacts is part of that DPIA. The objective of this article is to provide private controllers with a practical approach to a DPIA of automated decision-making, to detect potential impacts on fundamental rights. The approach indicates levels of impact and the types of measures a controller should consider to achieve appropriate risk management. The impact assessment is based on four benchmarks: (i) identifying fundamental rights potentially at risk; (ii) identifying risks occurring in ADM systems at the design stage and during operation; (iii) balancing fundamental rights risks against the controller interests involved; and (iv) establishing to what extent data subjects exercise control over the data processing. By responding to the benchmarks, controllers identify risk levels that indicate the types of measures that should be considered to achieve fundamental rights-compliant ADM. This approach enables controllers to give account to data subjects and supervisory authorities about the envisaged risk management of potential impacts on fundamental rights. The proposed approach seeks to foster compliant, fair, and transparent automated decision-making.
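The four-benchmark assessment described above can be sketched as a simple scoring structure. This is purely illustrative: the numeric scale (0 = low concern, 2 = high concern), the thresholds, and the class and field names are assumptions of this sketch, not prescribed by the article, which deliberately leaves the weighing to the controller's judgement.

```python
from dataclasses import dataclass

@dataclass
class DpiaBenchmarks:
    # Illustrative 0-2 concern scores for each of the four benchmarks.
    rights_at_risk: int        # (i) fundamental rights potentially at risk
    design_and_operation: int  # (ii) risks at design stage and in operation
    balance_of_interests: int  # (iii) rights risks vs. controller interests
    data_subject_control: int  # (iv) lack of data subject control (higher = less control)

    def risk_level(self) -> str:
        # Assumed aggregation: a plain sum with fixed cut-offs; a real DPIA
        # would weigh benchmarks qualitatively and document the reasoning.
        total = (self.rights_at_risk + self.design_and_operation
                 + self.balance_of_interests + self.data_subject_control)
        if total >= 6:
            return "high"    # e.g. strong safeguards, possibly prior consultation
        if total >= 3:
            return "medium"  # targeted mitigation measures
        return "low"         # document and monitor

assessment = DpiaBenchmarks(rights_at_risk=2, design_and_operation=2,
                            balance_of_interests=1, data_subject_control=2)
print(assessment.risk_level())  # "high" under these assumed thresholds
```

The point of such an encoding is accountability: each benchmark response, and the resulting risk level, can be recorded and shown to data subjects and supervisory authorities alongside the chosen measures.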
The technological infrastructures enabling the collection, processing, and trading of data have fuelled a rapid innovation of data governance models. We differentiate between macro-, meso-, and micro-level models, which correspond to major political blocs; societal-, industry-, or community-level systems; and individual approaches, respectively. We focus on meso-level models, which coalesce around: (1) organisations prioritising their own interests over the interests of other stakeholders; (2) organisations offering technological and legal tools aiming to empower individuals; and (3) community-based data intermediaries fostering collective rights and interests. In this article we assess these meso-level models, and discuss their interaction with the macro-level legal frameworks that have evolved in the US, the EU, and China. The legal landscape has largely remained inconsistent and fragmented, with enforcement struggling to keep up with the latest developments. We argue, first, that the success of meso-logics is largely defined by global economic competition, and, second, that these meso-logics may potentially put the EU's macro-level framework, with its mixed internal market and fundamental rights-oriented model, under pressure. We conclude that, given the relative absence of a strong macro-level framework and an intensive competition of governance models at the meso level, it may be challenging to avoid compromises to the European macro framework. This paper is part of Governing "European values" inside data flows, a special issue of…
International Journal of Law and Information Technology
The European Union’s General Data Protection Regulation tasks organizations to perform a Data Pro... more The European Union’s General Data Protection Regulation tasks organizations to perform a Data Protection Impact Assessment (DPIA) to consider fundamental rights risks of their artificial intelligence (AI) system. However, assessing risks can be challenging, as fundamental rights are often considered abstract in nature. So far, guidance regarding DPIAs has largely focussed on data protection, leaving broader fundamental rights aspects less elaborated. This is problematic because potential negative societal consequences of AI systems may remain unaddressed and damage public trust in organizations using AI. Towards this, we introduce a practical, four-Phased framework, assisting organizations with performing fundamental rights impact assessments. This involves organizations (i) defining the system’s purposes and tasks, and the responsibilities of parties involved in the AI system; (ii) assessing the risks regarding the system’s development; (iii) justifying why the risks of potential i...
Data protection regulations generally afford individuals certain rights over their personal data,... more Data protection regulations generally afford individuals certain rights over their personal data, including the rights to access, rectify, and delete the data held on them. Exercising such rights naturally requires those with data management obligations (service providers) to be able to match an individual with their data. However, many mobile apps collect personal data, without requiring user registration or collecting details of a user's identity (email address, names, phone number, and so forth). As a result, a user's ability to exercise their rights will be hindered without means for an individual to link themselves with this 'nameless' data. Current approaches often involve those seeking to exercise their legal rights having to give the app's provider more personal information, or even to register for a service; both of which seem contrary to the spirit of data protection law. This paper explores these concerns, and indicates simple means for facilitating data subject rights through both application and mobile platform (OS) design.
Data intermediaries serve as a mediator between those who wish to make their data available, and ... more Data intermediaries serve as a mediator between those who wish to make their data available, and those who seek to leverage that data. The intermediary works to govern the data in specific ways, and provides some degree of confidence regarding how the data will be used. Issue 1 This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review. Definition A data intermediary serves as a mediator between those who wish to make their data available, and those who seek to leverage that data. The intermediary works to govern the data in specific ways, and provides some degree of confidence regarding how the data will be used. Data intermediaries form part of a data processing ecosystem. This includes the intermediary, often an organisation (of some form), as well as two other key categories of stakeholder: 1 data suppliers who are those individuals, communities, or enterprises that make their data available, and third parties referring to those interested in using (processing) supplier data. Context and description The concept has emerged in the context of 'big data' , and the increasing interest in data analytics and machine learning (Hardjono & Pentland, 2019; Stalla-Bourdillon et al., 2020; Micheli et al., 2021). Deep concerns however exist regarding opaque data practices, surveillance practices, and the systemic power and information asymmetries inherent to the current data processing ecosystems (Edelman, 2018), where organisations reap the value and benefit of data and its processing, rather than the people to whom the data pertains (Zuboff, 2015; Beer, 2017; Kitchin, 2017). Data intermediaries respond by attempting to help rebalance the relationships between those producing or with rights over data, and those seeking to use that data by offering an alternative approach to the data processing. 
The data intermediary is a nascent, yet emerging concept, with the terminology still in flux. An intermediary's role, operation and the actions it will undertake, as well as its governance and incentive structures are very context sensitive. That is, how data intermediaries form and operate, largely depends on their purposes, the nature of suppliers and third parties they engage with, the intermediary's relationships with the suppliers and third parties involved, the data used, the means used to operate the intermediary (and whether these require a technical expertise), and so forth (see Terminologies below). 1. Note this is the terminology that we use; in this space, the terminology tends to vary.
Personal Information Management Systems (PIMS) seek to empower users by equipping them with mecha... more Personal Information Management Systems (PIMS) seek to empower users by equipping them with mechanisms for mediating, monitoring and controlling how their data is accessed, used, or shared. Issue 2 This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.
Personal Data Stores ('PDSs') entail users having a (physical or virtual) device within which the... more Personal Data Stores ('PDSs') entail users having a (physical or virtual) device within which they themselves can, in theory, capture, aggregate, and control the access to and the transfer of personal data. Their aim is to empower users in relation to their personal data, strengthening their opportunities for data protection, privacy, and/or to facilitate trade and monetisation. As PDS technologies develop, it is important to consider their role in relation to issues of data protection. The General Data Protection Regulation requires that the processing of user data be predicated on one of its defined lawful bases, whereby the Regulation does not favour any one basis over another. We explore how PDS architectures relate to these lawful bases, and observe that they tend to favour the bases that require direct user involvement. This paper considers issues that the envisaged architectural choices surrounding the lawful grounds may entail.
Driven by the promise of increased efficiencies and cost-savings, the public sector has shown muc... more Driven by the promise of increased efficiencies and cost-savings, the public sector has shown much interest in Automated Decision-Making (ADM) technologies. But the rule of law, human rights, and principles of good government – fundamental in the public sector – are given insufficient priority. This article argues that public sector ADM must recentre these considerations. While attention has been paid to the technology itself, greater focus should be on public-sector oversight, responsibility, and the broader contexts and implications of ADM’s deployment and use. We highlight some of the transparency issues that prevent effective investigation of public-sector ADM and indicate the relevant legal frameworks and their limitations. We explore ways forward, from both a regulatory and sociotechnical systems perspectives, highlighting the need for mechanisms that facilitate reviewability, to enable better governance and oversight of the adoption and use of ADM for public administration. Effectively managing the potential risks by prioritising public sector values would give confidence to the sector to leverage new technologies while maintaining public trust.
Personal information management systems (PIMS) aka personal data stores (PDSs) represent an emerg... more Personal information management systems (PIMS) aka personal data stores (PDSs) represent an emerging class of technology that seeks to empower individuals regarding their data. Presented as an alternative to current ' centralised' data processing approaches, whereby user data is (rather opaquely) collected and processed by organisations, PDSs provide users with technical mechanisms for aggregating and managing their own data, determining when and with whom their data is shared, and the computation that may occur over that data. Though arguments for decentralisation may be appealing, there are questions regarding the extent to which PDSs actually address data processing concerns. This paper explores these questions from the perspective of PDS users. Specifically, we focus on data protection, including how PDSs relate to rights and the legal bases for processing, as well as how PDSs affect the information asymmetries and surveillance practices inherent online. We show that, despite the purported benefits of PDSs, many of the systemic issues of online/data ecosystems remain.
on the protection of natural persons with regard to the processing of personal data and on the fr... more on the protection of natural persons with regard to the processing of personal data and on the free movement of personal data, and repealing Directive 95/46/EC (General Data Protection Regulation) (2016) OJ L119/1. Art 4(1) GDPR defines personal data. Heleen Janssen et al. Á Decentralized data processing
Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers - UbiComp '18, 2018
Data protection regulations generally afford individuals certain rights over their personal data,... more Data protection regulations generally afford individuals certain rights over their personal data, including the rights to access, rectify, and delete the data held on them. Exercising such rights naturally requires those with data management obligations (service providers) to be able to match an individual with their data. However, many mobile apps collect personal data, without requiring user registration or collecting details of a user's identity (email address, names, phone number, and so forth). As a result, a user's ability to exercise their rights will be hindered without means for an individual to link themselves with this 'nameless' data. Current approaches often involve those seeking to exercise their legal rights having to give the app's provider more personal information, or even to register for a service; both of which seem contrary to the spirit of data protection law. This paper explores these concerns, and indicates simple means for facilitating data subject rights through both application and mobile platform (OS) design.
Though discussions of data protection have focused on the larger, more established organisations,... more Though discussions of data protection have focused on the larger, more established organisations, startups also warrant attention. This is particularly so for tech startups, who are often innovating at the 'cuttingedge'-pushing the boundaries of technologies that typically lack established data protection bestpractices. Initial decisions taken by startups could well have long-term impacts, and their actions may inform (for better or for worse) how particular technologies and the applications they support are implemented, deployed, and perceived for years to come. Ensuring that the innovations and practices of tech startups are sound, appropriate and acceptable should therefore be a high priority. This paper explores the attitudes and preparedness of tech startups to issues of data protection. We interviewed a series of UK-based emerging tech startups as the EU's General Data Protection Regulation (GDPR) came into effect, which revealed areas in which there is a disconnect between the approaches of the startups and the nature and requirements of the GDPR. We discuss the misconceptions and associated risks facing innovative tech startups and offer a number of considerations for the firms and supervisory authorities alike. In light of our discussions, and given what is at stake, we argue that more needs to be done to help ensure that emerging technologies and the practices of the companies that operate them better align with the regulatory obligations. We conclude that tech startups warrant increased attention, This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Companies and other private institutions see great and promising profits in the use of automated decision-making ('ADM') for commercial, financial, or work-efficiency purposes. Meanwhile, ADM based on a data subject's personal data may (severely) impact their fundamental rights and freedoms. The General Data Protection Regulation (GDPR) provides a regulatory framework that applies whenever a controller considers and deploys ADM onto individuals on the basis of their personal data. In the design stage of the intended ADM, Article 35(3)(a) obliges a controller to carry out a Data Protection Impact Assessment (DPIA), part of which is an assessment of the ADM's impact on individual rights and freedoms. Article 22 GDPR determines under what conditions ADM is allowed and endows data subjects with increased protection. Research among companies of various sizes has shown that there is (legal) uncertainty about the interpretation of the GDPR (including the provisions relevant to ADM). The author's first objective is to identify ways forward by offering practical handles for executing a DPIA that includes a slidable assessment of impacts on data subjects' fundamental rights. This assessment is based on four benchmarks that should help assess the gravity of potential impacts: (i) determining the impact on the fundamental right(s) at stake; (ii) establishing the context in which the ADM is used; (iii) establishing who benefits from the use of personal data in the ADM; and (iv) establishing who is in control of the data flows in the ADM. From these benchmarks, an overall fundamental rights impact assessment of the ADM should arise. A second objective is to indicate potential factors and measures that a controller should consider in its risk management after the assessment.
The proposed approach should help foster fair, compliant, and trustworthy ADM, and contains directions for future research.
Emerging technologies permeate and potentially disrupt a wide spectrum of our social, economic, and political relations. Various state institutions, including education, law enforcement, and healthcare, increasingly rely on technical components, such as automated decision-making systems, e-government systems, and other digital tools, to provide cheap, efficient public services and supposedly fair, transparent, disinterested, and accountable public administration. The increased interest in various blockchain-based solutions, from central bank digital currencies, via tokenized educational credentials and distributed ledger-based land registries, to self-sovereign identities, is the latest, still mostly unwritten chapter in a long history of standardized, objectified, automated, technocratic, and technologized public administration. The rapid, (often) unplanned, and uncontrolled technologization of public services (as happened in the hasty adoption of distance-learning and teleconferencing…
In the present article, the authors provide a general overview of the academic and legal debate on the regulation of access to and use of genetic information by non-medical actors. Their aim is to give some insight into academic views on the need to introduce specific genetics legislation and on the balance that might be struck between the various interests concerned. Furthermore, by analyzing relevant legislation and policy measures in the US and in Europe, they identify the issues that are deemed relevant in considering and, eventually, introducing regulatory measures with respect to genetic information.
Key Points: Companies expect great and promising benefits from automated decision-making with personal data; however, scientific research indicates that legal uncertainty exists among private controllers over the interpretation of provisions relevant to automated decision-making under the General Data Protection Regulation (GDPR). Article 35 GDPR obliges private controllers to execute a Data Protection Impact Assessment (DPIA) prior to deploying automated decisions on humans. Assessing potential fundamental rights impacts is part of that DPIA. The objective of this article is to provide private controllers with a practical approach to a DPIA for automated decision-making that detects potential impacts on fundamental rights. The approach indicates levels of impacts and the types of measures a controller should consider to achieve appropriate risk management. The impact assessment is based on four benchmarks: (i) identifying fundamental rights potentially at risk; (ii) identifying risks occurring in ADM systems at the design stage and during operation; (iii) balancing fundamental rights risks against the controller interests involved; and (iv) establishing to what extent data subjects exercise control over data processing. By responding to the benchmarks, controllers identify risk levels that indicate the type of measures that should be considered to achieve fundamental rights-compliant ADM. This approach enables controllers to give account to data subjects and supervisory authorities of the envisaged risk management of potential impacts on fundamental rights. The proposed approach seeks to foster compliant, fair, and transparent automated decision-making.
The technological infrastructures enabling the collection, processing, and trading of data have fuelled rapid innovation in data governance models. We differentiate between macro-, meso-, and micro-level models, which correspond to major political blocks; societal-, industry-, or community-level systems; and individual approaches, respectively. We focus on meso-level models, which coalesce around: (1) organisations prioritising their own interests over the interests of other stakeholders; (2) organisations offering technological and legal tools aiming to empower individuals; and (3) community-based data intermediaries fostering collective rights and interests. In this article we assess these meso-level models and discuss their interaction with the macro-level legal frameworks that have evolved in the US, the EU, and China. The legal landscape has largely remained inconsistent and fragmented, with enforcement struggling to keep up with the latest developments. We argue, first, that the success of meso-logics is largely defined by global economic competition, and, second, that these meso-logics may put the EU's macro-level framework, with its mixed internal market and fundamental rights-oriented model, under pressure. We conclude that, given the relative absence of a strong macro-level framework and intensive competition among governance models at the meso level, it may be challenging to avoid compromises to the European macro framework. This paper is part of Governing "European values" inside data flows, a special issue of
Papers by Heleen Janssen