Papers by Esther Keymolen

Regulation & Governance
Blockchain is employed as a technology holding a solutionist promise, while at the same time, it is hard for the promissory blockchain applications to become realized. Not only is the blockchain protocol itself not foolproof, but when we move from "blockchain in general" to "blockchain in particular," we see that new governance structures and ways of collaborating need to be developed to make blockchain applications work. The qualities ascribed to (blockchain) technology in abstracto are not to be taken for granted in blockchain applications in concreto. The problem of trust, therefore, does not become redundant simply through the employment of "trustless" blockchain technology. Rather, on different levels, new trust relations have to be constituted. In this article, we argue that blockchain is a productive force, even if it does not solve the problem of trust, and sometimes even when blockchain technology is not implemented after all. The values that underpin this seem...
Policy Design and Practice, Oct 1, 2023

Springer eBooks, 2023
Civil society around the world has called for data-driven companies to take their responsibility seriously and to work on becoming more fair, transparent, accountable and trustworthy, to name just a few of the goals that have been set. Data ethics has been put forward as a promising strategy to make this happen. However, data ethics is a fuzzy concept that can mean different things to different people. This chapter is therefore dedicated to explaining data ethics from different angles. It will first look into data ethics as an academic discipline and illustrate how some of these academic viewpoints trickle down into the debate on data science and AI. Next, it will focus on how data ethics has been put forward as a regulatory strategy by data-driven companies. It will look into the relation between law and ethics, because if data ethics is not properly embedded in this entrepreneurial context, it can be used as an escape from legal regulation. The chapter ends with a reflection on the future relation of data ethics and data science and provides some discussion questions to instigate further debate.
Amsterdam University Press eBooks, Dec 31, 2014
Ethische Perspectieven, 2014
Tilburg Law Review, 2023
In Professor Yeung's insightful and much-needed article 'The New Public Analytics as an Emerging Paradigm in Public Sector Administration', the focus (rightfully so) is on the use of data analytics as a form of computational analysis in the context of public administration. It delves into the question of how the turn to data-driven approaches, in order to inform and even automate public sector decision-making, may bring along dangerous anomalies. This reflection on Yeung's article will not focus directly on the technological aspects of New Public Analytics (NPA), but will instead shift attention to the human side of this new paradigm. In the end, this reflection will still be about technology, but it will take a detour, one I hope is fruitful, by approaching technology through its deep-rooted connection with human life.

AI and Ethics, Jan 9, 2023
While people are increasingly dependent on tech companies to live a flourishing life, numerous incidents reveal that these companies struggle with genuinely taking the interests of customers to heart. Regulators and companies alike acknowledge that this should change and that companies must take responsibility for their impact. If society is to benefit from these innovations, it is paramount that tech companies are trustworthy. However, it is unclear what is required of tech companies to be recognized as trustworthy. This vagueness is risky, as it may lead to ethics washing and an ill-founded sense of security. This raises the question: what should tech companies do to deserve our trust? What would make them trustworthy? This article critically analyzes the philosophical debate on trustworthiness to develop a trustworthiness account for tech companies. It concludes that for tech companies to be trustworthy they need to (1) actively signal their trustworthiness through the design of their applications, (2) nurture techno-moral competences and practical wisdom in tech employees, and (3) go beyond legal compliance.

Data & Policy, 2022
Fiduciary agents and trust-based institutions are increasingly proposed and considered in legal, regulatory, and ethical discourse as an alternative or addition to a control-based model of data management. Instead of leaving it up to the citizen to decide what to do with her data and to ensure that her best interests are met, an independent person or organization will act on her behalf, potentially also taking into account the general interest. By ensuring that these interests are protected, the hope is that citizens' willingness to share data will increase, thereby allowing for more data-driven projects. Thus, trust-based models are presented as a win-win scenario. It is clear, however, that there are also apparent dangers entailed in trust-based approaches. One model in particular, that of data trusts, may have far-reaching consequences. Policy Significance Statement: In an increasingly data-driven society, it is of utmost importance to develop and maintain reliable data management structures. Recently, we have witnessed the arrival of trust-based data management models, such as information fiduciaries, data curators and stewards, and data trusts. It is crucial that policymakers understand the advantages as well as the pitfalls of these models in order to enable a responsible uptake. This article provides researchers and policymakers with in-depth knowledge on the functioning of these trust-based models. In particular, it suggests several principles that should be taken into account when the data trust model, which this article foresees to have the most impact, is implemented. These insights will enable policymakers to make well-considered choices on both the policy and the practical level.

The HLEG has taken on the ambitious project of developing general guidelines for ethical AI and has, as a first step in this process, published a draft document for stakeholders to review and comment upon. In this draft document, the group builds upon, and rightly so, a range of existing frameworks, principles and manifestos. It proposes to centre the guidelines on the concept of Trustworthy AI. The group elaborates this concept in three sections that each address a different level of abstraction: ethical purpose rooted in fundamental rights, technical and non-technical methods, and an assessment list. We would like to congratulate the HLEG on this first step in a complex and multifaceted process and compliment the group on finding a shared basis to further build upon. In particular, we welcome the rights-based approach that the HLEG chose to pursue, as it roots the guidelines in shared values and principles within Europe while at the same time aligning them with many of the existing guidelines. Moreover, we were pleased to see the substantive definition of AI as it is outlined in the document published in parallel with the guidelines and summarized in the draft document. In particular, by distinguishing between AI as a technology and artefact designed and deployed by human beings on the one hand and AI as a scientific discipline on the other, the authors have managed to highlight the extensiveness and heterogeneity of AI. They have also signalled the human agency and work that is involved in making these AI systems function. The focus in the definition on predetermined goals and parameters gives regulators something to work with. The HLEG also brings the discussions on ethical AI a step forward by focusing not only on rights, principles and values, but also on the implementation and embedding of the technology.
The ambition to provide concrete tools and methods for policy makers, developers, and citizens is needed to bring ethical AI into practice and we encourage further work in this direction. As the HLEG has explicitly asked for critical feedback we would like to offer a few suggestions and comments for the further improvement of the document. We will first provide some general comments and then go into more specific comments per section of the guidelines document.
Journal of Human-Technology Relations
In this article, we examine whether ChatGPT is trustworthy and use our conversation with ChatGPT as a pivot for the larger conversation concerning trustworthy AI. Through the example of our conversation with ChatGPT, we argue that the development of trustworthy AI requires keeping the best interests of users at heart. In the process, we emphasize the distinction between trusting ChatGPT and trusting the information provided by it. Lastly, we highlight the role of critical inquiry and acknowledgement of functional limitations in fostering trust in AI systems.

This essay aims to unravel the reason why policy makers (and others as well) persistently believe that Big Data will make the future completely knowable and consequently solve a myriad of societal problems. Based on insights deriving from the philosophy of technology, it will be argued that although human life is always 'under construction', there nevertheless exists a utopian longing for a final ground that contemporary technology should provide us with. This one-sided belief in the power of technology makes people blind to the unforeseen consequences technology may have. Technology, and more specifically Big Data, can only serve as a temporary shelter, which time and time again human beings will have to improve and alter. Moreover, this all-encompassing desire for certainty and safety is not as desirable as it may seem at first sight. After all, a life stripped of its complexity may turn out to be a boring life.

Facial recognition technology is used to recognize faces or facial features on the basis of digital images (for example, a photo or video). The technology has for some time been deployed on a limited scale by governments for investigation and security purposes, but has recently also become available to companies and citizens. Because it is likely that facial recognition applications will become available on a considerable scale to both citizens and companies in the near future, it is necessary to map out whether, and if so which, adjustments to the current legal framework and to other regulatory instruments are needed to protect citizens' privacy. It is important to note that this research focuses exclusively on the use of facial recognition technology in horizontal relations: relations between companies and citizens, and between citizens themselves. The use of facial recognition technology in vertical relations, that is to...
Online personalization techniques make it possible for companies to adapt their interface to the profile of an individual user. Large databases filled with personal data ensure this personalised-information retrieval. Algorithms go through these databases, looking for correlations and patterns. Based on past behaviour of online users, they come up with a prediction about future preferences.
Obviously, online personalization has many advantages. For instance, it can deliver information that a user would not have come up with on their own. This information can in turn enable empowerment and strengthen personal development. Moreover, receiving information already focused on core interests is more efficient than executing random searches.
However, these personalization techniques can also negatively affect users. Eli Pariser coins the term Filter Bubble to refer to this "unique universe of information for each of us" (Pariser 2011: 9). This bubble not only reflects someone's identity, but also pre-sorts the choices one has, and at the same time moulds future actions.
Taking Pariser’s thesis one step further, I will argue that this filter bubble leads to a moral bubble. First, by feeding users a string of information that confirms their initial beliefs and inclinations, there is the risk of losing sight of a multi-layered reality. Without being confronted with other beliefs, it becomes more difficult to comprehend the motives of other persons. Second, there is a problem of opaqueness. Because users do not know the grounds on which the data is filtered, it becomes impossible to reflect upon the presented information. As a consequence, there is barely any room for moral repositioning.
Online personalization can hamper normative reflection. The moral change activated by this upcoming technique can therefore be characterized as establishing moral stagnation.
By way of conclusion, I will explore means to avoid this stagnation. A suggestion could be to replenish the interface with programmed serendipity – portions of information not based on personal preferences – and to allow users access to their profiles and settings.
References
Pariser, E. (2011), The Filter Bubble: What the Internet Is Hiding from You (New York: Penguin Press), 294 p.
Swierstra, T. and Waelbers, K. (2012), 'Designing a Good Life: A Matrix for the Technological Mediation of Morality', Science and Engineering Ethics, 18 (1), 157-72.
Governments, companies, and citizens all think trust is important. Especially today, in the networked era, where we make use of all sorts of e-services and increasingly interact and buy online, trust has become a necessary condition for society to thrive. But what do we mean when we talk about trust, and how does the rise of the internet transform the functioning of trust? This book starts off with a thorough conceptual analysis of trust, drawing on insights from (amongst others) philosophy and sociology to sharpen our understanding of the topic. The book explains how the arrival of large systems, such as the internet, has changed the character of trust, which today is no longer based on interpersonal interactions but has become completely mediated by technologies. Based on the layered building plan of the internet itself, a new conceptual lens called the 4Cs is developed to analyse and understand trust in the networked era. The 4Cs refer to the four layers that all have to be taken into account to assess trust online, namely: context, construction, codification, and curation. The 4Cs bring together the first-hand experiences of the user (context), the sort of technology that is being used (construction), the legal implications (codification), and business interests (curation) in order to get a clear picture of the trust issues that may arise. In the final part of the book, some real-life cases are discussed (digital hotel keys, Airbnb, online personalization) to illustrate how trust, analysed through the 4Cs lens, might flourish or be challenged in our current networked era.
Esther Keymolen (2016), Trust on the Line: A Philosophical Exploration of Trust in the Networked Era. Oisterwijk: Wolf Legal Publishers.
Chapter 7 of my book Trust on the Line: A Philosophical Exploration of Trust in the Networked Era.
This chapter addresses the role of trust on the platform of Airbnb. The aim of this platform is to connect people who want to rent out their house, or a room in their house, with people who are looking for a place to stay while traveling. Airbnb is a prime example of the sharing economy movement. This movement wants to change the economic system from one based on ownership to one based on access. In short, it should no longer be important to own things but to have access to them. By cutting out the middleman, in this case the hotel owner, old forms of trust based on reputation and reciprocity can be reinvented. In other words, trust is solely something that is part of the context level. I will, however, argue that this view is based on a utopian belief that technology can undo the indirectness, the ontological distance, that is at the centre of every interaction. I will show that not only the construction of the platform shapes the building of trust, but that the interests of the curator (Airbnb, a privately owned company) and the codification imposed by Airbnb and several state actors should also be taken into account when analysing trust between users of Airbnb.
In recent years, the public sector has enthusiastically reaped the fruits of digitalization. Now that digital applications have also conquered a firm place in government policy and policy implementation, and their omnipresence continues to grow, the fundamental changes and consequences for citizens, society, and government institutions are becoming apparent.
The roles and positions of governments and citizens turn out to be changing and shifting, and it is moreover becoming clear that these changes are significant (or should be) both for the administrative organization and for the division of responsibilities between government and citizens.