ALGORITHMIC COLONIZATION

AUTOMATING LOVE AND TRUST IN THE AGE OF BIG DATA

HAO WANG
Cover art & lay-out by Hao Wang

PhD dissertation, Amsterdam, 2022.


© Hao Wang 2022.
Paranymphs: Gerrit Schaafsma & Marijn Sax

All rights reserved. Save exceptions stated by law, no part of this publication may be
reproduced, stored in a retrieval system of any nature, or transmitted in any form or by any means,
electronic, mechanical, photocopying, recording or otherwise, including a complete or partial
transcription, without the prior written permission of the author, application for which should be
addressed to the author.
Algorithmic Colonization
Automating Love and Trust in the Age of Big Data

ACADEMIC DISSERTATION (ACADEMISCH PROEFSCHRIFT)

to obtain the degree of doctor

at the University of Amsterdam
on the authority of the Rector Magnificus
prof. dr. ir. P.P.C.C. Verbeek
before a committee appointed by the Doctorate Board,
to be defended in public in the Aula of the University
on Wednesday, 21 December 2022, at 11:00

by Hao Wang

born in Hunan
Doctoral committee

Supervisor:      prof. dr. B. Roessler, Universiteit van Amsterdam

Co-supervisors:  prof. dr. R. Celikates, Freie Universität Berlin
                 dr. D. Loick, Universiteit van Amsterdam

Other members:   prof. dr. H.O. Dijstelbloem, Universiteit van Amsterdam
                 dr. T.R.V. Nys, Universiteit van Amsterdam
                 dr. E. Groen-Reijman, Universiteit van Amsterdam
                 prof. dr. H.Y.M. Jansen, Vrije Universiteit Amsterdam
                 prof. dr. T. Matzner, Universität Paderborn

Faculty of Humanities


The research for and publication of this doctoral thesis received financial assistance from the China Scholarship Council (CSC Grant #201606360144).
“May you always have open breezy spaces in your mind.”1

For Lexi
Voor Lexi
致乐兮

1. A quote from Sanober Khan <https://twitter.com/sanoberrie/status/536544398322827264>

Table of Contents

Introduction
    Do Algorithms Make Love More Objectified?
    Algorithmic Colonization in the Big Data Era
    Three Dimensions of Algorithmic Colonization
        The Colonized Lifeworld
        The Colonial Power of Algorithms
        The Colonized Subject
    Outline of the Thesis

Part I: Theoretical Background: Colonization Thesis and Surveillance Studies

Chapter 1: Colonization of the Lifeworld: A Critical Theory Approach
    1.1 Habermas’s Colonization Thesis
    1.2 The Colonization Thesis as an Immanent Critique
    1.3 The Colonization Thesis as a Critique of Technology
    1.4 Incorporating Discipline Theory into the Colonization Thesis
    1.5 Conclusion

Chapter 2: Disciplinary Power in the Algorithmic Society
    2.1 The Disciplinary Judgment
    2.2 Panopticism: A Traditional Mode of Disciplinary Surveillance
    2.3 Three Arguments Against Disciplinary Surveillance
        2.3.1 The Argument of Fluid Institutions
        2.3.2 The Argument of ‘Data-double’
        2.3.3 The Argument of Risk-based Control
    2.4 Defending Disciplinary Surveillance: Three Responses
        2.4.1 Disciplinary Power Beyond Enclosed Institutions
        2.4.2 Beyond the Data Double: The Participatory-disciplinary Subject
        2.4.3 Discipline Through the Algorithmic Imaginary
    2.5 Towards Algorithmic Discipline in the Big Data Era
    2.6 Conclusion

Part II: Case Studies: Algorithmic Love and Trust

Chapter 3: The Algorithmic Colonization of Love Life
    3.1 Love Mediated by ‘Smart’ Algorithms
    3.2 The Objectification of Algorithmic Love
    3.3 Exploitation and Manipulation of Love Relations
    3.4 Prescribed Identity in a Filter Bubble
    3.5 Rethinking Autonomy in the Lifeworld
        3.5.1 Relational Autonomy
        3.5.2 A Dialogical Account of Autonomy
    3.6 Retaining Dialogical Autonomy in Algorithmic Love
    3.7 Conclusion

Chapter 4: The Algorithmic Colonization of Trust
    4.1 Rethinking China’s Social Credit System: Engineering Trust
        4.1.1 What is the Real Social Credit System?
        4.1.2 Beyond Orwellian Surveillance
    4.2 Automating Trust in the Big Data Age
    4.3 Algorithmic Trust as Reliance: A Culture of Objectification
    4.4 Algorithmic Trust as Social Compliance
    4.5 Algorithmic Trust as an Ideology
    4.6 Thinking of Other Possibilities: The Value of Distrust
    4.7 Conclusion

5 Conclusion: Towards an Open and Flexible Algorithmic Society
    5.1 A General Principle in Designing Algorithmic Systems
    5.2 Objectification: Revisiting Algorithmic Love and Trust
    5.3 Three Proposals
        5.3.1 Freedom to be Off
        5.3.2 Designing Flexibility
        5.3.3 Transparency and Non-manipulation
    5.4 Conclusion

References

Summary

Samenvatting

Author Contributions

List of Publications and Grant(s)

Acknowledgments


Introduction2

“The quest for certainty blocks the search for meaning.”


Erich Fromm, Man for Himself (2006, 45)

“Machine processes replace human relationships so that certainty can replace trust.”
Shoshana Zuboff, The Age of Surveillance Capitalism (2019, 351)

Do Algorithms Make Love More Objectified?

Jessica LaShawn was a flight attendant in the United States. She went on a date with someone who
was seemingly perfect, as he “was tall, from a religious family, raised by his grandparents just as
she was, worked in finance and even had great teeth” (Silver-Greenberg 2012).3 Even if this was
just the first date, she could not help dreaming of marrying him, wearing a shining diamond ring
and living happily for the rest of her life. But suddenly, the man asked her: “What’s your credit
score?” LaShawn was stunned: “It was as if the music stopped… It was really awkward because
he kept telling me that I was the perfect girl for him, but that a low credit score was his deal-
breaker” (ibid.). A few days after the date, the man apologized in a text message, repeatedly
explaining that the problem was not LaShawn: “it was (her) credit score” (ibid., emphasis added).
LaShawn’s story may be confusing for many of us. It goes against our intuitions, asking us
to associate credit scores with love and not just one’s ability to borrow money. However, her story
is not that uncommon at all. Credit scores have become increasingly popular in determining
romantic relations.4 In 2017, the personal finance website Bankrate.com conducted a survey of
1,000 adults, asking how credit scores could affect romantic relations. It found that approximately
“42 percent of Americans indicate that knowing someone’s credit score could be a deciding factor
when dating”5 (Frankel 2017). For example, Lauren Dollard worked as an assistant in Houston.
Her credit score was low, leading her boyfriend to refuse to marry her until she could boost her
credit score (Silver-Greenberg 2012). Another example was found in a famous credit forum
(myFICO Forums)6, where one member posted a thread asking what she should do after learning
that her partner had a poor credit score. Most of the members’ suggestions were very simple: “Run
away from him as soon as possible!”

2. This chapter is based on my three articles: Algorithmic Colonization: A New Critical Approach in Understanding the Algorithmic Society (to be submitted); Wang, H. (2022). Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency. Philosophy & Technology 35(69): 1-25. https://doi.org/10.1007/s13347-022-00564-w; Wang, H. (2022). Algorithmic Colonization of Love: The Ethical Challenges of Dating App Algorithms in the Age of AI. Techné: Research in Philosophy and Technology. [Accepted]
3. Retrieved from <https://www.ndtv.com/business/even-cupid-wants-to-know-your-credit-score-315242>

In response to this trend, some new dating platforms have started to prioritize credit scores as the
foremost factor in love relations. For instance, a particular dating application called
CreditScoreDating presents itself as a place ‘where good credit is sexy’, and attempts to match
subscribers based on their credit scores.7 Generally, a score below 660 is often seen as a ‘red flag’,
while scores above 800 are considered exceptional.8 Here is a guide posted on the website to assist
with credit score dating:9

800-850 is “MARRIAGE POTENTIAL DING DING DING”
750-800 is “take him/her home to Mom”
700-750 is a “fixer-upper”
650-700 is “fun for a night out, maybe, but bring cash”
600-650 is “keep lookin’!”
anything below 600 is “RUN because they won’t even get a car loan, probably, and how embarrassing will that be at the PTA meetings?”
200 is “this person is just pulling your leg and is really royalty”

4. In the U.S., credit scores are closely identified with FICO Scores, since the FICO system has largely dominated the entire credit scoring industry for decades. In this section, the cited studies on ‘credit scores’ can be seen essentially as research on the FICO system.
5. See <https://www.bankrate.com/personal-finance/credit/money-pulse-0517/>. A year before, that number was nearly 40 percent.
6. See <https://ficoforums.myfico.com/t5/Relationships-and-Money/How-to-Cope-W-Financially-Damaged-Partner/td-p/6409647>
7. See the headline of the website for CreditScoreDating.com, at <https://www.creditscoredating.com/>.
8. See <https://ficoscore.com/education/>.
9. See the ‘About Us’ section at CreditScoreDating.com, at <https://creditscoredating.com/about>.

Usually, a credit score is not something that the general public is concerned with, as the
issue of credit is commonly restricted to financial and economic situations. But all these stories
show that credit evaluation has come to be used to judge people’s social conduct – even in the
most intimate of relations, love, an area that was previously considered improper for credit scoring.
More importantly, those stories share a common feature: “Poor credit, poor mate. It’s just that
simple!”10 In other words, credit scores are not only important in deciding whether an intimate
relationship is worth pursuing, but they are in many cases the number one factor to consider. And
if romantic relations are decided by credit scores, it seems that there is no room for explanation
and conversation. In LaShawn’s story, the man does not even try to ask for more details about the
reasons behind her low score.
So we may ask: why do love relations mediated by credit scores become so objectified?
Objectification in this case means the process of reducing daters to objects whose value is not
based on their personality, capacity, and inner qualities, but is instead only decided by a credit
score. People are valued for their numerical scores rather than their qualities as fully-fledged
individuals. A quick explanation of such objectification may be monetary. Admittedly, money is
often cited as crucial for people’s love lives. Many empirical studies have shown that “money is
the leading cause of discord within romantic relationships” (Huntsberger 2021).11 It is money,
rather than “children, sex, in-laws or anything”, that “is by far the top predictor of divorce”
(Jacques 2013).12 If money is so important, it is reasonable for people to consider credit scores
seriously, and even rank them as the deciding factor, since a credit score is commonly seen as a way
to measure one’s financial situation. However, this issue is far more complex than it is commonly
imagined.

10. This comment was made by a reader when asked how credit scores affect romance. See <https://ficoforums.myfico.com/t5/Relationships-and-Money/How-to-Cope-W-Financially-Damaged-Partner/td-p/6409647>
11. In that survey, 35 percent of couples fought most often about finances. See <https://www.opploans.com/blog/bad-credit-dating-dealbreaker/>.
12. See <https://www.k-state.edu/media/newsreleases/jul13/predictingdivorce71113.html>

Finances aside, I would argue that credit scores themselves, driven by algorithms, can
potentially perpetuate and shape objectified behavior in romantic interactions. A credit score looks
simple – it is usually a three-digit number, often ranging from 300 to 850, but the inner workings
of determining a credit score are driven by an automatic algorithm which is composed of a set of
complex codes and mathematical formulae. These scoring systems are designed in a way that is
scientific-looking, and non-negotiable. If a person has a low credit score, they will be
automatically judged by the system (e.g. by a bank) and assigned perks or penalties accordingly,
without further interaction or negotiation. When people use such credit scores to decide their
love relations, people with low credit scores will be immediately disadvantaged in the dating
market, and the negative effects may last almost forever (or at least until their credit scores
improve). In this sense, such algorithmic systems can structure our romantic interactions by their
own priorities of efficiency, de-contextualization, and non-negotiation. In so doing, an algorithm-
driven scoring system can restrain one’s open-mindedness and make people less communicative
about mistakes and risks.
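
To make this non-negotiable logic concrete, here is a minimal sketch of the kind of automated threshold judgment described above. The cutoffs and labels are illustrative assumptions only; they do not reproduce any real lender’s or platform’s rules.

```python
# A hypothetical sketch of an automated threshold judgment.
# Cutoffs and labels are illustrative assumptions, not any real system's rules.

def automated_judgment(credit_score: int) -> str:
    """Map a three-digit score directly to a verdict, with no dialogue."""
    if credit_score >= 800:
        return "approve: best terms"
    elif credit_score >= 660:
        return "approve: standard terms"
    else:
        # No context is consulted: a medical debt, a clerical error, and
        # reckless spending all collapse into the same penalty.
        return "reject"

print(automated_judgment(645))  # -> "reject", with no room for explanation
```

Whatever its inputs, such a system returns a verdict rather than an invitation to converse, which is precisely the dynamic at work when scores migrate into dating.
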
This study will not focus solely on credit algorithms, but on the social expansion of all
forms of automatic algorithms. The expansion of credit scoring systems into intimate relations
reflects an even larger trend. As Danielle Citron and Frank Pasquale (2014, 1) point out, the credit
scoring system is a typical example of what they call the ‘Scored Society’. They show that there
is a ‘scoring trend’ in Big Data society, where predictive algorithms are used ‘to rank and rate
individuals’ in countless areas of life (Citron & Pasquale 2014, 1). Various scoring algorithms
have been developed to rank which candidates are the perfect fit for a job position (Black & van
Esch 2020; Kim 2020), how likely individuals are to commit crimes (McKay 2020; Schwerzmann
2021), and how likely potential daters are to be successfully matched (Tuffley 2021). These
scoring systems are often combined with a series of automatic punishments. If Uber drivers have
relatively low scores, for example, they can be immediately punished by the algorithm-driven
platform, which can restrict the number of passengers they are able to drive, or even eventually
remove the drivers from the platform altogether (Chan 2019; O’Connor 2021; Muldoon &
Raekstad 2022).
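
A minimal sketch of such graduated, automatic enforcement might look as follows; the rating thresholds and penalty tiers are invented for illustration and are not any platform’s actual policy.

```python
# A hypothetical sketch of graduated automatic enforcement on a gig platform.
# Thresholds and penalties are invented; no real platform's rules are implied.

def enforce(driver_rating: float) -> str:
    if driver_rating >= 4.6:
        return "full access to ride requests"
    elif driver_rating >= 4.3:
        return "fewer ride requests dispatched"  # a soft, silent penalty
    else:
        return "account deactivated"             # removal from the platform

# The penalty applies the moment the score crosses a line: no hearing,
# no appeal, no conversation.
print(enforce(4.2))  # -> "account deactivated"
```
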

In light of these considerations, my thesis points to a broader and more profound
phenomenon: in the Big Data society, algorithms in general are gradually expanding
their boundaries into the domain of our social interactions, turning our interactive relations
into an automatic enforcement of punishments and incentives that leaves no room for the social
processes of promise, dialogue, or shared meaning. I describe this ongoing
phenomenon as algorithmic colonization, a concept that will be explained in more detail below.

Algorithmic Colonization in the Big Data Era

The new age of Big Data has begun (Yeung 2017; Boyd and Crawford 2012). Big Data refers not
only to the large quantities of data available “to be processed” (Matzner 2016, 199); it is more
“about a capacity to search, aggregate, and cross-reference large data sets” (Boyd and Crawford
2012, 663). This means that Big Data technology can sift, sort and analyze massive data sets from
different disciplines very quickly, and easily identify patterns and correlations through a machine
learning process (Cohen 2012; Matzner 2016; Yeung 2017). These patterns and correlations will
be distilled into predictive analytics, which allows interested parties to gain novel perspectives and
insights that are not possible with traditional methods of analysis based on smaller amounts of data.
Big Data is widely considered to “offer a higher form of intelligence and knowledge that can
generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy”
(Boyd & Crawford 2012, 663).
Due to Big Data analytics, our social and daily lives are exposed to algorithmic power, as
part of the loop of algorithmic decisions. In computer science, an algorithm basically follows the
‘if… then… else’ logic (Bucher 2018): “If a happens, then do b; if not, then do c” (Smith 2018).13
An algorithm can thus roughly be viewed as “an ordered set of steps followed in order to solve a
particular problem or to accomplish a defined outcome” (Diakopoulos 2018, 2). With the rise of
Big Data, however, the meaning of the concept of an algorithm has expanded beyond computation
in scientific research, and has become associated with the complex social decision-making
processes used by various automated machines (Matzner 2022; O’Neil 2016; Završnik 2021;
Erasmus et al. 2021; Baum et al. 2022). Nowadays, automated algorithms are silently making
crucial decisions about our lives, but most of the time we have little understanding of how they work
(Citron & Pasquale 2014; Broussard 2020; Calo & Citron 2021). We need to ask: what is the risk
of letting algorithms determine our lives?

13. See <https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger>
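
As a minimal illustration of the ‘if… then… else’ logic quoted above (the rule and values here are invented purely for illustration):

```python
# "If a happens, then do b; if not, then do c" (Smith 2018), as running code.
# The rule below is invented purely for illustration.

def decide(user_clicked_ad: bool) -> str:
    if user_clicked_ad:               # if a happens...
        return "show similar ads"     # ...then do b
    else:                             # if not...
        return "try a different ad"   # ...then do c

print(decide(True))   # show similar ads
print(decide(False))  # try a different ad
```
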
Over the course of the last decade, the power of algorithms has been critically examined
across a wide scholarly literature, with explorations of how algorithms can result in a range of
insidious effects on our society (for example, Beer 2017; Mau 2018; Susskind 2018; Cheney-
Lippold 2017; Bucher 2018; Zuboff 2019). Many studies have expressed concerns that black-
boxed algorithms can hide the problems of inaccuracy, discrimination, and unfairness in the
decision-making process (Citron & Pasquale 2014; Pasquale 2015; Zarsky 2016; Binns 2016;
Eubanks 2018; Franke 2022). Other writers worry that some algorithmic
recommendation systems can ‘hypernudge’ and manipulate behavior in both commercial and
political fields, which may undermine people’s autonomy and the capacity for democratic
participation (Yeung 2017; Susser, Roessler & Nissenbaum 2019; Zuboff 2019; Sax 2021). Some
scholars also examine the political polarization caused by algorithm-driven personalized targeting
(Tufekci 2014). Algorithmic sorting and classification have also been criticized as potentially
exacerbating existing discrimination and social inequalities (Lyon 2006; Crawford 2021). Rooted
in new developments in Marxist thought, some scholars have also examined the exploitation of
labor on algorithm-powered digital platforms (Scholz 2013; Fuchs 2017).
All of these critiques are relevant and warranted, but it seems that they may overlook an
ethical evaluation of the power exercised when algorithms are used to judge and shape our most
emotional and interactive experiences. My thesis refers to this particular power as a ‘colonial’ one. The term
‘colonial’ here is not used in the literal sense of ‘colonial masters coming into a tribal society’, but
is rather used in the Habermasian sense, describing the general process of the system continually
encroaching on the lifeworld (Habermas 1987, 355). Algorithms are profoundly and deeply
shaping our social interactions, and some essential elements of the lifeworld, such as love,
autonomy, and trust, are now becoming largely mediated by algorithms. In a society where
algorithms are almost everywhere, people may tend to take the situation for granted, and let
algorithms and artificial intelligence (AI) keep shaping our social communications and our
intimate relations. However, we should take the impact of widespread algorithms more seriously.
As I will argue in the course of this study, algorithmic colonization can have a deep influence on
the self and on our social relations, and contribute to a culture of objectification in the Big Data
society.
Before developing my own theory of algorithmic colonization, I would like to examine
some already-existing ideas and frameworks that are relevant to critically understanding the
‘colonial’ power of algorithms. Despite not explicitly naming it as such, many scholars have
recognized some parts of the phenomenon of algorithmic colonization in their critical studies on
data and algorithms (for example, Kane 2014; Heyman & Pierson 2015; Gilbert 2018; Mau 2019;
Zuboff 2019). Steffen Mau, in his Metric Society (2019), does describe the phenomenon in
Habermasian terms: “The spread of the numerical medium is also driving forward ‘the colonization
of the lifeworld’ by instrumental concepts of predictability, measurability and efficiency” (2019,
6). For Mau, the term ‘colonization’ may not necessarily be negative, but more of a sociological
description, in which colonization is identified with his idea of ‘the quantification of the social
sphere’ (2019, 2). In Chromatic Algorithms (2014), Carolyn Kane does not directly use the word
‘colonization’, but she does use the phrase ‘the algorithmic lifeworld’ to describe how
computerized algorithms colonize our subjective and qualitative experience of color, whereby the
color is transformed “from a qualitative phenomenon to a code, formula, quantum, or mathematical
equation” (Kane 2014, 36).
Some recent work on critical data has explicitly called for a new branch of critical study
on Big Data analytics, using the term ‘data colonialism’ (Kotliar 2020; Thatcher et al. 2016;
Couldry & Mejias 2019; Calzati 2021; Ferreira et al. 2022). Data colonialism compares the
relentless quantification of society to “the predatory extractive practices of historical colonialism”
(Couldry & Mejias 2019, 337). Such a metaphorical comparison is not without controversy. After
all, it appears that data extraction is not wholly dependent on deception and coercion, but often
occurs somewhat by ‘contract’ between users and companies. Couldry and Mejias do admit that
their use of ‘data colonialism’ is very different from historical colonialism. The process of
datafication can be seen fundamentally as a kind of appropriation and extraction of resources and
raw material, but the resources are not historical colonialism’s land, labor, or bodies (Couldry &
Mejias 2019, 338). Instead, what is extracted is everyday human life and social relations, via a
process of transformation and assimilation into sets of data relations (Couldry & Mejias 2019, 338).
Some previously non-quantified experiences, such as dating relations, have now been quantified,
creating an endless supply of data for capitalist exploitation. In this light, data colonialism refers
to the process whereby “capitalism colonizes previously noncommodified, private times and
places” (Thatcher et al. 2016, 994).
Translating human life into commodified data relations is not new, but rather a fact that
has been critically examined by many scholars. For example, Shoshana Zuboff (2015) has heavily
criticized the process of transforming personal life into behavioral data and commercial products.
For her, the technology of Big Data is foundational to surveillance-based capitalism. Surveillance
capitalists maximize their profits via the continual monitoring of the real-time flow of people’s
daily lives. As such, there is a drive toward more and more data extraction and analysis in the age
of surveillance capitalism (2015; see also 2019). For Zuboff, personal data is not a commodity that
can be traded arbitrarily, since it is essentially constituted by human experience (Zuboff 2019).
Similarly, Beate Roessler also argues that these data “were supposed to belong to and stay in the
sphere of social relations” and should not be commercialized, since they are necessary for an
individual to develop personhood and social relations with others (Roessler 2015, 149).
What makes data colonialism different may be that it seems to highlight the persistent
exploitative power in datafication. The term ‘colonialism’ is connected to cruel practices of
domination and exploitation in human history. As Couldry and Mejias emphasize, ‘data
colonialism’ is not used to describe the physical violence and force that was often seen in historical
colonialism, but at the same time, the frame of colonialism is not used “as a mere metaphor”
either.14 Instead, the term is used “to refer [to] a new form of colonialism distinctive of the twenty-first
century”, which involves a widespread dispossession and exploitation of our human lives (Couldry
& Mejias 2019, 337).15 By using the term ‘data colonialism’ instead of ‘data capitalism’ or some
similar formulation, the crueler side of “today’s exposure of daily life to capitalist forces of
datafication” is highlighted (ibid.). As Thatcher et al. suggest, “data colonialism has the advantage
of highlighting the power asymmetries inherent in contemporary forms of data commodification”
(2016, 992).

14. A similar point is given by Abeba Birhane (2020) in her article “Algorithmic Colonization of Africa”: “In the age of algorithms, this control and domination occurs not through brute physical force but rather through invisible and nuanced mechanisms such as control of digital ecosystems and infrastructure. Common to both traditional and algorithmic colonialism is the desire to dominate, monitor, and influence social, political, and cultural discourse through the control of core communication and infrastructure mediums” (391).
15. As Couldry and Mejias argue, the new data colonialism is not operated by only one pole (‘the West’ in historical colonialism), “but at least two: the United States and China” (2019, 337). Such data colonialism works not only globally, but also “on its own home populations” (ibid.). Think of TikTok, which has expanded its data-colonial power to almost everywhere in the world.

Let me now point out three main limitations of the theoretical framework of data
colonialism. First, data colonialism mainly focuses on the process of datafication, describing how
everyday human life is quantified. However, datafication is only part of the whole story. Equally
important is how such data is fed into the algorithms that are used to determine the crucial facets
of an individual’s social life. For example, data colonialism may be concerned with the calculation
of one’s financial creditworthiness, where some credit bureaus are collecting individuals’ social
data directly from social media posts. But it is equally problematic when such social data are
calculated by algorithms to govern and manipulate social behavior, as is done with China’s Social
Credit System. That means that the colonization phenomenon of Big Data should be seen as
twofold: it is not only about the overwhelming collection of personal data from all parts of our
daily lives, but also about how data-based algorithms are determining our social lives and actively
displacing our social interactions.16 For data colonialism, the focus is largely on how social lives
are transformed into data for capitalist profits; it offers few analyses of behavior
modification, which is more closely tied to algorithmic decision-making.17
The second limitation is that the concept of data colonialism assumes a passive and
reductionist mode of analysis with respect to digital subjects, which is insufficient to describe the
complexity of a colonized subject. As Couldry and Mejias suggest, data colonialism results in the
proliferation of ‘colonized subjects’, whereby a human being is transformed into a kind of
exploitable ‘data double’ (Couldry & Mejias 2019, 343). The term data double is “a concept that
somewhat retains the passivity of data subjects” (Calzati 2021, 923). This term implies that actual
individuals become less relevant as subjects in the digital society than the individuals’
representations – that is, the knowledge produced from the data analysis of individuals (Galic et
al. 2018). Subjects are thus defined and reduced to passivity, simply waiting to be evaluated and
sorted by institutions. This over-emphasis on the data double has displaced the significant
participatory experience of actual individuals. Surveillance capitalists are now trying to modify
people’s behaviors to reap the greatest benefit, but such behavior modification often depends on
people’s active participation, motivated by mechanisms like gamification to “make them dance”
(Zuboff 2019, 293). Thus, a colonized subject is more complex than just a data double, and to
understand such a subject, we need to recognize the nuanced tension between the asymmetry of
power and the participatory subjects.

16. Admittedly, for data colonialism, the process of datafication in general seems to include both data collection and algorithmic influence. But the present theory mainly focuses on how our social lives have been quantified through algorithms, and not that much on how these data-driven algorithms influence our behavior and social relations.
17. Firms mine the massive data sets about human life not only to improve their services, advertisement relevance, or predictive capacity, but also to identify and modify individuals’ behavior (Kitchin 2014; Zuboff 2019).

Lastly, data colonialism is understood only as a precondition for capitalism, which fails to
recognize a deeper logic in the nature of our communications. Couldry and Mejias describe in
detail how the quantification of society occurs, but they regard the translation of
human life into data only as a way for capitalist forces to operate, driven by profit imperatives.18
They do not probe deeper, to discover how such social quantification is more rooted in the
rationalization of our lifeworld. As Calzati points out, the datafication of social life is driven by
“the purpose of objectification and mastering” (2021, 924). We translate human experience into
data sets not only because it is easier for companies to make profits, but also because data is
“considered as the best language through which to make sense of the world” (ibid.). In this sense,
social quantification also means that “agents [are] perpetrating a rationalist worldview that makes
ultimately mastering and calculability possible” (2021, 925). So, to unpack data colonialism, we
need to understand how such a rationalist worldview is developed, and why algorithmic rationality
can restrain a subject’s social imagination of other possibilities in the first place.
Unlike data colonialism, my thesis will mainly draw on a reformulation of Jürgen
Habermas’s colonization thesis, supported by the critical theory of technology and surveillance
studies, to develop a critical framework of algorithmic colonization in the Big Data era. This
framework will not only analyze the quantification of society, but also explore how algorithms
shape and normalize human social interactions. It will examine the complex meanings of a
colonized subject, rather than a passive and reductionist one. Instead of only focusing on capitalist
imperatives, the framework of algorithmic colonization will capture the inherent logic of
algorithmic imperatives and how such imperatives are gradually crowding out the communicative
actions that are necessary for our cultural, social, and personal development.

18. Such appropriation exposes people under the control of capital. So, for Couldry and Mejias, “data colonialism will provide the preconditions for a new stage of capitalism”, just as historical colonialism “provided the essential preconditions for the emergence of industrial capitalism” (2019, 337).

Of course, some assumptions underlying Habermas’s modernization theory – the idea that
all societies follow a trajectory from ‘primitive’ to ‘modern’ and that the lifeworld is assumed not
to be influenced by power – are rather contested. His historical narrative of the lifeworld can not
only be problematically linked to a sort of ‘West-centrism’ (Rosenberg 2006), but also understates
the negative influence of patriarchal power in the lifeworld (Fraser 1985; Loick 2014). Fortunately,
the ideas I put forward in this study do not depend on Habermas’s historical and romanticizing
assumptions about the lifeworld. Instead, I follow a more minimalist social-theoretical as well as
normative understanding of the lifeworld, so as to critically analyze the tension between the
lifeworld and algorithmic systems. This understanding of the lifeworld is sufficient, because it
presupposes neither a developmentalist historical narrative nor a naïve conception of that
lifeworld as a sphere beyond power. Instead, the lifeworld constitutes a normative potential for
communicative actions, and an open exploration of possibilities.
To make this clearer, understanding the lifeworld as an open exploration of possibilities
does not mean that it is a sort of ‘empty space’ only for self-making. Instead, the normative
concept of the lifeworld also implies social justice, equality, and other justified social values. This
idea is twofold.
First, the open exploration of possibilities as a kind of communicative action is not random,
but structured and guided by the normative requirements of ‘communicative action’. The term
‘communicative action’ is normative, because it does not simply mean that actions need linguistic
interactions, but that these interactions aspire to be based on mutual understanding among all
participants, and each participant’s beliefs are developed on their own in an uncoerced and non-
manipulated manner (Habermas 1987, 86, 213). In this light, the lifeworld, which provides
resources as well as a shared background for communicative actions, is also normative in itself,
requiring not only non-manipulation but also justice, freedom, and other social values to make
communicative actions possible.
Second, the lifeworld as open exploration is also an ongoing transformative process to
produce and reproduce justice and other social values. The lifeworld not only provides resources
for communicative actions to take place, but is itself reproduced and updated through
communicative actions (Habermas 1987, 119). The lifeworld provides a supportive space for
humans to develop their capacity for communicative actions, so that they can maintain open and
respectful interactions and explorations with others. Through such open explorations as
communicative actions, social justice, equality, freedom, and other social values can be realized
and reproduced in our society through dialogues, debates, negotiations, and some necessary forms
of resistance.
Hence, the lifeworld is normative not only because it constitutes normative values like
social justice that support communicative actions and open exploration, but also because it
produces and reproduces social justice and other social values through communicative exploration.
For the remainder of my thesis, I will often use ‘open exploration of possibilities’ or ‘open
exploration’ as a matter of convenience, but the point is that such an exploration is based on social
values, and the ‘possibilities’ are not arbitrary but structured in a normative way.
In Habermasian terms, algorithms act as processes that are part of the system that encroach
into, and thereby transform, our lifeworld according to their own priorities of efficiency,
automation, and de-contextualization. The thesis of algorithmic colonization emphasizes that the
lifeworld reproduces itself only through communicative actions, and this process of participatory
interaction should not be distorted or crowded out by commercial or technical imperatives. In this
light, people should be allowed to develop their personalities, social relations, and shared cultural
meanings through free and open interactions with others. When the lifeworld is colonized, the
communicative infrastructure will be distorted and even replaced by instrumental rules, which may
collapse the symbolic reproduction of the lifeworld in cultural, social, and personal domains. With
this framework in mind, I will now describe three dimensions of algorithmic colonization.

Three Dimensions of Algorithmic Colonization

My central argument is that while algorithmic systems can be beneficial by making our lives more
efficient and convenient, we should also be cautious about the risk of allowing algorithms to shape
and disrupt our communicative relations, which are in need of open-minded, respectful and
explorative interactions. Such systems may structure our flowing and meaningful interactions by
their own priorities of automation and commodification. This over-expansion would restrain
people’s open exploration of possibilities, causing the colonization of the lifeworld in three realms.

The Colonized Lifeworld

Algorithms have the power to construct a new reality – an algorithmic lifeworld. In this dimension,
algorithmic colonization can mean that predictive algorithms gradually transform our
quintessentially contextual and qualitative experience into a codified process that decontextualizes
our interactive experience. In this sense, algorithmic colonization is used not to analyze a specific
technology, but rather a bigger picture regarding how the datafication and quantification of society
creates a new cultural world.
Datafication is not digitization – digitization is a process of representing information in
digital formats that can be read and managed by a computer. For instance, the process of book
digitization is to scan physical books in order to create digital versions of them. In contrast,
datafication is more than a digitizing process. As Mayer-Schönberger & Cukier (2013, 78) explain,
“to datafy a phenomenon is to put it in quantified form so that it can be tabulated and analyzed”.
That means that datafication is more about making digital text “indexable and thus searchable”
(Mayer-Schönberger & Cukier 2013, 84). Such a process will make humans’ everyday life
“susceptible to being processed via forms of analysis that could be automated on a large-scale”
(Mejias & Couldry 2019, 2). For Mejias and Couldry, datafication means “the wider
transformation of human life so that its elements can be a continual source of data” (ibid.).
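
The contrast between digitization and datafication can be sketched in a few lines of code; the record fields below are invented for illustration.

```python
# A toy contrast between digitization and datafication, following
# Mayer-Schönberger & Cukier's distinction above. All fields are invented.

# Digitization: information becomes bytes a computer can store and display.
scanned_page = bytes([0x89, 0x50, 0x4E, 0x47])  # e.g., the start of a scanned image file

# Datafication: a life event becomes a structured, indexable record that
# can be tabulated, aggregated, and fed into predictive analytics.
date_record = {
    "user_id": 4021,
    "event": "first_date",
    "partner_credit_score": 645,
    "swipe_direction": "right",
    "duration_minutes": 95,
}

# Once datafied, the experience is queryable at scale:
flagged = [r for r in [date_record] if r["partner_credit_score"] < 660]
print(len(flagged))  # -> 1
```
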
Such datafication or quantification creates a new environment or lifeworld, where people
gradually see and experience the world not through a direct connection but mediated via data and
algorithms. In this sense, we are gradually inhabiting an algorithmic lifeworld, where automated
algorithms frame “what is (ontology) and how we can know what is (epistemology)” (Wilf 2015).19
For example, in Chromatic Algorithms, Kane gives the example of infrared digital technologies,
which rely on capturing and translating an entirely invisible heat radiation into ‘heat maps’ (Kane
2014, 218). People cannot see and experience such color in a direct and continuous way, but only
through a kind of ‘algorithmic perception’ that can only be understood via a digital transcoding
process (19).
Such an algorithmic lifeworld is a simulation of reality that is based on data reduction and
decontextualization, in which our quintessentially contextual and qualitative experience is
transformed into a mechanized process. In José van Dijck’s words, our human connectedness to
the world and other people has been reduced to automated ‘connectivity’ engineered by formal
and manipulable algorithms (van Dijck 2013, 12). On Facebook, for instance, people send their
birthday wishes to their friends not through their own remembering but only when prompted by
automatically sent birthday notifications on Facebook’s News Feed (Heyman & Pierson 2015).
Some may argue that this function of birthday notifications on Facebook’s News Feed is similar
to when we write down our friends’ birthdays in diaries, since they are both used to ‘remember’
friends’ birthdays. Yet there is still a subtle difference: the former is an entirely automatic process
driven by Facebook’s algorithms, which does not require users to actually ‘remember’, while the
latter needs people to write names and dates down, the action of which is itself a way of active
remembering.

19. See <https://www.publicbooks.org/what-world-whose-algorithms/>

Given such an algorithmic lifeworld, algorithms transform “the communicative
infrastructure of the lifeworld” (Habermas 1987, 375). Our cultural world “becomes mediated by
an ongoing interaction between us and the computational process” (Gilbert 2018, 93). It is
reproduced not through communications, but by an algorithmic mediation that is controlled and
shaped by corporations and governments. It is only the data and algorithms that construct who we
are and what we should be doing. We connect to others not through interactive social norms, but
automatic codes and mathematical rules. In this way, communicative rationality is being
increasingly eroded and even replaced by technological or algorithmic rationality. Our social
interactions will be crowded out or reduced to only a technical exchange. The mechanical process
oriented towards efficiency, convenience and standardization will take the place of interactive and
qualitative experience.
This algorithmic lifeworld may disallow meaningful interactions. As with the case of
automating love, this study will investigate how users’ interactive experience of love may be
algorithmically transformed to become technological in nature. The design of Tinder’s swipe
algorithm, for instance, tends to treat people as objects rather than as persons who are deserving
of interaction. Admittedly, such an algorithm can make dating more playful, relaxing and
experimental. But skepticism is still warranted, in that the process of matching people is similar to
browsing a shopping catalog, where potential daters are treated as a product and considered
comparatively against others (Haywood 2018, 145-146). This loss of meaning can also be found
in another case, that of China’s Social Credit System. For instance, some pilot projects of the
System have implemented policies to incorporate “voluntary blood donation into the social credit
system” (Toledo 2019).20 The existence of these policies has the potential to lead people to donate
their blood voluntarily, not for the purpose of helping patients who need blood, but rather as a way
of improving their own Social Credit Scores. Such an algorithmic scoring system encourages a
less meaningful culture, where people are only concerned with how to bolster their own scores, without
concern for pro-social values like hospitality, compassion, and social responsibility.
In summary, algorithms can reduce our interactive experience of the world and our
meaningful social interactions to mere codes and instances of technical exchange. People
come to regard each other as ‘things’ rather than as humans they can meaningfully interact with.
This objectification can result in a loss of meaning in our cultural world.

The Colonial Power of Algorithms

The algorithmic lifeworld is not a natural formation, but rather a social system that relies on the
continuing appropriation, exploitation and manipulation of human life. Historical colonialism
depends heavily on a highly unequal power structure: the appropriation and exploitation of
resources “all for the enrichment of a few” (Couldry & Mejias 2019, 339). Similarly, algorithmic
colonization also relies on a solidly asymmetrical power structure, which supports an unceasing
appropriation and dispossession of “everyday life in ways previously impossible” (2018, 1).
As Thatcher et al. (2016) point out, the quantification of society rests on the continual
extraction of data from social life via an accumulation by dispossession. For example, when
application users click ‘yes’ to consent to some tacit data license agreement, they immediately
allow companies to ‘control those data’ (Thatcher et al. 2016, 994). Lanier reminds us that the
“reason people click ‘yes’ is not that they understand what they’re doing, but that it is the only
viable option other than boycotting a company in general, which is getting harder to do” (Lanier
2014, 314). The extracted data about human life is “not in the hands of those who generated it, but
of those who created the application” (Thatcher et al. 2016, 996). While this extracted data can be
used by companies to improve their services and sharpen their products, some companies use the
data exploitatively to reap surplus value (Zuboff 2019).

20. See <http://www.bjreview.com/Opinion/201912/t20191209_800187137.html>

This new capitalist logic is explained in detail by Zuboff in her Surveillance Capitalism
(2019). Surveillance capitalism refers to “the unilateral claiming of private human experience as
free raw material for translation into behavioral data” (Zuboff 2019, 8). This data is partly used to
maintain and upgrade products, while the rest is seen as “behavioral surplus” that is sold on a new
marketplace (ibid.). This data can prove useful to corporations and governments who want to
understand and shape the behaviors of individuals. For example, dating applications keep track of
every detail of a user’s romantic life mediated by their platforms. With the help of algorithms,
these romantic human experiences are analyzed and translated into behavioral data. Some of these
data feed their own algorithmic profiling and recommendations to improve the company’s services.
But other sorts of gathered data are used to make other prediction products (e.g., personalized
advertising), which are then sold to advertisers.
For Habermas, communicative discourse between equals is a prerequisite for choosing
freely, and thus crucial for democratic society. Our political engagement necessarily relies on an
equal communicative infrastructure so as to “comprehend our world, to interpret and respond to
claims made by others, and to formulate and articulate claims of our own” (Gilbert 2018, 92). But
in the algorithmic lifeworld, it is the algorithms that select and decide what we know, see and
experience. In this algorithmic processing, some companies are now in a position of power,
and can create public knowledge and influence our political understanding of the world. Our
political participation is thereby subject to a form of commercial logic, which is more concerned
with making profits than offering public services.
At first blush, one might assume that our political world is often influenced by media
conglomerates like Murdoch, Springer, and so on. But algorithms do exert influence over our
political sphere in a particular way. Website algorithms learn personal preferences and silently
recommend information based on users’ past search history. In this way, algorithms can potentially
construct a new but compartmentalized environment where people tend to only see through their
own personalized world of knowledge. Algorithms thus create what Eli Pariser (2010) calls a ‘filter
bubble’ where users are gradually trapped in their own ideological bubbles, unable to learn about
or experience different viewpoints. What’s worse, the inner workings of those algorithms are often
hidden and obscure to the public, which makes it difficult for any democratic scrutiny to occur.
That means that algorithmic power can silently make users less open-minded towards views that
are different from their own, while users themselves may not realize the hidden influence of
are different from their own, while users themselves may not realize the hidden influence of
algorithms. As a result, civic discourse as a space for open discussion can be negatively affected.
This asymmetry of power may erode democracy, which in turn leads to a legitimation
crisis. Habermas reminds us that a legitimation crisis can take place if a rigid power asymmetry
cannot be effectively challenged by democratic participation, a situation where people would not
accept the legitimacy of the existing political structures. A typical example can be found in the
rising social movements of the 1960s and 1970s, when a new generation of people had a greater
appreciation of social justice and civic rights, rather than narrow-mindedly pursuing their own
private economic interests (staying at home and enjoying consumer goods, etc.; Iser 2017; Gilbert
2018). The legitimation crisis happened because governments failed “to appease these growing
voices of discontent while remaining faithful to the maintenance of the market economy and
political administration” (Gilbert 2018, 93).
In the algorithmic society, it is very difficult for the exploitative and manipulative power
structure to be challenged by the public. Algorithms are increasingly used to determine access
to, and the distribution of, social benefits. But most of the time, we can hardly understand how
algorithms even work. Despite increasing calls for transparent algorithms, most companies are
very reluctant to comply, because they want to protect a competitive advantage and don’t want
people to learn how to ‘game’ their algorithms. The inner workings of their algorithms are also
legally protected as private intellectual property, so algorithms are seen as industrial secrets that
cannot easily be checked by the public. As a result, algorithms have become what Frank Pasquale (2015) calls a ‘black box’, largely shielded from democratic scrutiny.
As has been demonstrated, data and algorithms are not only used to accumulate capital by analyzing knowledge about individuals, but also to directly shape and manipulate individuals’ behavior and political attitudes in the service of corporate and governmental goals. Such behavioral modification and manipulation undermines individual autonomy and human agency, so that people “have little capacity for the moral judgement and critical thinking necessary for a democratic society” (Zuboff & Laidler 2019).21 In other words, people’s willingness to participate democratically – to challenge the colonial power of algorithms – may be curtailed.
21 See <https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-capitalism-is-undermining-democracy/>
Besides the colonized lifeworld, therefore, algorithmic colonization also means that
companies and governments can create algorithms to sustain their unrelenting colonial power to
exploit and manipulate crucial parts of an individual’s life. Such an unequal power structure is often hard to challenge meaningfully through democratic participation, and may thus lead to a
legitimation crisis in our current society.

The Colonized Subject

Algorithms not only shape the environment in which we connect with others and the power
structure, but also change how subjects see themselves. In an algorithmic lifeworld, the colonized subject is more complex than a merely passive, reduced one.
In the digital society, a common understanding of the subject is achieved by drawing a clear-cut line between the data individual and the actual person. The main point is that “it is no longer actual
persons and their bodies that matter or that need to be subjected or disciplined, but rather the
individual’s representations” (Galic et al. 2016, 20). Such a representation (or data individual) is
what Haggerty and Ericson call a “data double”, a de-corporealized ‘double’ that is the result of
surveillance assemblage (Haggerty & Ericson 2000, 611). This data double can be roughly
understood as one’s data profile (that is, the information and knowledge) collected by institutions.
By analyzing the data double, an institution can acquire some new knowledge about actual people,
and then determine whether they are to be assigned some sort of social benefits based on behavior.
For example, by calculating people’s credit profiles, banks know the risks associated with potential
credit consumers, and can decide the interest rate assigned to them in mortgage applications.
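To illustrate how a data double can stand in for the person in such a decision, consider a deliberately simplified sketch. The features, weights and rate tiers below are invented – real credit models are proprietary and far more complex – but the logic is the same: the decision is read off a data profile rather than negotiated with a person.

```python
# A hypothetical, deliberately simplified credit-scoring sketch:
# the mortgage decision is computed entirely from the data double.

def credit_score(profile: dict) -> int:
    """Score a 'data double': a dict of recorded behavioral features."""
    score = 500
    score += 2 * profile.get("on_time_payments", 0)    # reward punctuality
    score -= 40 * profile.get("missed_payments", 0)    # punish lapses
    score -= profile.get("credit_utilization_pct", 0)  # penalize high usage
    return max(300, min(850, score))

def mortgage_rate(score: int) -> float:
    """Map the abstract score to an illustrative interest-rate tier."""
    if score >= 740:
        return 3.5
    if score >= 620:
        return 4.8
    return 7.2  # or outright rejection in a real system

data_double = {"on_time_payments": 48, "missed_payments": 2,
               "credit_utilization_pct": 30}
print(mortgage_rate(credit_score(data_double)))  # the person is never consulted
```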
Drawing a line between actual individuals and their data doubles can help us understand
some important features of colonized subjects. Bauman expresses the concern that “the piecemeal
data double tends to be trusted more than the person, who prefers to tell their own tale” (Bauman
& Lyon 2013, 8). This point shows that in an algorithmic lifeworld, a person’s identity, as abstracted through data analysis, comes to count for more than the real human being. As in LaShawn’s story, which opened this chapter, what her dater cares about is only an abstract credit score rather than a real and complicated human being.
Another worrying facet of such abstraction of individuals is that a real individual can be
separated from the consequences of an action (Bauman & Lyon 2013, 8-9). The datafication of
society has pushed such technology-based abstraction even further. Our everyday lives are constantly monitored and decided upon by algorithms, where we are treated solely as numbers to be manipulated for profit or control. Admittedly, it is true that the analysis of the data
double and the knowledge produced from that data analysis can bypass human beings. But it is
actual individuals, not data doubles, who are the objects of decisions by institutions, regarding the
distribution of rewards or punishment. As Couldry and Mejias point out, it is “a real person who
gets offered a favorable price in the supermarket, an opportunity for social housing, or a legal
penalty, all based on algorithmic reasoning” (2019, 344). Focusing only on the data double and forgetting the experience of real human beings can contribute to the normalization of an objectifying logic that excessively punishes behavior considered ‘deviant’. For instance,
China’s Social Credit System, the case that I will study later on in this thesis, is designed in a harsh
manner, where the so-called ‘untrustworthy’ citizens with low scores can be immediately
sanctioned and face long-lasting and significant punishments.
However, all those depictions are only part of the story about the colonized subject. By drawing a sharp line between the individual and the data double, subjects are defined and reduced to passive ones, simply waiting to be evaluated and sorted by institutions. This one-sided focus on the data double has displaced the significant participatory experience of the real individual. David
Lyon (2018, 824) shows that subjects can actively and voluntarily live with surveillance in
everyday life. Albrechtslund (2008) coined the term ‘participatory surveillance’ to describe
precisely this, and to understand how citizens and users perform voluntarily within the roles of
watching and being watched. Boyd and Ellison (2007, 16) also find that, for social media users, sharing private information is a form of participation in these surveillance-based platforms.
All of these theories highlight a participatory subjectivity situated within surveillance,
where subjects are active and creative. Because of their creativity, subjects of surveillance can
adapt to, resist, and even make use of surveillance (Cohen 2012, 137). For colonized subjects, I
suggest that we should not dismiss the participatory aspect of the subject. With regard to the data
double, the focus is on the collection and analysis of individuals’ data, which is then used for social
sorting and evaluation (Haggerty & Ericson 2000, 605). It is, however, noteworthy that people
who realize they have been algorithmically classified may be self-disciplined enough to change
their behaviors – that is, to become identified with a preferred category, or to rid themselves of an
unfavored classification (Manokha 2018, 221). In the setting of credit scores, for instance, if people know that a credit score is a significant social determinant, and that prompt repayment may improve that score, they may discipline themselves and adapt their behavior accordingly. So, the discipline of individuals and the social sorting of the data double are not
clearly separated, but rather often complementary to each other, considering the participatory
nature of some subjects.
That such subjects are ‘participatory’ does not mean that they are free from disciplinary power; on the contrary, their behavior is often predefined by algorithmic systems. Take, for instance,
Tinder’s swipe algorithm, which is intentionally designed in a game-like manner to motivate the
subject’s participation. However, such algorithm-powered gamification does not mean that
subjects can be whatever they want to be. In such an algorithmically constructed dating game, it
is the algorithms that decide which profiles are more visible to participants. So, to win such dating
‘games’ (by finding a suitable match), users have to adapt themselves to a system of norms that
are already prescribed by the dating algorithms themselves. Participants have to learn how to make the most of the algorithms: when to start swiping, how to select the best profile photos, and even which pets to pose with in their photographs.
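How an algorithm can prescribe such norms is easy to see in outline. The ranking below is a hypothetical sketch, not Tinder’s actual (proprietary) system; it simply shows how a scoring function silently decides whose profiles are shown, so that ‘winning’ the game means optimizing oneself against the function’s terms.

```python
# A hypothetical sketch of algorithmically governed visibility in a
# dating app. The weights and features are invented; the point is that
# the scoring function, not the user, sets the norms of the 'game'.

def visibility_score(profile: dict) -> float:
    """Rank a profile for exposure in other users' swipe decks."""
    score = 0.0
    score += 10 * profile.get("photo_quality", 0)     # norm: polished photos
    score += 5 * profile.get("recent_activity", 0)    # norm: swipe often
    score += profile.get("right_swipes_received", 0)  # norm: be popular already
    return score

candidates = [
    {"name": "A", "photo_quality": 0.9, "recent_activity": 1, "right_swipes_received": 40},
    {"name": "B", "photo_quality": 0.4, "recent_activity": 0, "right_swipes_received": 5},
]

# Only the top-ranked profiles get shown; the rest become invisible.
deck = sorted(candidates, key=visibility_score, reverse=True)
print([p["name"] for p in deck])
```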
As a result, we are witnessing a new self-representation created in the algorithmic
lifeworld. This new identity may not be derived from one’s own self-determination. Instead, it is
the algorithms that drive users to participate more actively in a prescribed normalization process.

Outline of the Thesis

The overall aim of my thesis is to develop a critical theory approach, in order to understand how
algorithms are unceasingly expanding into our lifeworld, and what ethical challenges now present
themselves due to that expansion. The central argument is that the continual expansion of algorithms into our social realm can destroy the potential of communicative action for open-minded and respectful interactions, as well as for the exploration of social justice, equality, and other social values. This expansion instead introduces a mindset of objectification into social relations and the self, in which love and trust – which are tied to free and participatory interactions – are
reduced to a purely cognitive process or a passive calculation of data. In conclusion, algorithmic
imperatives follow economic rationalities, which may be useful in the economic sphere, but can
be inadequate and destructive in communicative relations in the lifeworld.
The thesis is mainly divided into two parts, consisting of a total of four chapters. Part I
(comprising the first two chapters) proceeds at a theoretical level. It establishes a theoretical
framework for algorithmic colonization, based on reformulating Habermas’s colonization thesis,
the discussion of which is supported and enriched by the critical theory of technology and
surveillance studies. Part II (comprising the third and fourth chapters) is more concerned with real-
world examples and features two case studies that are used to elucidate my viewpoints. This second
part will focus on two algorithmic systems: one is a sort of ‘smart’ matchmaking system, and the
other is China’s Social Credit System. In so doing, I will explore how algorithms are shaping our
most intimate experiences of love and changing the crucial idea of trust in democratic societies.
Chapter 1 develops a critical theory framework for understanding algorithmic colonization.
It builds on a reformulation of Habermas’s colonization thesis, incorporating a critical theory of
technology and Foucault’s disciplinary surveillance. It starts with an introduction of Habermas’s
colonization thesis, depicting how the system develops its inherent logics, and how such a somewhat autonomous system can expand beyond its intended boundaries and intrude into the world of human interactions. It then examines some typical critiques of Habermas’s modernization theory, especially his romanticization of the lifeworld, which assumes an idealized historical past. As a result, I reformulate his colonization thesis, in an effort to
understand the lifeworld not in a historical, but in a normative manner. The lifeworld is thus seen
as a potential for open interactions and exploration. I also develop the colonization thesis as a
particular critique of technological rationality in our modern society. In this light, the colonization
of the lifeworld is problematic because technological rationality and some specific surveillance
technologies undermine such a potential for open-minded and respectful interactions.
Chapter 2 attempts to apply the reformulated colonization framework to the algorithmic
society, to build what I call the theory of algorithmic colonization. As will be shown, the
colonization thesis needs to incorporate Foucault’s ideas about disciplinary surveillance. Chapter
2 will, however, show that Foucault’s disciplinary theory itself has been challenged in the digital
society. I will examine three typical arguments against Foucault’s disciplinary surveillance: the
argument of fluid institutions, the argument of the data double, and the argument of risk-based
control. These arguments, however, are not wholly convincing. I therefore offer three responses to
show that Foucault’s disciplinary theory can be adapted to a digital world, even if we need to
change it to a different form. I use a more flexible notion of ‘algorithmic discipline’, rather than
Foucault’s idea of the ‘panopticon’, to describe in detail how disciplinary power functions in an
algorithmic society. In the end, this will show that such algorithmic discipline is less problematic in itself so long as its power is restricted to the economic field. The chapter then hints that, beyond that field, the disciplinary power of algorithms can be used to hinder and distort human social interactions, resulting in the significant problem of ‘algorithmic colonization’.
Chapter 3 will investigate how algorithmic discipline shapes the most intimate of human
relations – love – in the algorithmic society. I will argue that such algorithmic colonization of love
can restrain people’s capacity to explore the specific meaning of intimacy through open-minded
and respectful interaction with others in the cultural, social and personal domains. As will be
shown, AI algorithms not only match potential daters, but also automatically decide who is the
best fit. Such algorithms often reduce our complex romantic experiences and attractions to fixed
and static attributes, potentially relegating the mutually interactive quality of love to an
automated ‘unilateral execution’. These automated love relations, which do not need any mutual
interactions, dialogue or negotiation, have the potential to promote a culture of objectification. I
also carefully analyze the business model behind those dating algorithms, which can potentially
exploit people’s intimate data and manipulate human interactions. The possible exploitation and
manipulation of this data can distort the interactive infrastructure utilized in establishing intimate
relations with others. With regard to personal identity, this chapter describes how dating algorithms
tend to match people based on similarities, which may trap self-expression and interactions with
others in a filter bubble.
Chapter 4 turns to a broader social context, and focuses on human trust relations, which
are also an essential element in the lifeworld. I describe the new phenomenon of ‘automated trust’,
which is not derived from mutual human interactions, but from automatic tracking, prediction, and
penalties ensured by algorithmic control. I draw on the framework of algorithmic colonization to
see this algorithmic trust as a ‘colonization’, where algorithms undermine the interactive
infrastructure of trust relations and restrain the ability to mutually interact with others while
developing trust relations. By making a case study of China’s Social Credit System (hereafter SCS),
I examine how its credit scoring algorithm tends to detach trust from real individuals, which can
contribute to a culture of objectification and excessive punishment. I also argue that what the SCS
establishes is not ‘trust’ or ‘sincerity’ as it claims, but rather no more than a kind of social
compliance ensured by algorithmic control. However, Chinese authorities deliberately re-moralize
such algorithmic trust through nationwide trust propaganda campaigns, which can conceal the power operations of algorithmic control. I show that such ideological effects can negatively shape the algorithmic imaginary and the way people identify themselves within the SCS. I
explore alternative narratives of trust to challenge the official, dogmatic narrative of trust as an
absolute moral good, in order to make trust relations more interactive and open-minded.
In the Conclusion, I discuss the fundamental and self-reflective question: How is it possible
to critically study algorithmic colonization in an era when almost every part of our lives has been
influenced by algorithms? I do not argue for a compartmentalist approach, according to which the lifeworld should be walled off from the influence of algorithmic power. This is impractical, since our everyday lives are often directly or indirectly tracked by companies and states in the Big Data society. The compartmentalist idea also overlooks the fact that algorithms
can be very beneficial in a number of areas. Therefore, what I argue in this thesis is a general
principle: algorithmic systems should be designed and deployed in an open and flexible way that
allows people, as much as possible, to freely and respectfully interact with others in love, trust or
other crucial relations. Otherwise, the algorithmic society will become less interactive, as well as
more objectified, which will be harmful to democratic societies and the flourishing of human
beings.

Part I: Theoretical Background: Colonization Thesis and Surveillance Studies

Chapter 1: Colonization of the Lifeworld: A Critical Theory Approach22

Nowadays, automated algorithms have become pervasive and consequential in determining and
shaping our social lives, from loan approvals and job applications, to legal decisions and college
admissions; from political expressions in the public sphere, to expressions of intimate relations
and subjective emotions (Citron & Pasquale 2014; Broussard 2020; Calo & Citron 2021).
Algorithms can appear to have imperative force, and these imperatives, following economic
rationality, might make our lives more efficient and useful in an economic sense. However, these
imperatives can be inadequate and destructive when they are applied to some social realms that
depend on meaningful social interactions – this is the challenge I want to tackle. To explore this
matter more deeply, in the following, I will draw on Habermas’s ‘colonization of the lifeworld’
thesis to build a critical theory framework.
I will not share Habermas’s genealogy of modern societies and the idealization and
romanticization of a bygone lifeworld. His historical narrative of the lifeworld has been criticized
as painting an inaccurate picture of the family and relationships more generally, and
underestimating patriarchal power dynamics, and so on (Fraser 1985; Loick 2014). So, my own
study tries to reformulate a new colonization thesis that understands the lifeworld in a more
minimalist social-theoretical as well as normative way. Colonization as I see it is problematic not
because of a loss of a historical past, but a loss of potential for communicative actions and open-
minded, respectful and equal interactions. I further suggest that the discussion of the colonization
thesis should be supported by a critical theory of technology and surveillance studies in our
algorithmic society.
This opening chapter is structured in four sections. It starts with an introduction to
Habermas’s colonization thesis, presenting and explaining the difference between the system and
the lifeworld, and how the development of the system has the potential to colonize the lifeworld
in an uncontrolled manner. The second section critically examines the limits of Habermas’s

22 This chapter is adapted from my articles: Technological Colonization: Rediscovering Habermas’s Critical Theory of Technology (in preparation); Wang, H. (2022). Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency. Philosophy & Technology, 35(69): 1-25. <https://doi.org/10.1007/s13347-022-00564-w>; Wang, H. (2022). Algorithmic Colonization of Love: The Ethical Challenges of Dating App Algorithms in the Age of AI. Techné: Research in Philosophy and Technology [accepted].
historical and romanticizing narrative of the lifeworld and proposes to reclaim a new colonization
thesis as an immanent critique. The third section investigates a common misinterpretation
regarding Habermas’s idea of technology as non-social, and argues why it is important to
rediscover his critical theory of technology. Finally, I will show that Habermas’s abstract critique
of technology should be enriched by critical surveillance studies, in order to better understand
algorithmic rationality in the era of Big Data.

1.1 Habermas’s Colonization Thesis

In The Theory of Communicative Action (II) (1987), Habermas famously introduces the
colonization thesis, so as to critically understand the pathological effects of modern society in the
mid-twentieth century.
For Habermas, the lifeworld is crucial to assure the symbolic reproduction of society,
maintaining and transmitting social norms to sustain social identities (Fraser 1985, 99). The
concept of the lifeworld is twofold. First, it provides a symbolic resource for communication.
Following Edmund Husserl, Habermas sees the lifeworld as a given resource of all background
knowledge in lived experience. This background knowledge is pivotal because it is what we share
with other people, and what we understand about each other in our lives. This shared knowledge
not only concerns some deep questions of value, such as our definitions of happiness and justice,
but also some less abstract norms such as “professional standards and codes of behavior”
(Fukuyama 1996, 26). Without this background knowledge, our everyday communication and
actions would not be possible.
The second part of the twofold notion of the lifeworld, which echoes Alfred Schutz (1973),
is that Habermas understands it as based on a linguistically communicative process. He highlights
that the lifeworld reproduces only through what he calls ‘communicative action’. This does not
only mean an action that needs linguistic interactions, but the particular action that is based on
mutual understanding among all participants. Communicative actions are motivated by
considerations of reaching mutual understanding, and people’s beliefs are developed
autonomously, rather than coerced by forms of violence, deception or manipulation.
In contrast, Habermas introduces the concept of the system to describe the sphere of
society’s material reproduction. For Habermas, the system is important in our society, and its
reproduction can also support the lifeworld materially. The lifeworld is not an abstract realm: in a
relation of love, for example, people date and interact with others not through an abstract, but
rather a materialized intimacy, in which they drink red wine produced on a farm or send
postcards created in a factory. Just as Habermas points out, “[the] maintenance of the material
substratum of the lifeworld is a necessary condition for maintaining its symbolic structures” (1987,
151).
Unlike the lifeworld, however, the system reproduces itself only through ‘strategic actions’
– the particular interactions that are merely oriented toward success and utility, rather than mutual
understanding. Strategic actions are directed to reach some instrumental goals, even allowing
people to use outside forces like deception, coercion, punishment, manipulation, and so on. To be
sure, this distinction is not made to show that communicative actions demand disinterested actors,
since basically everyone acts in order to pursue some objective. But for communicative actions,
people achieve their goals only based on “an understanding of their situation, which is shared with
regard to the truth of the relevant facts, the normative rightness of social relations that come into
play, and the truthfulness of those involved” (Strecker 2009, 366). Habermas summarizes the
difference between communicative and strategic actions in his Moral Consciousness and
Communicative Action (1990):

I call interactions communicative when the participants coordinate their plans of action
consensually, with the agreement reached at any point being evaluated in terms of the
intersubjective recognition of validity claims. Whereas in strategic action one actor seeks
to influence the behavior of another by means of the threat of sanctions or the prospect of
gratification in order to cause the interaction to continue as the first actor desires [...] (1990,
58)

What is noteworthy is Habermas’s emphasis that when the system develops to a higher degree of
complexity, its systemic integration will become more formal and abstract, and be increasingly
steered by some ‘delinguistified media of communication’ (Habermas 1987, 154). For Habermas,
there are two forms of media: money and power. Mediated by money, people are promised that
things of similar value can be exchanged equally on the market, and thus they can simply buy
products without any need for deep connection or meaningful communication. When people are
mediated by power, they just follow the rules already defined by laws or regulations without any
further need to reach agreement or consensus (Feenberg 1996, 59). Both media form the systemic
mechanism that drives the system to develop more complex subsystems: the economic system
(monetization) and the state (bureaucratization; Jütten 2011, 703). The media of money and power
thereby “steer a social intercourse that has been largely disconnected from norms and values”
(Habermas 1987, 154). In this sense, the system obtains a sort of ‘autonomy’, or in Habermas’s
terms, ‘systemic integration of society becomes uncoupled’ from the lifeworld (Habermas 1987,
115). This uncoupling takes place as the system only follows its inherent logics of systemic
integration, with no need to be “evaluated and questioned through [the] lifeworld” (Heyman &
Pierson 2015, 3).
In Habermas’s eyes, such a nearly ‘autonomous’ system is not itself problematic, provided
it is only restricted to the realm of material reproduction. In this sense, Mattias Iser suggests that
from Habermas’s viewpoint of colonization, “[in] the sphere of material reproduction, ways of
thinking and acting oriented toward utility are not just appropriate but also necessary” (Iser 2009,
495). What Habermas really worries about is when such systems expand beyond their intended
boundaries to intrude into the lifeworld in an uncontrolled manner. When uncoupled from the
lifeworld, the system can develop an ‘internal dynamic’, and can intrude by projecting its systemic
rules into the lifeworld (Habermas 1987, 327). Intersubjective communications are originally
oriented to shared meanings and mutual understanding, but now they are gradually shaped and
distorted by the systemic imperatives of utility and efficiency. In the sphere of symbolic
reproduction, people do not need to “discuss and come to agreement on everything” anymore
(Gilbert 2018, 89). Instead, it is those non-discursive systems (markets and bureaucracies) that
speak and decide for us. As a result, the colonization of the lifeworld occurs:

In the end, systemic mechanisms suppress forms of social integration even in those areas
where a consensus-dependent coordination of action cannot be replaced, that is, where the
symbolic reproduction of the lifeworld is at stake. In these areas, the mediatization of the
lifeworld assumes the form of a colonization. (Habermas 1987, 196)
For Habermas, colonization only occurs when the communicative infrastructure of the lifeworld
is undermined. A typical case is the juridification of the welfare state, in which citizens are reduced
to passive clients for services offered by the state (Loick 2014, 762). In this example, not only do
people have to adapt their lives to a particular form organized and dictated by the state, but they
are “unable to develop a proper understanding of colonization phenomena on their own” (Strecker
2009, 377). This is related to Habermas’s idea of ‘fragmented consciousness’, which is used to
explain the problem that “everyday consciousness is robbed of its power to synthesize” and people
cannot view the lifeworld holistically when money and power impede the communicative
infrastructure (Habermas 1987, 355). Due to this fragmented consciousness, people cannot discern
and challenge the potential problems of colonization, which results in colonization spiraling out of
control.
This colonization can negatively influence the symbolic reproduction of the lifeworld and
cause pathological effects. According to Habermas, the lifeworld structurally comprises three
elements: culture, society, and personality. As such, the lifeworld reproduces itself through
communicative actions in three different processes: (a) the reproduction of culture or the
transmission of meaning; (b) the integration of society or “legitimately ordered interpersonal
relations”; and (c) socialization, or the development of individuals with a strong sense of self (that is, personal identity; 1987, 141-144). Habermas goes on to explain:

I use the term culture for the stock of knowledge from which participants in communication
supply themselves with interpretations as they come to an understanding about something
in the world. I use the term society for the legitimate orders through which participants
regulate their memberships in social groups and thereby secure solidarity. By personality I
understand the competences that make a subject capable of speaking and acting, that put
him in a position to take part in processes of reaching understanding and thereby to assert
his own identity. (Habermas 1987, 138)

For Habermas, the social pathologies caused by colonization are complicated. In the domain of
culture, these social pathologies include the ‘loss of meaning’, ‘unsettling of collective identity’,
and ‘rupture of tradition’. In society, there are ‘withdrawal of legitimation’, ‘anomie’, and
‘withdrawal of motivation’. In the domain of the person, there are the ‘crisis in orientation and education’, ‘alienation’, and ‘psychopathologies’ (1987, 143). For convenience, we can generalize
those social pathologies into three main types. For cultural reproduction, colonization results in
the loss of meaning. For social integration, it leads to the legitimation crisis. When it comes to
socialization, colonization damages the ‘formation of identity’ (1987, 144).

1.2 The Colonization Thesis as an Immanent Critique

It should be realized that some key assumptions in Habermas’s modernization theory and its
associated colonization thesis are very much contested.23 One such contested idea is that all
societies follow a trajectory from ‘primitive’ to ‘modern’, and that the lifeworld is free from power
and dominance (see Fraser 1985; McCarthy 1991; Cohen 1995; Loick 2014). Such an analysis of
colonization might lead to romanticizing a historical past. As such, the ideas presented here do not
depend on Habermas’s historical and romanticizing assumptions about the lifeworld. Nor do they
adopt a stance on the quality of the lifeworld prior to colonization. Instead, I follow only a
normative understanding of the lifeworld, which is also grounded in a social-theoretical analysis
of how contemporary societies work and reproduce themselves. In this light, colonization is described
not as a loss of an idealized lifeworld, but as a loss of potential for communicative actions.
Habermas’s colonization thesis assumes that there is a ‘rigid separation of the domains of’
the lifeworld and the system, and symbolic reproduction and material reproduction (Cohen 1995,
63). Furthermore, the colonization thesis presumes that such a distinction is a historical process.
Habermas sees the evolution of society as a rationalization of the lifeworld as well as a growing
complexity of the system, where all societies develop or evolve from primitive to modern.
In his historical narrative, pre-modern societies are ruled by social norms shared in tribal
communities, where people’s behaviors are restrained by cultural and ideological norms. These
societies are generally “governed by binding consensual norms”, which are grounded in the
“unquestionable underpinning of legitimation constituted by mythical, religious or metaphysical
interpretations of reality as a whole” (Habermas 1965, 92, 95). In modern society, communicative
rationality has gradually developed, and social norms are not acquired as undoubtable, certain facts.

23 My thanks to Beate Roessler, Daniel Loick, and Robin Celikates, who reminded me of some big problems inherent in Habermas’s notion of modernity and the lifeworld.
Instead, they are subject to critical reflection, in which existing ethical norms would be reviewed
through (for instance) the lens of justice. Nevertheless, with the rationalization of the lifeworld,
the systemic coordination also grows rapidly in modern society. When the capitalist mode of
production emerges, the system begins to not only legitimate itself through high efficiency and
utility, but also extend its boundary to colonize the lifeworld.
Such a historical description of modernization is problematic, not only because the
underlying assumption that all societies develop from pre-modern to modern can be associated
with a kind of ‘Western-centrism’ (Rosenberg 2006, 77; Yang 2022, 97), but also because such a narrative can romanticize a historical past. Habermas’s genealogical analysis of the
colonization thesis assumes that there was a lifeworld free from power and always oriented towards consensus, which the systemic imperatives then colonized. However, many have objected that, historically, there has never been a lifeworld untouched by power (Fraser 1989,
106; Loick 2014, 759). Typical of such criticism is that provided by Nancy Fraser (1989, 99), who
examines Habermas’s colonization thesis from a feminist perspective.
As Fraser argues, it is incorrect for Habermas to see the family only as a private realm that is
constitutive of the lifeworld for symbolic reproduction. In Habermas’s framework, paid labor is
seen as material reproduction in a system mediated by money, while ‘women’s unpaid childrearing
work’ is regarded as symbolic reproduction oriented towards caring and socialization (Fraser 1985,
100). Fraser suggests that such a binary distinction is not only ‘conceptually inadequate’ but also
‘potentially ideological’ (1985, 100).
Fraser points out that childrearing is not only about symbolic reproduction, but also about
material reproduction. Raising children requires the cultivation of their social identities through
interactions, language teaching and learning, but it also requires them to be a part of “the biological
survival of the societies” by interacting with physical nature (Fraser 1985, 101). Fraser goes on to
debunk the distinction of symbolic reproduction and material reproduction as an ideology that
conceals patriarchal domination. As shown, the unpaid work of raising a child is not naturally
distinguished from paid social labor, but socially constructed by male dominance in societies. By
drawing a line between women’s unpaid childrearing work and paid labor in the economic market,
the unpaid status of childrearing is justified, which is “a mainstay of modern forms of women’s
subordination” (1985, 102).
Furthermore, Fraser shows that Habermas’s binary distinction can also wrongly justify confining women to the family, which is seen as a personal, private sphere (ibid.). For Habermas,
the family is understood as a place for symbolic interaction and is therefore free of power structures.
But Fraser rejects this notion by supplying some empirical evidence that shows that the family has
never been power-free in capitalist societies. Instead, it is filled with “individual, self-interested,
strategic calculations” which “have a strategic, economic dimension” (Fraser 1985, 105). Families
are not only influenced by economic logics, but are themselves de facto economic systems, which involve the appropriation and exploitation of women’s unpaid labor, often through “coercion and violence” (1985, 107, 109). Because of this, she thinks it is “a grave mistake to
restrict the use of the term ‘power’ to bureaucratic contexts” (1985, 109). In contrast to Habermas’s
romanticizing narrative, Fraser contends that the family is a site that is dominated by patriarchal
power which serves to “enforce women’s subordination in the domestic sphere” (ibid.). In this
sense, as Daniel Loick suggests, the juridification (a form of colonization) of the lifeworld is
actually “the implementation of legal protection for their members”, so the colonization can be
seen as an (ambivalent) form of liberation (2014, 757).
These criticisms of Habermas’s historical understanding of the colonization thesis are
warranted. In fact, as Habermas himself admits in his later work, the distinction between the two
spheres (the lifeworld and the system) should be better seen as an analytic one that points out
different features of social coordination: “I initially treat social and system integration as two
aspects of societal integration which must be considered analytically distinct” (Habermas 1991,
219, emphasis in original). As a result of these considerations, my reformulation of the
colonization thesis is not dependent on Habermas’s theory of modernity and the historical narrative
of the lifeworld. Instead, I see the lifeworld only as a normative background for open-minded and
respectful interactions that are crucial for our social life, which is increasingly threatened by
systemic imperatives in reality.
The concept of the lifeworld and the associated colonization thesis are thus used as the
standard for an immanent critique, to critically examine existing problems in societies. Immanent
critique is a kind of social critique, where standards or principles derived from the society in
question are used to show how reality contradicts its own principles (Stahl 2013, 534; Jaeggi 2018,
190-191). In the words of Axel Honneth, “only those principles or ideals which have already taken
some form in the present social order can serve as a valid basis for social critique” (Honneth 2001,
6). In utilizing immanent critique, my thesis does not argue that an idealized lifeworld exists, or
has ever existed, at some point in time (as, for instance, primitive societies) or in some realms
(such as family, school, and love relations). Instead, I suggest only that it should exist. The
colonization of the lifeworld is problematic not because of a loss of the lifeworld, but because of
the loss of a normative potential for communicative actions, and an open exploration of
possibilities (like the critique of power and the pursuit of social justice). This immanent critique
of the lifeworld considers the ideal principles of communicative actions that are constituted in the
lifeworld, and then compares those proposed ideals to reality. If there is a gap between ideals and
reality, that means “the reality contradicts its own ideals”, and therefore, “reality needs to be
changed in order to overcome this incongruity” (Fuchs 2020, 207).
Although we do not have to share Habermas’s historical narrative, we can still benefit from
his understanding of the lifeworld as a dialogical, dynamic and transformative process. Unlike
traditional Marxists, Habermas does not believe that the lifeworld has already been totally
dominated by instrumental rules. As he often emphasizes, the communicative rationality of the
lifeworld can be the way to resist the total instrumentalization of society:

Horkheimer and Adorno failed to recognize the communicative rationality of the lifeworld
that had to develop out of the rationalization of worldviews before there could be any
development of formally organized domains of action at all. It is only this communicative
rationality, reflected in the self-understanding of modernity, that gives an inner logic – and
not merely the impotent rage of nature in revolt – to resistance against the colonization of
the lifeworld by the inner dynamics of autonomous systems. (Habermas 1987, 333,
emphasis added)

Unlike economic or administering systems, for Habermas, the lifeworld is active and participatory,
and crucial for nurturing people’s communicative rationality, since it provides the necessary
sources to develop their reflective capacity. With this capacity, people can discern the expansion
of systemic logic and even “reverse the process of colonization” (Ortega-Esquembre 2020, 8).
Thus, the lifeworld is not an independent sphere that only provides sources or background
knowledge for communicative actions. Rather, communicative actions and open interactions will
not only reproduce but also renew the conditions and sources of the lifeworld itself. The lifeworld
is therefore normatively significant, not because it prescribes some independent and idealized
principles, but because it provides a potential for communicative actions which are basically about
open interactions and explorations.24 In this manner, the lifeworld is not seen as a space for some
set of independent and static ideals, but rather as a sphere for open-minded and respectful
interactions.
If we understand the concept in this way, the colonization of the lifeworld should be seen as concerning not the loss of an idealized lifeworld, but rather the systemic imperatives that undermine the communicative infrastructure for people’s open interactions and explorations. In
the real world, there is always a tension between the ‘colonial’ dynamic of the system and the
constant resistance of the lifeworld. Such expansion of systems to the lifeworld is not completed
once and for all, as the process of expansion can be resisted and restrained by the communicative
rationality of the lifeworld. What the colonization thesis really concerns is the point at which such systemic expansion continually undermines the communicative infrastructure, and the critical abilities of humans, to such an extent that the expansion of systemic imperatives spirals out of control.
The colonization of the lifeworld really happens when it restrains a potential for communicative
action and impedes the possibilities of open exploration. In this situation, the process of interaction
will be curtailed, and people will not be able to question those beliefs prescribed by the market
and bureaucratic power. This process leads to an uncontrolled or, in Habermas’s words,
‘irresistible’, expansion of systemic logic into the lifeworld. In his own words:

It is only with this that the conditions for a colonization of the lifeworld are met. When
stripped of their ideological veils, the imperatives of autonomous subsystems make their
way into the lifeworld from the outside – like colonial masters coming into a tribal society
– and force a process of assimilation upon it (1987, 355).

In this study, I reformulate Habermas’s colonization thesis as an immanent critique rather than a
historical and genealogical narrative. With this approach, I understand the lifeworld only as a

24 Admittedly, Habermas builds his notion of communicative action on reasons and rationality oriented towards consensus. This overly rational mode of communicative action may restrain a more pluralistic exploration of the world (Mouffe 2013). In my thesis, I do not adopt Habermas’s consensus-based account, but rather understand communicative action more generally as a kind of non-manipulated interaction and open exploration.
normative framework, while the colonization of the lifeworld is problematic because the
uncontrolled expansion of systemic logic into the lifeworld undermines the potential for open
interactions and explorations by human beings.
As clarified in the Introduction, such open explorations are not arbitrary; rather, they are
the sorts of communicative actions oriented towards mutual understanding, social justice, equality,
and other social values. People become involved in communicative actions not for purely linguistic
reasons or to share gossip, but more importantly to engage in a process of dialogue, debate, and
negotiation in order to reach a shared understanding of what is good, what justice is, and the
meaning of equality for the society they live in.

1.3 The Colonization Thesis as a Critique of Technology

Some scholars have criticized the colonization thesis on the grounds that Habermas’s binary
distinction of the system and the lifeworld may lead to an omission of a critique of technology.
One of the most influential commentaries on this omission is given by Andrew Feenberg, who
suggests that Habermas only “sees technology as neutral”, which then “obscures the social
dimension on which a critical technique can be proposed” (Feenberg 1996, 49).
Feenberg complains that “the term (technology) is not even included in the index of the Theory of Communicative Action” (1996, 49). For Feenberg, Habermas’s omission of a critique of
technology results from his view that technology is confined to the system, which is, as defined
by Habermas, rigidly distinguished from communicative rationality. For Habermas,
communicative actions are about social interactions and social practices. In contrast, according to
Feenberg, Habermas sees technology as ‘non-social and neutral’ (1996, 53). In this light,
technology, in the eyes of Habermas, is only about “the technical control of nature” (Delanty &
Harris 2021, 6). For example, keeping an airplane aloft is dependent on scientific and natural laws
in mathematics and aerodynamics; it does not fly due to social interactions or some set of religious
beliefs (Regis 2020).25 This technical control deals with the objective world or nature, rather than
assuring humans’ meaningful understanding.

25 Retrieved from <https://www.scientificamerican.com/article/no-one-can-explain-why-planes-stay-in-the-air/>
In this interpretation, Habermas’s notion of technology is very outdated, if not just wrong.
Nowadays, few people would accept the argument that technology is neutral. According to that
argument, a gun can be used to rob someone, or it can be used to protect one’s property, but the
gun itself is neither good nor bad. If a gun is used to kill an innocent person, it is the murderer and
not the gun that should be morally or politically blamed. A technological artifact is thereby only
seen as a neutral instrument. However, such an idea of technological neutralism has been heavily
criticized (Winner 1980; Mitcham 1994; Feenberg 2012; Verbeek 2011). One need only think of
Facebook, Twitter, Google, Netflix, and other recent digital technologies that have deeply
influenced our social and political lives. The rise of social media also shows that technology can play a communicative role. Hence, technical artifacts are often embedded with various values,
constructed by a network of social, economic and political power (Winner 1980; Bijker 1997). Just
as David Stump points out, “Habermas’s distinctions between system and lifeworld; technical and
communicative rationality, etc. all seem to break down if technology and technological systems
are socially constructed” (2000, 11).
Nevertheless, I argue that Feenberg’s critique, as well as other similar arguments, may be
founded on a misunderstanding of Habermas’s colonization thesis and his critique of technology.
Admittedly, Habermas’s colonization thesis is often interpreted as being based on a rigid
dichotomy between the lifeworld and the system, which seems to suggest that there is a clear-cut
boundary between the two. On the one side of this boundary, a normative background is assumed,
where the existence of a non-instrumental regime is seen as crucial, namely the lifeworld which is
composed of social interactions oriented towards mutual understanding. On the other side, the
dichotomy presumes an instrumental sphere which is dominated by instrumental rationality
oriented towards success and utility. But even such a binary oppositional relationship between the
system and the lifeworld does not mean that there is a clear-cut boundary in real life.
As I argued in the previous section, the distinction between the two spheres should be seen
as an analytic distinction. Habermas often stresses that even de-linguistified media like money and power still necessarily involve social interactions in order to be institutionalized into economic and bureaucratic subsystems. As mentioned, Habermas points out that the
“maintenance of the material substratum of the lifeworld is a necessary condition for maintaining
its symbolic structures” (Habermas 1987, 151). The system and the lifeworld are not separate from
each other in the real world, and the distinction should not oversimplify the integrating dynamics
of social institutions (McCarthy 1991, 152). In reality, the two realms are often in tension with
each other, yet at the same time they can support each other in social and systemic coordination.26
It was mentioned earlier that even if Habermas’s historical narrative of the lifeworld is
problematic, his understanding of the lifeworld as a dynamic and transformative process is
insightful. He emphasizes that communicative rationality in the lifeworld can play a transformative
role in resisting colonization, and even serve to renew the lifeworld. This resistance is not as simple
as merely kicking the system out of the lifeworld. Instead, it involves concrete democratic actions
that will intervene in the system and reshape it. In this sense, we can see the potential of democratic
power in the lifeworld, the fundamental idea of which is not basically different from Feenberg’s idea of ‘democratizing technology’ (Feenberg 2001, 177).
It is my contention that, if carefully reformulated, Habermas’s colonization thesis can
establish a critique of technology that is more than simply drawing a clear-cut line between the
system and the lifeworld. This critique of technology can be built from ideas in his early article
titled ‘Technology and Science as Ideology’ (Habermas 1970). It is true that Habermas rarely
discusses ‘technology’ in his major works, but the 1970 paper is an important exception.27 In that
article, Habermas considers Herbert Marcuse’s critique of technology, and more precisely, his
critique of technological rationality, and agrees with much of what Marcuse says. Marcuse and
Habermas do not hold significantly differing views on technology – at the very least, not as distinct
as many scholars seem to think (e.g., Delanty & Harris 2021). To explain this matter further, it
will prove helpful to examine some of Marcuse’s observations regarding technological rationality.
Marcuse points out the widespread technological rationalization process in modern society,
where technological systems legitimate themselves by efficiency and convenience, without critical
human participation. Certainly, humans are needed in technological rationalization, but their

26 For Habermas, the system is not necessarily de-linguistified and non-social, although the system is steered by de-linguistified media. Habermas conceives society “simultaneously as a system and as a lifeworld” (1987, 118). So, the detachment of the system from the lifeworld never means that the system becomes purely objective, non-social, or de-linguistified. Instead, as Habermas emphasizes, “systemic mechanisms need to be anchored in the lifeworld: they have to be institutionalized” (1987, 154).
27 Another exception is his published book The Future of Human Nature (2003), in which Habermas examines some fundamental ethical issues caused by a specific technology – biotechnology and genetic manipulation. However, this book says little about information technology and therefore does not seem to be very helpful in answering the questions posed in this thesis.
participation is only one-dimensional: they are simply ready to comply with whatever the
technology requires. In One-dimensional Man (1964), Marcuse argues that there is a loss of critical
thinking in advanced industrial society. People still have their ‘inner freedom’ to think in a two-dimensional manner, in which individuals can recognize the domination of society and demand social change (1964, 12). But this critical thinking collapses under technological
rationality. Technological production offers a much ‘better’ way of life, where people can enjoy
the various goods and services available to them (Marcuse 1964, 14). But the products are always
imbued with ‘prescribed attitudes and habits’, which attempt to meet and satisfy consumer desire
(1964, 14). When people use these technological devices, they are immediately and automatically
following the norms predefined by those devices. 28 In so doing, technological rationality can
legitimate itself by its “highest expediency, convenience and efficiency” (Marcuse 1941, 145).
People are gradually getting used to that one-dimensional technological rationality, while the
original critical rationality has become lost.
Habermas agrees with Marcuse that technological rationality can reduce our lives to technological exchange, and that it also has an ideological function that undermines people’s critical capacity. In ‘Technology and Science as Ideology’ (1970), Habermas determines that technocracy
has become a new ideology in post-industrial society (1970, 82). Not only is technology a form of social
control, but it is also an ideology in which the technocracy is so efficient and useful that it can
legitimate itself without further human reflection (1970, 105-106). Specifically, the technicization
of social interactions will replace communicative relations, or reduce the latter to technical
exchanges. The mechanical process oriented towards efficiency, convenience and standardization
will take the place of qualitative experience and meaningful interactions (Delanty & Harris 2021,
What’s more, the domination of technological rationality may produce ideological effects,
undermining the critical capacity of individuals. Technology is not only ‘a means of social control’,
but ‘an ideology in itself’, in the sense that the extensive expansion of technological rationality
makes people unable to think of other possibilities of society (2021, 4).
However, Habermas holds a different view from Marcuse on why such technological
rationality leads to those insidious effects. For Marcuse, the problem is not technological

28 For Marcuse, ‘technology’ has a broad sense. It means not only technical objects, but also the social relations that are produced by technologies. As he mentions in his essay “Some Social Implications of Modern Technology” (1998, 138), technology is defined as “a mode of production, as the totality of instruments, devices and contrivances”.
rationality itself, but rather the ‘class society’ within capitalism where the technological rationality
is located (Feenberg 1996, 47). In other words, technological rationality as ‘historically contingent’
becomes problematic only because it is produced in a class society functioning within the confines
of capitalist logic (1996, 46). Marcuse can be characterized as both romantic and radical in holding
that technological rationality can be transformed and revolutionized into a form of ‘New Science’
and ‘New Technology’ through the elimination of class society (Habermas 1970, 87). In this new
society, technology co-exists with nature in a harmonious relationship, where nature is treated as
‘an opposing partner’ for humans to live with, rather than as raw materials that are seized and
exploited for profits (1970, 88).
In contrast, Habermas disagrees with Marcuse’s view that technology can only be
transformed into New Technology if it is not part of a class-based society. For Habermas,
technology is not historically contingent, but “a ‘project’ of the human species as a whole, and not
[…] one that could be historically surpassed” (1970, 87, emphasis in original). In the eyes of
Habermas, the problem is not technological rationality itself, nor is it class society; instead, the
real cause is the sustained expansion of technological rationality into the lifeworld (1970, 90). The
reason for Habermas’s divergent argument on this matter is that technological rationality is always
on the way, or has the internal tendency, to intrude on communicative rationality. So, the solution
is not just getting rid of a class-based society, but reviving communicative rationality’s ability to
restrain and resist the excessive expansion of technological rationality into the lifeworld (1970,
90).
We can thus see that Habermas’s critical framework does constitute a critique of
technology, which is a critical consideration for technological rationality. By comparing Habermas
to Marcuse, we also find that Habermas’s approach to explaining technological rationality through
colonization is more subtle. Marcuse regards technological rationality as a complete and static
thing: once class society is changed, technological rationality can be shaped to be good, once and
for all. But for Habermas, there is always a tension between technological rationality and
communicative rationality, in which the two are interwoven in a dynamic process. Habermas
distinguishes the system and the lifeworld to describe such a dynamic process of technological
rationality, explaining how the technological system can develop in nearly autonomous ways,
following its own inherent logics; he then shows how such a system can erode the lifeworld.

There is, however, a limitation to Habermas’s colonization thesis as a critique of
technology. The colonization thesis describes how such technological rationality intrudes on, and
thus shapes communications, which leads to a series of pathologies. Yet it seems that Habermas
does not say much about how specific technologies shape human communicative processes. The
critique of technology in Habermas’s colonization thesis is mainly a critique of an abstract
‘technological rationality’ or the ‘technicization of the lifeworld’, which reflects a general
tendency of technocracy as a whole in modern societies. It thus describes a larger conception or
logic of how technologies work, rather than how a particular technological artifact works. After
the empirical turn in the philosophy of technology, however, such general descriptions and
critiques of technology have been marginalized (Kroes & Meijers 2001; Franssen et al. 2016).
To make a more empirical critique of technology, I will reformulate Habermas’s original
colonization thesis by adding technology as the third steering medium, alongside money and power.
This idea of seeing technology as a third medium is largely inspired by Feenberg’s critical theory
of technology. Habermas only considers two steering media – money and power – in his
colonization thesis. As shown above, money and power are steering media in the sense that they
allow for the standardization of action coordination and its decoupling from communicatively
mediated understanding. Mediated by money and power, people do not need open discussion or
social interactions that are oriented towards common understanding, since the mechanism of
market exchange or administrative power can ensure that people can achieve ‘a mutually
satisfactory result’ (Feenberg 1996, 56). As Feenberg argues, technology operates in the same way
“in which instrumental action-coordination replaces communicative understanding” (Feenberg
1996, 67). As he shows,

In any transportation system, technology can be found organizing large numbers of people
without discussion; they need only follow the rules and the map. Again, workers in a well-
designed factory find their jobs almost automatically meshing because of the structure of
the equipment and buildings – their action is coordinated – without much linguistic
interaction (1996, 57).

As Feenberg rightly points out, it is insufficient to only use money and power to describe action
coordination in our society. In real practices of management, for instance, it is often necessary to

combine ‘monetary incentives’ and ‘administrative rules’ with technical instruments to pursue
goals successfully (Feenberg 1996, 57). As Feenberg puts it, unlike the media of money and power,
technology is oriented towards making the system more efficient, rather than profitable (money)
or controllable (power). Moreover, “complying with instructions in operating a machine is
different both from obeying political commands and from accepting an exchange of equivalents
on the market”. It builds in not the logic of “money (buy, not buy)” or “power (obey, disobey)”,
but of “pragmatic rightness or wrongness of action” (ibid.). If a pilot breaks the technical norms required to operate a plane, for example, he or she is not only punished by law but will also
face “the natural consequences of error”, like a plane crash (1996, 59).
Nevertheless, as I argue, Feenberg’s critique of technology fails to develop its full critical
potential. By treating technology as a third medium, Feenberg develops what he calls ‘a two-level
critique of instrumentality’ (1996, 61). As he admits, the first level follows Habermas’s
colonization thesis, describing ‘the macro-sociological consequences of system expansion in
modern societies’ (1996, 63).29 However, the second-level critique of technology that Habermas’s
colonization thesis fails to capture is how norms and values can be designed into specific
technologies. If the first level is to admit a tendency of technological rationality expanding into
modern societies, then the second, in Feenberg’s eyes, is more optimistic because technology can
be socially constructed and designed in a way that incorporates “moral and aesthetic values”
through public debate and participation (Feenberg 1995, 179). This is why Feenberg believes that
technology is not essentially repressive but can have the potential to be liberating provided it is
democratically controlled and transformed (Feenberg 1995, 232; 2001; 2002). He believes that
some “elegant designs” of technology can be realized through forms of democratic participation
that can transform the repressive power of technology (1995, 233).
I basically agree with Feenberg that Habermas’s original version of the colonization thesis
should be reformulated to enable a two-level critique of technology. I also agree that the problem
lies not in Habermas’s first level of critique, but instead in his insufficient critique of specific
technologies. Despite this, I think Feenberg seems to be a little too optimistic about the democratic

29 At this level, Feenberg rightly criticizes Latour and other social constructivists, who simply study specific technologies in microsociology but fail to recognize the “macro-sociological consequences” of how technological rationality intrudes on our social world (Feenberg 1996, 63). Feenberg explains this point in more detail in his Alternative Modernity (1995), Questioning Technology (1999) and Transforming Technology (2002).

power in transforming technology – so optimistic that he understates the danger of how democracy
itself can be undermined by technology in the first place. Feenberg’s analysis, developed in the
early 1990s, is not able to fully account for how modern technologies, especially the technologies
of Big Data and artificial intelligence, have been shaping our democratic lives in remarkably deep
ways (Zuboff 2019). If we admit the validity of the first-level critique of technology, namely how
the lifeworld is distorted by technological rationality, we have to take seriously the view that
democracy itself can possibly be impeded by technologies, since as shown above the lifeworld
provides the resource and realm for the development of humans’ capacity for democratic
participation. In other words, if we intend to fully develop the critical potential of the colonization
thesis, we should also seriously examine how technology can shape the lifeworld and its
participatory capacity.
Hence, my reformulated colonization thesis admits a two-level critique of technology, but
unlike Feenberg, I think that the second-level critique of technology should be closely related to
the first one, rather than separate or as a parallel critique. In this light, I consider technology as the
steering medium, and observe how specific technologies – a realization of technological
rationality – shape and possibly distort the sources and conditions of the lifeworld.30 More
specifically, seeing technology as a critical medium allows an examination of how specific
technologies steer and shape the communicative infrastructure of people’s open interactions and
exploration. To do this, I suggest incorporating surveillance studies into the colonization thesis.
The next section will introduce this notion further.

1.4 Incorporating Discipline Theory into the Colonization Thesis

Critical surveillance studies are helpful in supporting and enriching the discussion of how
particular technologies steer and shape our lifeworld. This is because the surveillance literature
can provide a range of nuanced analysis regarding how our world has transformed into a
surveillance environment, and how surveillance technologies are greatly shaping our culture,
society and senses of self. It should be noted that surveillance studies are complex, an issue that I

30 This ‘reflection’ of technological rationality means that technological artifacts bear such a potential or tendency to colonize humans’ open interactions and explorations.

will address in more detail in the next chapter. Here, I only examine Foucault’s disciplinary
surveillance, one of the best-known theories about surveillance, to show how it is possible to
incorporate surveillance studies into the colonization thesis.
Before continuing, a clarification needs to be made. It would seem at first glance that I am
trying to combine Habermas’s framework with Foucault’s disciplinary surveillance. These two
thinkers employ such drastically differing theoretical approaches that it is very hard, if not
impossible, to integrate their ideas into a common framework.31 For example, each holds a very
different view on the lifeworld. For Habermas, the school and family are typical instances of a
communication oriented toward understanding in the lifeworld, but they are exactly what Foucault
criticizes. Foucault sees the school as a typical disciplinary system, used to discipline students in
order to train docile bodies. Moreover, Foucault would show little interest in Habermas’s endorsement of cultural and social reproduction, and would even criticize such processes if their structural norms are used to sustain an imbalanced discourse of power. Communicative interactions in
Habermas’s eyes are basically oriented towards consensus and mutual understanding; but Foucault
would say that social interactions are unavoidably ‘strategic’, and filled with influences of power.
As a result, my approach is not aimed at finding the internal connections between
Habermas and Foucault, nor do I seek to integrate the two viewpoints into a unified theory. As
argued before, I do not subscribe to Habermas’s historical understanding of the lifeworld – namely,
that symbolic reproduction in the lifeworld (for example, family and school) is a power-free
domain. My approach aims to incorporate some important insights from Foucault to support my
reformulated colonization thesis. Foucault’s discipline theory captures a new level of power
through disciplinary techniques, which is different from Habermas’s notion of power based on
sovereignty. This disciplinary power is not based on sanctions and violence (as in the case of law),
but is derived from a panoptic effect supported by various surveillance technologies. I argue that
Foucault’s description of disciplinary power can help enrich the colonization thesis, and serve to
aid a more nuanced view of how technological power mediates people’s behavior and self-
development in a technological society.
As has been shown, Habermas postulates two steering media, money and power, in his
colonization thesis. The monetization process has been increasingly strengthened as capitalist

31 Misleadingly, Foucault is often seen as a historical relativist oriented towards some sort of anarchism, while Habermas emphasizes some universal and necessary norms (see Ingram 2005; Isenberg 1991).

societies have progressed. The principal problem is the function of bureaucratization or
administrative power. For Habermas, administrative power essentially means “the capacity of the
state to implement and enforce the law within society” (Houston 2010, 1736). In this respect,
administrative power is internally associated with law.
However, the functional importance of law has been challenged by Foucault’s theory of
discipline. In Discipline and Punish (1977), Foucault investigates the shift of power from
sovereign-based law to discipline.32 From the early 19th century, there was a profound change in
punishment, from the violent and chaotic spectacle of public torture to a more “humanitarian”
prison system. For Foucault, what is significant in such change is not its changing manners of
punishment, but rather its fundamental shift of power. That means a new mode of control comes
into being, not derived from sovereign power but instead from what Foucault calls a ‘micro-
physics’ of power – or more precisely, a disciplinary power. Unlike repressive sovereign power,
disciplinary power operates in a more hidden, opaque, and less coercive way. This disciplinary
power depends on a technique of discipline that cultivates automatic submission, or in Foucault’s
terminology, ‘docile bodies’. A highly refined form of discipline targets every detail of a person’s
body for constant training (Foucault 1977).
In Foucault’s eyes, discipline is very different from law (Foucault 1977). Law derived from
sovereign will is about commanding compliance, while discipline is associated with normalization:

Disciplines are the bearers of a discourse, but this cannot be the discourse of right. The
discourse of discipline has nothing in common with that of law, rule, or sovereign will. The
disciplines may well be the carriers of a discourse that speaks of a rule, but this rule is not
the juridical rule deriving from sovereignty, but a natural rule, a norm. The code they come
to define is not that of law but that of normalization. (Foucault 1994, 44)

Normalization is at the heart of all disciplinary systems, as disciplinary power is often exercised
over people to make them conform to certain norms (1977, 177-183). The norm, in general, is seen
by Foucault as a new form of ‘law’, which defines and represses behavior that the judicial laws do
not cover. Institutions such as workshops, schools, and armies, as Foucault shows, have various

32 We may not agree with his genealogical analysis of modernization, but his historical narrative is helpful in conceptualizing how the power shifts in general.

norms concerning the “time (lateness, absences, interruptions of tasks)”, “activity (inattention,
negligence, lack of zeal)”, or “behavior (impoliteness, disobedience)” (1977, 178). Here is an
example of some norms typical in an orphanage:

“On entering, the companions will greet one another.”
“On leaving, they must lock up the materials and tools that they have been using and also make sure that their lamps are extinguished.”
“It is expressly forbidden to amuse companions by gestures or in any other way.”
They must “comport themselves honestly and decently.”
(Foucault 1977, 178)

For Foucault, these norms are not arbitrary, but are established according to some believed ‘truth’.
This truth is often narrated by authoritative voices of scientific, medical, religious, or coercive
institutions such as the police, schools, and prisons (Waitt 2005, 174). The authoritative knowledge
is given by various ‘experts’ who define ‘truth’ about ‘normality’ and ‘deviance’, which are used
to legitimate a particular set of norms. These norms become objectified when individuals regard
them as natural and necessary, accepting them without critical analysis (Taylor 2009, 52).
From this, we can conclude that discipline is distinct from law in two main respects. First,
the mechanism of discipline does not operate based on sovereign power, but on scientific
knowledge. For Foucault’s disciplinary power, scientific knowledge (especially of psychiatry and
psychology) is particularly important since it sets up the norms – the standards of behavior – which
define what persons should do in everyday practices. Normalization, the core of discipline, places individuals’ behaviors within a field of comparison and differentiation. People’s activities are
divided into small and precise segments (such as attitudes, capacity, and performance) that are
ready to be measured according to a hierarchical ranking of values. In this structure, all detailed
behaviors of people can be evaluated as normal or abnormal, good or bad, and then the perceived
deviance can be corrected.
Second, discipline can produce an automatic functioning of power which is gentle, mostly
invisible, and less coercive. In that normalizing process, discipline makes use of an observational
technique to ensure that the behavior of subjects can be continually tracked. In the case of schools,
for example, student performance is constantly observed and recorded. What are the consequences?

Foucault argues that such consistent tracking and recording will lead to a panoptic effect. The well-
known panopticon is a type of prison which is circular in shape, with an observation tower in the
center. By designing the prison in this way, the prisoners are exposed to a possible constant
observation from the tower. However, the observer is invisible to the watched prisoner. This
unequal balance of visibility will make the prisoners unsure whether they are being observed at
any given moment. Consequently, it will impose on them a feeling of permanent exposure to view,
which encourages them to be continually self-disciplined, even if (in reality) the
watcher is absent. In other words, with the panoptic mechanism, discipline can ensure that the
prisoner is subject to self-governance from within. By recording and tracking, people will be
located in a continual process of sorting, judging and normalizing.
Foucault holds that discipline is not only different from law, but also tends to transform the
role of law according to its own mechanisms of normalization, making the law based on the
sovereign less privileged, or even unnecessary. As Hunt and Wickham (1998, 52, 60) suggest,
Foucault tends to relegate law to a subordinate role, as simply supporting disciplinary mechanisms.
As Foucault shows:

I do not mean to say that law fades into the background or that institutions of justice tend
to disappear, but rather that the law operates more and more as a norm, and the judicial
institution is increasingly incorporated into a continuum of apparatuses (medical,
administrative, and so on) whose functions are for the most part regulatory. (Foucault 1978,
144)

That means that, in Foucault’s eyes, disciplinary power is not only a new rising mode of control
that is different from the law, but it tends to incorporate the law. In this respect the law, based on
the sovereign, is simply reduced to norms, considering how scientific knowledge has now intruded
into the domain of law. The juridical field has to rely on expert knowledge to assess and diagnose
within a normative discourse. Law is not privileged anymore but only becomes, as Foucault shows,
a fusion of “legality and nature, prescription and constitution, the norm” (1991, 304).
If Foucault is right – if the disciplinary power subordinates the law and subsumes it as a
part – then administrative power would lose much of its importance as a steering medium in the colonization framework. Nevertheless, I think that even if Foucault rightly distinguishes discipline from the

law, his relegation of law to discipline goes too far. Let us first consider how Habermas
responds to Foucault’s disciplinary power:

[His] presentation is utterly distorted by the fact that he also filters out the history of penal
practices itself as aspects of legal regulation. In prisons, indeed, just as in clinics, schools,
and military installations, there do exist those ‘special power relationships’ that have by no
means remained undisturbed by an energetically advancing enactment of legal rights
(Habermas 1994, 102).

For Habermas, Foucault ignores the fact that entire disciplinary societies, such as prisons, schools,
and hospitals, are regulated by modern law. Those disciplinary relations are not special since they
have been defined by law. A criminal is put in prison to be disciplined only because he has broken
the law in the first place, and the disciplinary practices are simply part of the penal system. As
Carole Smith suggests, Foucault only identifies law with the early modern monarchical
sovereignty “where law is defined in terms of monarchical commands, absolutism and punishment”
(Smith 2000, 289). As a result, law is seen as repressive, brutal and coercive. However, as Hunt
and Wickham (1998, 60) argue, Foucault’s treatment of law causes him to neglect two important
points regarding our everyday experience of modern law: not only can disciplinary power be well
re-defined within the framework of law, but law and legal rights can protect individuals from being
coercively affected by that power. These critiques show that discipline does not relegate and
subordinate law, since law still has its important function in actively reshaping disciplinary power.
In spite of these criticisms, Foucault’s disciplinary theory does conceptualize a different
level of power, and a Habermasian explanation of colonization will be essentially incomplete if
the role of disciplinary power is ignored. Habermas, while he may agree with Foucault’s diagnosis of the strong shaping power of scientific experts in our society, would see that
problem only as a social pathology due to the system’s colonization of the lifeworld. That problem
is merely a part of social malfunction rather than a particular power of discipline. Such an explanation is typically functionalist, and in fact insufficient. As Honneth notes,
“the question concerning the point at which objectifying attitudes unfold their reifying effects
cannot be answered by speaking of functional requirements in an apparently non-normative way”
(Honneth 2008, 55). The dynamic of power that contributes to the integration of the system works

not only at the legal level, but also at the individual level of subjectivity – that is, the domain in which power shapes personal behaviors and ingrains them within its mechanisms.
It is thus insufficient to restrict the study of power mechanisms only to legally-based administrative
power. Besides the general and abstract legal subject, we need to investigate how the single,
concrete individual engages with the technological system.
As a result of these considerations, my approach is not going to relegate one to the other,
but instead regard law and discipline as complementary to each other. I see administrative power
as legislative, a power mainly based on the democratic sovereign that primarily applies the law to
command compliance. Disciplinary power, by contrast, depends on scientific knowledge to
identify, classify and normalize individuals.33 Neither power subordinates the other; rather, together they constitute the domain where power is exercised. According to this approach, I will regard the steering role of technology as a kind of disciplinary power. Along with money and administrative power, disciplinary power drives the system to become increasingly disentangled from the lifeworld and to colonize the latter. This is illustrated by the
table below, which shows the theoretical framework of the colonization thesis.

Steering Media          Non-communicative principles
Money                   Profit maximization (Monetization)
Administrative power    Compliance in command (Bureaucratization)
Technology              Normalization

                    (Extension / Colonization Process →)

Lifeworld: Love relations, Trust relations, …

33 A critique of algorithms from a disciplinary perspective can also be seen in the work of, for example, John Cheney-Lippold (2017), Tobias Matzner (2017, 2019), Ruha Benjamin (2019) and Simone Browne (2015).

1.5 Conclusion

This chapter established a critical theory approach based on Habermas’s colonization thesis. It
described how money and power (administrative and disciplinary power) drive the system to
uncouple from, and colonize, the lifeworld. As shown here, Habermas is of the opinion that the
system can break from the lifeworld, because it can legitimate itself without the need for
communicative actions. As for disciplinary power, its driving force is not based on market
exchange or law, but rather on the ideology of science and technology. A disciplinary system will
define what truth is, and what norms are, according to so-called ‘scientific knowledge’, which
constitutes a continual normalizing process in order to create automatic compliance. This
disciplinary process can undermine the communicative infrastructure of the lifeworld, impeding
people’s open-minded and respectful interactions and exploration of other possibilities (including
social justice, equality, and other social values). In this sense, technology that exercises disciplinary power colonizes the lifeworld.
In the digital society, however, the Foucauldian narrative of disciplinary power has been
largely challenged in surveillance studies. Many have accepted a post-panopticon approach that
goes beyond Foucault’s theory of discipline. Our task of incorporating surveillance studies into the colonization thesis is therefore not yet complete. With the development of
technologies, especially the advancement of Big Data technology, the theoretical framework of
colonization needs to be updated to accommodate new pivotal facts, situations and structures. The
updating of the colonization thesis will be addressed in the second chapter.

Chapter 2: Disciplinary Power in the Algorithmic Society34

In the previous chapter, I established a theoretical framework of colonization that incorporated Foucault’s disciplinary surveillance. In that sense, the technological colonization of the lifeworld
is understood in such a way that the disciplinary power of the technological system impedes an
individual’s open interactions and exploration. But that incorporation is still incomplete, since such
Foucauldian disciplinary surveillance itself has been heavily challenged in the digital society.
Many scholars have argued that surveillance has experienced a post-panopticon turn. In his
criticism of Foucault, Gilles Deleuze points out that in the digital age the disciplinary society has
been superseded by the society of control, which aims at calculating risk rather than disciplining
individuals (Deleuze 1995, 174; see also Haggerty & Ericson 2000; Gane 2012; Langley 2014;
DuFault & Schouten 2020).
In this chapter, I will argue that the statistical calculation of risk is also a disciplinary
process, one where a kind of algorithm-based norm replaces the stable institutional rules. Therefore,
the disciplinary mode of surveillance has not declined as many scholars have thought, even though
it has changed its form into what I call algorithmic discipline in the algorithmic society. To
demonstrate this point, I will not discuss the surveillance literature in general, but instead focus on
a particular algorithm, namely, the credit scoring algorithm. As mentioned in the Introduction,
according to Citron and Pasquale (2014, 4), the credit scoring system is a typical example of what
they call the ‘Scored Society’, which signifies a ‘scoring trend’ in the Big Data era. Thus, the credit
algorithm is a paradigm which can help us analyze the algorithmic society in general, and through
the lens of the credit scoring algorithm, we can glimpse some crucial details regarding how
disciplinary power works in the algorithmic society.35

34 This chapter is based on two articles of mine: 1) Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency, published in Philosophy and Technology (2022); 2) Algorithmic Discipline: How Disciplinary Power Works in the Age of Big Data (to be submitted).
35 As Josh Lauer’s historical study of credit (2017, 5) shows, credit systems may be among the first, as well as the most powerful, examples of widespread data surveillance being used to influence human life in modern society. In the last decade, we have also seen credit bureaus updating their algorithms to be more intelligent and to include more social data, which is exactly how algorithmic colonization happens.

This chapter is divided into four sections. It begins with a clarification of the notion of
credit, showing that credit is not just an economic fact, but a disciplinary judgment of one’s moral
or social character. This disciplinary judgment signifies the inherent nature of surveillance. I then
explain how the credit system, in the form of ‘credit panopticon’, has long been seen as a typical
case of disciplinary surveillance. After that, I closely examine three common arguments that
challenge such a disciplinary understanding of the credit system in the digital society. In closing,
I will provide three responses, showing why the Foucauldian analysis is still crucial for
understanding credit surveillance in the digital society, even if disciplinary power now operates in
a new form in the Big Data age – namely, algorithmic discipline.

2.1 The Disciplinary Judgment

This chapter will use the credit scoring algorithm as a typical example to demonstrate how
discipline and algorithms work together to steer the behavior of individuals in the algorithmic
society. Before moving to credit algorithms, it will be helpful to take a look at what credit is, and
how credit is related to surveillance and automated algorithms. That is the main task of this section.
Today, ‘credit’ is usually considered to be a financial fact. When we speak of credit, we
tend to think of money borrowed from a bank (e.g., a house loan). More generally, we might recall
a particular method of ‘buying now and paying later’ (e.g., when you buy a cellphone on credit).
Sometimes we also associate the term with “the amount of risk when lending money to a particular
person or organization” (e.g. good or bad credit).36 Nonetheless, our common use of the term in
the economic arena may hide two deeper meanings of ‘credit’. First, credit is often linked to one’s
moral character, such as honesty and integrity. Second, credit itself constitutes the unequal power
relation between creditor and debtor. I will show that these two understandings of ‘credit’ signify
that it is inherently a disciplinary judgment of one’s moral, social, and economic status.
Credit is intrinsically imbued with a moral sense of trust, or a religious sense of belief. The
financial notion of credit nowadays suggests that in human history, credit and credit relations are
simply reduced to merely economic behaviors motivated by the desire for profits and objects. But

36 “Credit” in the Cambridge Dictionary: https://dictionary.cambridge.org/dictionary/english/credit

this is not true. As Muldrew (1998, 125) points out, this purely economic interpretation of credit
did not exist until the late 17th century, when capitalism started its rapid ascendancy in Europe.
Before that period, credit was often referred to in the sense of constituting “a person’s reputation,
the estimate in which his or her character is held, their repute or trustworthiness” (1998, 3).
In modern capitalism, credit tends to become an unequal judgment of one’s morality (i.e.,
being honest or not). The financial understanding of credit assumes an apolitical order of commercial exchange. In a lending process, for example, the creditor lends money to the borrower to make a purchase, following the logic of exchange — the principle of equality is presupposed between creditor and borrower. In this assumed equal interaction, people build
mutual trust. This exchange narrative of credit dismisses the power imbalance of the credit/debt
relation in capitalist society.37
For Karl Marx, credit is not simply an abstract equal exchange, but is instead embedded in
exploitative capitalist relations. In his early essay ‘Excerpts from James Mill’s elements of political
economy’, Marx argues that the core of credit is still capitalistic money, and “credit is the economic
judgment of the morality of a man.” (1975, 264) Contrary to the common belief that credit is based
on mutual trust, Marx holds that the monetization process of credit will lead to unilateral judgment
where people with bad credit will not only be seen as poor, but also as dishonest and immoral.
Marx goes on:

The substance, the body clothing the spirit of money is not money, paper, but instead it is
my personal existence, my flesh and blood, my social worth and status (Marx 1975, 264;
emphasis in the original).

Therefore, Marx claims that credit is not a neutral process in capitalism. The exchange between a
creditor and a debtor is not an equal exchange and is not based on people’s free and autonomous
activities (1975, 265). The trust involved is not real trust, but a ‘distrustful calculation’ which is
reliant on sanctions, spying, and so on. In this light, credit will necessarily require borrowers

37 Undoubtedly, the credit market can empower people to claim economic resources in modern capitalism (Krippner 2017, 1). For example, a mortgage can help poor families afford new houses. However, what I focus on is the idea that credit judgment itself is associated with a power relation, rather than credit being used as economic resources to reshape power.

to be judged and punished by the lenders, which makes the borrowers bear a constant feeling of
shame and guilt should they fail to repay.
People may argue that Marx’s critical view of credit goes too far, considering that the
extension of credit is basically a free-market transaction, which is based more on a formal equality
between borrowers and lenders than on an unequal relation. Based on such a formal equality (a contract), a borrower can simply be released from the obligation and its associated discipline if he or she pays off the debt. However, the relationship of equality “is suspended during the period of time in which the debt remains unpaid” (Krippner 2017, 9). At first glance, this effect is similar to what is commonly found in a labor contract, where employees have to be disciplined if they fail to comply with the rules when they are working.
According to Graeber (2011, 120), a labor contract and a debt contract are nonetheless very
different, because the law protects workers from working permanently and workers can resign
(retreating from the contract) whenever they want. With credit, by contrast, debtors cannot pull back once they enter into the credit/debt contract, and they have to be continually disciplined until they pay off the debts. This is what Lazzarato calls ‘permanent debt’, which makes the ‘indebted man’ suffer a permanent control over his life (2011, 20). In short, unlike the wage relation that is more or less based on formal equality, the contract of credit is inherently hierarchical and asymmetrical.
What’s worse is that the disciplinary power of credit does not simply prescribe and impose
the obligations of the debtor, but also produces a specific morality that reinforces the disciplinary
effects. As Nietzsche reminds us, the German word Schuld (guilt) is closely related to Schulden
(debt; see Lazzarato 2011, 30). Graeber also notes that for many languages in Europe, their use of
the term ‘debt’ is often associated with negative words like ‘sin’, ‘guilt’, or ‘fault’ (2011, 121).38
This moralization of debt makes it shameful and guilt-laden for those who cannot successfully pay
off their obligations. A recent example is that of the Greeks during the 2010 debt crisis, who were considered shameful, and guilty of ‘laz[ing] about in the sun’ (Lazzarato 2011, 31). By
moralizing debt, the extension of credit not only lets debtors be subordinated to creditors, but also
ascribes the debtors a sense of moral wrongness when they attempt to resist the unequal power
structure in the credit/debt relationship.

38 For Graeber, this moral failure related to credit/debt can be traced back to the age of slavery, when being a slave was a way of repaying debt (Graeber 2011, 159, 199).

Overall, when speaking of credit, we should not understand it as a mere economic fact, but
rather as a kind of disciplinary judgment. Judging one’s creditworthiness is not only about
calculating financial ability, but also about normalizing one’s character, embodied by moral
qualities such as honesty and trust. The judgment of someone’s creditworthiness is necessarily
reliant on information collection. For a long time, credit information was scattered, and creditors
often relied on rumors and hearsay from credit applicants’ friends or neighbors to make decisions.
In 1748, for example, Benjamin Franklin wrote a famous letter ‘Advice to a Young Tradesman’,
in which he vividly showed how a creditor may actively surveil his debtors, observing and listening
for proof of honesty or vice:

The sound of your hammer at five in the morning, or eight at night, heard by the creditor,
makes him easy six months longer; but if he sees you at the billiard-table, or hears your
voice at a tavern, when you should be at work, he sends for his money the next day
(Franklin 1748).39

It was only in the 1840s that recognizably modern credit reporting firms emerged (Lauer 2017, 6).
They gathered and standardized the hearsay regarding applicants, and maintained the relevant
information in massive ledgers. This data collection and analysis has established a new power
structure that is not imposed by states or legal power, but by technological power. That was the start
of a disciplinary surveillance in modern society. For the remainder of this chapter, I will critically
understand credit surveillance in its traditional form of disciplinary surveillance, and then transform the concept into a new form of disciplinary surveillance suited to the algorithmic society.

2.2 Panopticism: A Traditional Mode of Disciplinary Surveillance

In Discipline and Punish (1977), Foucault proposes a well-known mechanism of surveillance based on Bentham’s idea of the ‘panopticon’. The typical image of a panopticon is described as
follows.

39 Retrieved from <https://founders.archives.gov/documents/Franklin/01-03-02-0130>

At the periphery, an annular building; at the centre, a tower; this tower is pierced with wide
windows that open onto the inner side of the ring; the peripheric building is divided into
cells, each of which extends the whole width of the building… All that is needed, then, is
to place a supervisor in a central tower and to shut up in each cell a madman, a patient, a
condemned man, a worker or a schoolboy… They are like so many cages, so many small
theaters, in which each actor is alone, perfectly individualized and constantly visible
(Foucault 1977, 200).

Foucault notes that the panopticon “must not be understood as a dream building: it is the diagram
of a mechanism of power reduced to its ideal form” (Foucault 1977, 205). In his eyes, the
panopticon is not just an (imagined) architecture, but a general principle of panopticism, a
technique of discipline (Elmer 2012, 231; Caluya 2010, 621).
There are two related meanings of panopticism. First, it refers to a panoptic gaze. The
watchtower of the panopticon is always hidden from the prisoners’ sight, which builds an
asymmetrical power structure in terms of visibility: the watcher can observe every movement of
prisoners, but the prisoners cannot see the watcher clearly. This particular design of prison can
produce a special effect, in that prisoners cannot be sure whether they are being watched at any
given time. Second, and most importantly, panopticism means a technique for self-discipline. The
prime function of the panopticon, through unequal visibility, is “to induce in the inmate a state of
conscious and permanent visibility that assures the automatic functioning of power” (Foucault
1977, 201). Such a panopticon can thus cultivate a self-governance or automatic compliance, with
no need for direct monitoring (Lyon 2006, 4).
In the context of credit, the disciplinary surveillance of credit systems can be described as
a ‘credit panopticon’ (Burton 2008, 114). Not only does it exert a panoptic gaze, but it is also a
technique for establishing self-discipline. By mining an ocean of consumer information, credit
surveillance functions to discipline people, or in Foucault’s term, ‘train their soul’ (Foucault 1977,
25). A fundamental principle of the credit panopticon is to single out and exclude known deadbeats,
and the threat of this can serve to discipline credit seekers into keeping up with repayments (2008,
115).

Credit bureaus have collected so much information that they can act as an all-seeing agency, able to identify and sort good or bad consumers. For a long time, creditors were not capable of exercising such a unidirectional and omniscient panoptic gaze, since credit surveillance was not
systematic. As mentioned in Franklin’s letter, creditors in his time often picked up rumors and
hearsay from credit applicants’ friends or neighbors. The credit files were always scattered, and
were not standardized or centralized. Only around the 1840s did some early versions of modern
credit surveillance systems emerge (Lauer 2017, 6). The collected data at that time often contained
“the intimate details of one’s domestic arrangements, personality, health, legal and criminal history,
and job performance, and sometimes one’s physical appearance” (2017, 10). As Lauer shows, the
credit bureaus had mined such rich and reliable data that “local police and government often turned
to the credit bureau for help” (ibid.). Even today, the credit reporting industry is still one of the
biggest hubs for collecting and analyzing personal data.
This process of extensive data collection can itself exert disciplinary effects on consumers,
since the mere thought that they may be monitored by credit bureaus would likely change their
behavior. Besides such direct disciplinary impacts, the considerable collection of data is a source
used to exercise stronger disciplinary power through two mechanisms: social sorting and
punishments.
With massive data collection, each individual can be targeted and thus given an individual credit evaluation. Even though both positive and negative information is gathered, credit
bureaus are more focused on collecting the negative facts (such as financial distress, gambling, or
drinking), so as to judge people’s creditworthiness. All these negative items will be counted as
risks of differing weights, according to normative financial expectations. For instance, if borrowers
are alcoholics, or if they repay bills slowly, they are more likely to be seen as a greater risk. In this sense,
credit surveillance acts as a normalizing force that divides ‘good’ from ‘bad’ credit consumers
(Langley 2014, 448). This sorting process is similar to what Foucault describes as the dividing
practice of panopticon: “the ‘bad’ are separated from the ‘good’, the criminal from the law-abiding
citizen, the mentally ill from the normal” (Burton 2008, 114).
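To make this sorting logic concrete, the following minimal sketch (in Python) shows how a weighted tally of negative items might divide ‘good’ from ‘bad’ consumers. The items, weights, and threshold are purely hypothetical illustrations of the mechanism, not the model of any actual credit bureau.

# A purely illustrative sketch of disciplinary sorting: negative items
# are weighted into a risk score, and consumers are divided into 'good'
# and 'bad'. All items, weights, and the threshold are hypothetical.
NEGATIVE_ITEM_WEIGHTS = {
    "late_payment": 15,        # slow repayment counts as risk
    "financial_distress": 30,  # e.g., a recent default
    "gambling": 10,            # 'moral' conduct folded into the judgment
    "drinking": 10,
}

def risk_score(record: dict) -> int:
    # Sum the weights of all negative items found in a consumer's file.
    return sum(NEGATIVE_ITEM_WEIGHTS[item]
               for item in record.get("negative_items", [])
               if item in NEGATIVE_ITEM_WEIGHTS)

def sort_consumer(record: dict, threshold: int = 25) -> str:
    # Divide 'good' from 'bad' credit consumers by a normative cut-off.
    return "bad" if risk_score(record) >= threshold else "good"

print(sort_consumer({"negative_items": ["late_payment"]}))              # good
print(sort_consumer({"negative_items": ["late_payment", "drinking"]}))  # bad

The point of the sketch is not the arithmetic but the structure: financial conduct and ‘moral’ conduct are fused into a single normative ranking, against which every consumer can be measured and, if found deviant, corrected.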
Besides social sorting, punishments are also necessary for disciplinary surveillance.
Consumers are aware of the effects of a poor credit rating, an awareness that encourages them to
behave in ways that credit bureaus will approve of. Those who behave ‘badly’ face exclusion.
Blacklisting is a common manifestation of such an exclusion strategy. A blacklist is only a catalog

of names used to record those who fail to make repayments. Credit bureaus use this list to figure
out which individuals embody the worst credit risks. Blacklisting is a negative system, the goal of
which is not only to identify, but also to punish those delinquent persons and slow-payers.
Sometimes, such lists are even published by the media, giving the public knowledge of outstanding
debt. As a result, individuals who struggle to pay feel embarrassed and ashamed, which may push them to repay their loans more promptly. With this mechanism of blacklisting, consumer
behavior can be more effectively disciplined, just like “the overseer’s book of penalties, which
disciplined the worker” (Lauer 2017, 11).
As a result, the capacity for sorting and its ensuing punishments can arouse a feeling of
fear in credit consumers, which will force them to obey the rules set up by credit bureaus. This
disciplinary effect can even be amplified if a culture of credit has already been established in a
society where creditworthiness is seen as a moral characteristic, reflecting one’s honesty, integrity,
and responsibility. In such a society, people who have been denied credit are expected to feel guilty,
because that credit denial means they are morally bad – being dishonest or irresponsible (Lazzarato
2012, 30; Graeber 2011, 121; Krippner 2017, 10). This disciplinary power of credit surveillance
has long been noted by credit bureaus. As a credit expert observed in the early 20th century, “I
know of no single thing, save the churches, which has so splendid a moral influence in the
community as does a properly organized and effectively operated credit bureau” (as quoted in
Lauer 2017, 127).
In summary, with massive data collection, credit surveillance works as a ‘panopticon’ that
targets each individual by sorting and effectively quarantining bad consumers from good ones. In
so doing, it aims at inculcating norms of prompt payment, to discipline individuals to be so-called
‘good’ consumers. However, many have argued that such disciplinary surveillance has become outdated, and that we are entering a post-panopticon era in the digital society. I will closely
examine their main arguments here.

2.3 Three Arguments Against Disciplinary Surveillance

Foucault’s disciplinary surveillance is often seen as out-of-date in the digital society (Deleuze
1992; Haggerty and Ericson 2000; Lyon 2006). Deleuze (1992, 4) criticizes Foucault by claiming

that in the computerized age, the disciplinary society is at an end, and we are moving into a society
of control instead.40 Haggerty and Ericson (2000, 605) in particular use the concept of ‘surveillant
assemblage’ to replace the Foucauldian panoptic or disciplinary surveillance, arguing that
surveillance has become less focused on the individual (the specific development of a particular subject) and more on the ‘dividual’ or ‘data individual’, an exemplar of the masses.
In the context of credit, I suggest the term ‘credit assemblage’ to refer to a Deleuzian
understanding of credit surveillance, in which credit information is collected and analyzed not to
discipline particular individuals but only to calculate the risk of non-payment, and then to exercise a control of risk over a mass of people. Before criticizing this idea of the credit assemblage in more
detail, this section will closely examine three lines of argument about why and how credit
surveillance is believed to have transitioned from credit panopticon to credit assemblage.

2.3.1 The Argument of Fluid Institutions

Foucault’s description of discipline often takes place in some ‘closed institutions’, such as prison,
factory, school, or family (Matzner 2017, 31). As Deleuze shows, Foucault’s disciplinary societies
were typically located in the period from the 18th century to the early 20th century. During that
time, people were regulated by different rules or laws when they moved into different enclosed
spaces:

[F]irst, the family; then the school (“you are no longer in your family”); then the barracks
(“you are no longer at school”); then the factory; from time to time the hospital; possibly
the prison, the preeminent instance of the enclosed environment (Deleuze 1992, 3).

For Deleuze, however, these enclosed places, from prisons and factories to schools and families,
start to become fluid in a digital society. The once-solid institutions, where rules and disciplinary
techniques exist, have begun to melt into a flow of data – only numbers in, and numbers out.
Governments or companies can collect and analyze all the data relevant to some previously

40 As Deleuze claims, “We’re moving toward control societies that no longer operate by confining people but through continuous control and instant communication” (Deleuze 1995, 174).

independent and largely unrelated spaces, like prisons and schools. The boundaries between these
various physical environments have been blurred in the digital world:

We’re in the midst of a general breakdown of all sites of confinement – prisons, hospitals,
factories, schools, the family. The family is an ‘interior’ that’s breaking down like all other
interiors – educational, professional, and so on (Deleuze 1995, 178).

For Deleuze, fixed ‘confinement’ has become an outdated manner in which to understand today’s
mode of capitalism, which is increasingly directed to mass consumption rather than industrial
production in closed institutions. The disciplinary society has become too inflexible to adapt itself
to a changing economy. Therefore, Deleuze proposes turning to ‘a society of control’ as better suited to describing the contemporary situation. A key feature of Deleuze’s control societies is that
control often works in an abstract, remote, and numerical manner.

In the societies of control… what is important is no longer either a signature or a number, but a code: the code is a password, while on the other hand disciplinary societies are regulated by precepts (Deleuze 1995, 179–180).

Disciplinary societies depend on different enclosed institutions where different ‘precepts’ (like
rules or norms) are used to command and instruct particular individuals, so as to improve their
behavior. These norms, as shown in the last chapter when I discussed Foucault’s normalization,
are often based on morality, religious beliefs, or scientific knowledge. The control societies,
however, are regulated not by norms, but by ‘codes’. These codes or ‘passwords’ do not aim at
‘soul training’ to cultivate a particular kind of individual, but instead function more like Deleuze’s
metaphor of a ‘mesh’ which allows someone to pass through, while refusing others who do not
know the codes (Deleuze 1992, 4).
In the context of credit, it seems that credit consumers are not disciplined by any fixed rules
set up by credit bureaus. They just need a credit score, which functions like a risk ‘code’ or
‘password’, enabling access to the market economy, to get what they need (mortgage loans or
credit cards, etc.). This password, unlike one’s passport or ID card that is only owned and used by
oneself, can be taken and used by different individuals. If everyone had the same credit score,

everyone would be treated equally by the banking industry. In short, the credit assemblage only concerns people’s credit scores, and is not targeted at the particular individual who owns the
score. With this in mind, we can consider the next argument.

2.3.2 The Argument of ‘Data-double’

Surveillance as assemblage highlights the importance of the data individual or data double, as
actual persons become less relevant as subjects in control society. This is a crucial point that
distinguishes the surveillance assemblage from disciplinary surveillance. This distinction implies that disciplining real individuals does not matter much, but that the individuals’
representations (such as profiles or behavioral information) are becoming the most critical factor
for control. These individuals’ representations are what Haggerty and Ericson call a ‘data double’
– a de-corporealised body in digital surveillance (Haggerty & Ericson 2000, 611). This data double
can roughly be seen as one’s data profile (information and knowledge) collected by institutions.
The data double is so pivotal in the digital society because it is not simply a representation
of the physical self or the real individual, but has its own independent function in the surveillance
assemblage. As shown in Diagram 1, by analyzing the data double, institutions can acquire
new knowledge about actual people, and then determine whether they are to be rewarded or
punished socially. For example, by calculating people’s credit profiles, banks know the risks
associated with potential credit consumers, and are able to decide the interest charged to them in
mortgage applications. In this process, companies or bankers will not and do not need to care about
the particular experience of real subjects. In other words, commercial institutions can make profits
by distributing the offer or refusal of social benefits to real individuals by analyzing only the data
double. According to the idea of surveillance assemblage, such an analysis of the data double itself
will not influence or discipline the behavior of real subjects, since the disciplinary power only
works on real subjects – the real body of individuals – to improve their behavior according to
certain standards.

[Diagram 1: The Deleuzian mode of control – companies analyze the data double to generate profits, exercising control over real subjects]

Admittedly, this non-disciplinary mode does not mean that such remote control over the data double is unproblematic. Scholars who agree with the concept of a surveillance assemblage
will be concerned that such remote control could lead to a subtle domination of individuals. The
problem of this domination is what Zygmunt Bauman described as ‘adiaphorization’, a kind of
moral indifference (Bauman & Lyon 2013, 8-9). That means that in an algorithmic society,
people’s identity as abstracted from data analysis is more important than real human beings, and
companies downplay the moral existence of real subjects.
In the context of credit scoring, the data double of an individual is his or her credit reports
(also known as credit files) and the credit score (derived from all the information in the credit
reports). Credit bureaus will only need to analyze the data double by re-calculating the credit
reports in the form of risk. By sorting the data double into different categorizations of risk, credit
bureaus are able to make a suggestion as to whether credit should be conferred, or what rate of
interest should be offered. In this process, it seems that credit bureaus only care about individuals’
credit reports, and the credit scores derived from the real subjects; they will not seek to discipline people to be good consumers. In this way, the idea of the surveillance assemblage (or the credit assemblage in the specific setting of credit) assumes that there is a clear-cut line between the data individual and the actual one, and that analyzing the former will not influence the latter.
However, this assumption is unsound, a problem that I will address more closely later.
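The decision logic being assumed here can be sketched as follows. In this minimal, purely illustrative example (in Python), the institution ‘decides’ on the data double alone; the score bands and interest rates are hypothetical placeholders, not real lending criteria.

from dataclasses import dataclass

@dataclass
class DataDouble:
    # A de-corporealised stand-in for a person: nothing but numbers.
    score: int

def decide(double: DataDouble) -> dict:
    # The decision refers only to the data double, never to the lived
    # experience of the actual subject. Bands and rates are hypothetical.
    if double.score >= 740:
        return {"credit": "granted", "annual_rate": 0.04}
    if double.score >= 620:
        return {"credit": "granted", "annual_rate": 0.09}
    return {"credit": "refused", "annual_rate": None}

print(decide(DataDouble(score=750)))  # credit granted at the cheapest rate
print(decide(DataDouble(score=580)))  # credit refused

The sketch makes the assumed separation visible: the procedure never refers to the actual person, only to the numbers that stand in for them.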

2.3.3 The Argument of Risk-based Control

Deleuze argues that in the digital society, there is a shift of control mechanism from Foucauldian
‘mold’ to ‘modulation’ (Deleuze 1992, 4). A mold is an enclosed container used to shape an object
into a fixed and final form. In the case of discipline, for Deleuze, an individual’s attitudes and
qualities are molded into a particular type of person (ibid.). Unlike discipline’s molding, control’s
‘modulation’ process refers to a changing but continuous control over a wave, which describes the
‘trends’ of movement rather than a particular event. In control societies, the modulation of control
“is short-term and rapidly shifting, but at the same time continuous and unbounded” (Deleuze 1995, 181). In contrast, discipline is “long-term, infinite, and discontinuous” (ibid.). As such, control
and discipline are fundamentally different:

Confinements are molds, different moldings, while controls are a modulation, like a self-
transmuting molding continually changing from one moment to the next, or like a sieve
whose mesh varies from one point to another (Deleuze 1995, 178–9).

In the context of credit, according to the idea of surveillance assemblage, the credit scoring process
is a typical form of what Deleuze calls ‘modulation’, or a free-floating control. In the last few
decades, the traditional disciplinary surveillance that is based on the exclusion/inclusion sanction
seems to have become outdated, while a new mode of ‘risk-based pricing’ has become omnipresent
in the credit industry. For example, the FICO Score, the biggest credit scoring service in the United
States, clearly demonstrates this point:

Instead of being limited to strictly yes/no credit decisions, lenders can offer different rates
to different borrowers. Even if you’re a high-risk borrower with low FICO Scores, lenders
can decide to extend you credit you’re more likely to be able to manage, at a higher interest
rate.41

In such risk-based pricing, the goal of credit bureaus is to control overall risk on the basis of past credit records and the behavior of the population, rather than of individuals.42 Based on this population-level pricing model, credit bureaus sort consumers into different categories and then assign an interest rate to each would-be consumer according to how they are categorized. In this process, one’s credit score is not a static ID exclusive to one as an individual, but a dynamic and shifting category, always calculated from a consumer’s own credit profile as well as the statistics of the population at large.

41
See <https://www.myfico.com/credit-education/whats-in-your-credit-score>
42
As Marron characterizes the notion, “Risk is thus the apogee of a relativistic articulation of creditworthiness: its
attribution to individuals is dependent upon the specific technology of risk through which they are interpreted and the
formulation of the population within which they are located” (Marron 2009, 164).

All in all, many have argued that the credit scoring process is better understood as a surveillance assemblage than as disciplinary surveillance. They highlight that credit surveillance nowadays has been detached from the real character of individuals, becoming a pure marketing strategy to calculate the risk associated with potential credit consumers. As a result, it appears that a credit scoring algorithm is merely used by lenders “to differentiate, sort, target and price customers in terms of risk”, and not as a tool to discipline consumers so that they meet their repayment obligations (Langley 2014, 7). However, these arguments are not wholly convincing, as they fail to register newly emerging developments in the Big Data age. In the next section I will provide three responses, each arguing that disciplinary surveillance is still a relevant topic for analysis in a digital society, even if it has changed into a new form.

2.4 Defending Disciplinary Surveillance: Three Responses

As Deleuze claims, discipline is “long-term, infinite, and discontinuous” (1995, 181). It seems, then, that discipline is not possible within the calculation of risk, which is a flowing and unfixed form of statistical control. That is why credit bureaus, according to the idea of the surveillance assemblage, focus only on sorting the data double into different categories of risk, which has nothing to do with the discipline of actual subjects. However, I will argue in this section that the statistical calculation of risk is inherently connected with a disciplinary process. Specifically, I will provide three responses, showing that the understanding of credit merely as a surveillance assemblage is insufficient to capture the whole picture of credit surveillance in an algorithmic society.

2.4.1 Disciplinary Power Beyond Enclosed Institutions

Deleuze claims that Foucauldian disciplinary surveillance is outdated in the digital society because disciplinary power has to be confined to certain enclosed institutions. Nonetheless, I will show that Foucault’s disciplinary power does not necessarily depend on fixed institutions, but can also function beyond physical boundaries, in the digital world.

For Foucault, discipline is a type of power, “identified neither with an institution nor with an apparatus” (1977, 215). This disciplinary power is exercised through a whole set of mechanisms that lead subjects to discipline themselves, and it is distinguished from the ‘discipline-blockade’, which merely quarantines individuals’ bodies in certain closed places – figuring out who the ‘bad’ ones are and separating them from the ‘good’, in a manner that is often direct, mandatory, and coercive (1977, 209). Disciplinary power is a technique of the self, dealing with how subjects come to discipline themselves in a “lighter, more rapid, more effective” manner (ibid.). A typical example is panopticism: individuals discipline themselves under the operation of a panoptic gaze. Since it is not restricted to the panoptic gaze or the mechanism of visibility, this disciplinary power over the self, or self-discipline, can go beyond enclosed institutions and function through other mechanisms in a diffuse and multiple way throughout the digital society. Such self-discipline beyond enclosed institutions can be explained in two key aspects.
First, in the algorithmic society, digital platforms function as a new form of ‘enclosure’.
Deleuze holds that Foucauldian discipline can only function within some enclosed institutions like
schools, prisons or hospitals. These spaces are all enclosed – that is, they are surrounded by a
physical boundary or barrier. It is true that enclosure is crucial for disciplinary power to work well, as it ensures that there are relatively fixed rules or norms by which to discipline individuals. As Deleuze claims, with the digitization of physical institutions, the enclosure has been dissolved
into fluid-like data which can flow between very different institutions. A credit score, for instance,
seems to resemble a kind of fluid existence, where there is no fixed shape but only numerical inputs
and numerical outputs.
In spite of this, I argue that such an enclosure is not necessarily a physical one, but can also
be realized in a digital form, as a digital platform. As José van Dijck et al. show, our society is increasingly platformatized: our public lives are mediated by platforms like Twitter or Facebook, and our private lives are driven by apps like Tinder or Fitbit (van Dijck, Poell & de Waal 2018, 2). These platforms have no physical boundaries, but they do have a form of digital enclosure, one that is open yet relatively fixed at the same time – as we find with applications or particular websites. On these platforms, the developers have set up their own separate rules to govern their users’ behavior. When a video is uploaded onto YouTube,
for example, there is a list of very clear and detailed norms – its ‘Community Guidelines’ – that regulate user behavior.
Credit scoring systems also take such relatively fixed shapes as platforms. For instance, FICO, the most popular credit scoring company in the United States, has established its own platforms, such as the official myFICO website and the myFICO application. The myFICO website has a dedicated ‘Credit Education’ section, a place to educate people on what FICO Scores are and how they are calculated. The myFICO application is deployed in a manner similar to personal fitness trackers like Fitbit, helping people to “be smart about [their] FICO Scores anywhere, anytime, and it’s fun to boot.”44 In this light, even though the myFICO platform has no physical boundary separating it from other platforms, it functions as a type of ‘digital institution’ that does have disciplinary power.
We will now consider the second aspect of self-discipline beyond enclosed institutions: a risk-based social sorting process can, at the same time, be a disciplinary process. It is true that the governance of risk through social sorting has become typical in a computerized society. Digital social sorting does not depend on quarantining individual bodies in enclosed sites, but focuses only on individuals’ digital traces. This data is collected and then ‘re-assembled’ to achieve certain goals (Haggerty & Ericson 2000, 606). In this sense, as many post-Foucauldian scholars believe, surveillance merely sorts data-individuals, rather than disciplining individuals to be ‘good’ in real life (Galic et al. 2017, 20-21).
This point assumes that risk-based social sorting in a digital society can be clearly separated from discipline, and that credit surveillance in a control society consists mainly of social sorting for the governance of risk, which does not discipline individuals into becoming ‘good’ consumers. In actuality, however, the two mechanisms are internally connected. Even in the Deleuzian digital society, the technique of self-discipline can work hand-in-hand with social sorting: people who realize they are sorted into a particular category will discipline themselves to change their behaviors, in order to fit into a preferred category or to avoid being relegated to a
bad category (Manokha 2018, 221). For example, the airport scanning system can be seen as a
kind of social sorting, where people are sorted into those who are allowed to pass the gate and those who are not. Such a sorting process often carries disciplinary effects, since the sorting itself is also a system of punishment: being denied entry at the gate functions as a punishment, one that is not inflicted on those who are allowed through. In response, would-be passengers will most likely discipline themselves to comply with the rules in order to take a flight.

44
See <https://www.fico.com/en/newsroom/fico-introduces-free-app-for-iphone-and-ipod-touch-04-08-2010>
In the context of credit scoring, the assigned score is significant for consumers because it enables them to access loans and to borrow from banks at better interest rates. If someone, due to a poor credit score, is rejected for a car loan by a bank, he or she may start to learn the norms and rules about how to improve credit scores, and try to discipline themselves to adapt their behavior accordingly. Therefore, the discipline of individuals and the social sorting of data doubles are not clearly separated, but are often complementary to each other.

2.4.2 Beyond the Data Double: The Participatory-disciplinary Subject

A crucial argument against disciplinary surveillance is that surveillance in the digital society focuses only on the data left behind by individuals, rather than on the training and betterment of real subjects. This argument basically assumes that the data double can be detached from the real existence of individuals, or at least that remote control over a data double will not influence the actual subject. This assumption, however, is ill-founded. I will argue that the over-emphasized focus on the data double ignores the significant participatory experience of real individuals.
Admittedly, it is true that companies’ analysis of the data double can bypass human beings, since they can make profits merely from the knowledge produced by that data analysis. However, this process has changed in the Big Data era. As Zuboff argues in The Age of Surveillance Capitalism, to gain the greatest profits, companies have used Big Data technologies not only to analyze data doubles and predict consumer behavior, but also to modify and direct consumer behavior towards “their most profitable outcomes” (Zuboff & Laidler 2019).45 This behavioral modification depends not only on the control of a data double, but also on disciplinary techniques applied to real individuals, “with subtle and subliminal cues, rewards, and punishments” (Zuboff & Laidler 2019).

45
See <https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-capitalism-is-undermining-democracy/>

The modification of behavior means that the relations
described in Diagram 1 are not sufficient, and we need to take both the control of the data double
and the discipline of individuals into consideration in a Big Data society. A more complete
description is given in Diagram 2 below:

companies → (control) → data double → (discipline) → real subjects → profits

Diagram 2: Discipline in the Big Data era

This new diagram reminds us that in real life, the data double cannot be separated from its real subject. By severing the link between individual and data double, commercial entities treat us solely as numerical data, and subjects are wrongly reduced to passive beings who simply wait to be evaluated and sorted by institutions. In fact, as mentioned in the Introduction, it is actual individuals, not data doubles, “who gets offered a favorable price in the supermarket, an opportunity for social housing, or a legal penalty” (Couldry & Mejias 2019, 344). Focusing only on the data double can lead to the neglect of the lived experience of real human beings.
In practice, the realities of the lived body are more complex than the reductionist and passive mode of the data double suggests. I will explore the participatory aspect of the subject living
under surveillance, but I will simultaneously offer a critique of the overly optimistic idea of cultural
subjectivity that is unrestricted by power relations. What I try to argue for is a twofold type of
subject that is participatory-disciplinary in the Big Data society.
First, real individuals are often active subjects rather than passively compliant ones. In
surveillance studies, many scholars have noticed the participatory aspect of subjects in surveillance.
In the Introduction, I have discussed how Albrechtslund (2008) uses the term ‘participatory
surveillance’ to understand the active subjectivity of citizens and consumers in surveillance. I also
mentioned the study made by Boyd and Ellison (2007, 16), which shows that users often actively
engage with surveillance platforms. Moreover, some kinds of self-surveillance enabled by the use
of healthcare applications and wearable technologies are also becoming popular in the era of self-
quantification (Lanzing 2019, 13).

All these theories rightly emphasize the importance of a participatory subject – one that is active and engaged in the surveillance situation. In analyzing surveillance, we should not dismiss the participatory aspect of subjects in lived experience. Companies may focus on analyzing
and predicting humans’ data doubles to make profits, but their analysis of data doubles will
influence real individuals, considering that it is only on the basis of the data analysis that decisions
can be made as to whether an individual can get a mortgage or a loan. As participatory subjects rather than passively compliant individuals, people who realize they are being sorted may adapt their behaviors to fit into a preferred category, or to avoid being classified in an unfavorable one.
Second, that subjects are participatory does not mean that they are completely non-
disciplined, but rather that they are often somewhat prescribed and normalized by algorithmic
systems. It is true that subjects are involved in surveillance as increasingly knowledgeable and
active participants. However, because it downplays the function of the power structure, this overly participatory account of surveillance can be seen as offering an ‘illusion of self-control’ (Galic et al. 2016, 30). In the early years of the computerized society, subjects were perhaps able to perform positive subjectivity by, for example, evading surveillance cameras. But in the current age of Big Data, it has become harder and harder to actively resist or transform the surveillance structure. Jamie Susskind defines a new type of power in the Big Data age, ‘perception-control’, which is applied to “get someone to refrain from doing something” by preventing them from “desiring it in the first place, or to convince them that their desire is wrong, illegitimate, shameful, or even insane” (2018, 142). In a Big Data society, where predictive algorithms and AI are spreading and perception-control technologies are becoming more common, there seems to be little room for people to resist – or even to know that they should, because the willingness to resist may have been replaced or modified in the first place.
Perception-control and behavioral modification technologies can potentially be used by
capitalists to steer minds and behavior towards a direction that is more beneficial for the powerful.
For example, self-tracking is often seen in mHealth, where users can be seen as actively engaging
with surveillance. However, not only are users de facto being tracked continuously in the
background activity of such apps, but the solid asymmetrical power structure is one that people
can hardly challenge. That means that apparently participatory surveillance turns out to be a new type of discipline: through active participation, people internalize norms of responsibility in ‘a self-induced process of self-disciplining’ (Galic et al. 2016, 30). The subject thus becomes a participatory-disciplinary (‘participline’) one.

2.4.3 Discipline Through the Algorithmic Imaginary

According to the idea of the Deleuzian control society, surveillance in the digital society is a ‘modulation’ rather than a ‘molding’: it dynamically regulates risk over a population rather than disciplining particular individuals. In the context of credit scoring, I will show that despite the risk-based pricing policy, credit scoring systems are still fundamentally disciplinary, since they remain based on a reward-and-punishment mechanism. I will also illustrate in detail how the disciplinary power of credit scoring systems operates through a kind of algorithmic imaginary.
In the credit scoring industry, a threshold is often set to help lenders decide whether to approve a credit application, and on what terms and at what interest rate. It is true that in the Deleuzian control society this threshold often shifts across lenders and their different risk-control goals. It may also be true that risk-based pricing targets a population rather than particular subjects. However, the threshold still constitutes a standard and a criterion, no matter how much it shifts; and the financial perks (like mortgages and car loans) that are assigned or withheld will influence – discipline – individuals’ behavior, regardless of whether those advantages are based on the risk level of the population or of individuals. That is to say, the risk-based pricing policy in credit scoring systems does not change their core feature, which is fundamentally about disciplining consumer behavior through reward and punishment. The credit scoring system can thereby effectively discipline a consumer with the simple threat of rejecting the credit application, or of offering punitive interest rates.
Besides the threat of punishment, credit scoring systems are increasingly becoming incentive-based. For instance, the FICO algorithm does not follow “knock out rules” that “turn down borrowers based solely on a past problem”; instead, it “weighs all of the credit-related information, both good and bad, in a credit report.”46 Under this policy, one’s credit score can always be improved, so credit users are motivated to work on their credit persistently.

46
See <https://www.myfico.com/credit-education/credit-scores/how-scoring-helps-you>

This reward-driven feature is explicitly reflected throughout the FICO website, which is flooded with
rosy words, pictures, and videos showing how a high FICO Score can help individuals reap plenty of benefits.47 With rewards and punishments, credit scoring systems are basically disciplinary systems, which intend to do more than simply calculate the risk associated with particular credit users.
As shown with Foucault’s disciplinary surveillance, a core aspect of every disciplinary
system is to encourage individuals’ compliance with expected norms. But what are the norms for
disciplinary surveillance in an algorithmic society? And how does the process of normativity work?
This is where the algorithmic imaginary comes into play.
The term ‘algorithmic imaginary’ was coined by Taina Bucher, in her study on Facebook
users’ interaction with algorithms (Bucher 2017, 30). The algorithmic imaginary refers to “ways
of thinking about what algorithms are, what they should be, how they function and what these
imaginations in turn make possible” (Bucher 2017, 39-40). Bucher argues that such an imaginary changes how users behave toward algorithms. In the case of Facebook, for instance, users may adjust their data-sharing behaviors to make full use of the perceived functions of algorithms. In the algorithmic society, users can also exercise this algorithmic imaginary to discipline themselves and improve their own behavior according to the norms set up by the disciplinary system in question. The imaginary can be acquired from one’s own experience with an algorithm, from the sharing strategies of people who have achieved high scores, or from a platform’s deliberate disclosure of its algorithms. With the algorithmic imaginary, users internalize those norms and change their own behavior. Among these various sources of the algorithmic imaginary, I will discuss how a general algorithmic transparency triggers users’ imaginary, since this is a crucial source of norms for credit scoring systems. I will show that credit consumers can exercise the algorithmic imaginary to discipline themselves, and thereafter improve their credit scores. To do that, I focus on a specific example, namely the FICO Score, the most widely used credit scoring tool in the United States.

47
For example, the headline is “Don’t let poor credit stand in the way of achieving your dreams.” There are also several videos on the website describing how a high FICO Score can “save consumers thousands on a car loan or mortgage and give you access to the best credit cards, higher credit limits and more.” In the booklet describing FICO’s algorithm, a specific story is narrated to emphasize the difference between an individual with a 620 FICO Score and a person with a score of 760: it “can be tens of thousands of dollars over the life of a loan.” See the booklet Understanding FICO Scores, published by FICO.
In contrast to common perceptions, the FICO scoring algorithm is not completely black-boxed (Pasquale 2015, 3). On the contrary, the company has revealed some of the inner workings of its algorithms, demystifying its scoring process. Algorithmic transparency is often seen as a democratic value that runs counter to the pernicious influence that algorithms appear to have on decisions we make, or decisions that are made about us (von Eschenbach 2021, 1607; Jauernig et al. 2022, 2). But it seems that it is precisely through this seeming act of transparency that the disciplinary power of the FICO Score effectively works. Algorithmic opacity greatly limits the disciplinary effect of credit surveillance: credit users cannot know what credit bureaus expect them to do to improve their creditworthiness, so they simply wait passively to be evaluated (Creemers 2018, 27). Transparency about how credit scoring systems work, on the other hand, lets consumers know not only how the algorithms operate, but also what norms should be followed. What algorithmic transparency does is open the black box, letting consumers know what the rules are, so that people can actively participate in the disciplinary system.
For example, the FICO Score System discloses three types of explanation about how its algorithm works: “1) the categories of data collected, 2) the sources and techniques used to acquire that data, and 3) the specific data points that a tool uses for scoring” (Hurley & Adebayo 2016, 204, 213). These disclosed explanations may seem to be scientific and objective information about the FICO scoring algorithm. Nonetheless, the information is transformed into normative rules when it is intentionally arranged within a normative discourse. With the informational disclosure of its algorithm, for example, the FICO Score clearly explains the specific data points used for scoring, listing them in order of relative significance. Based on these data points, FICO compiles the profile of the so-called ‘FICO High Achiever’, that is, the characteristics of consumers in the highest range of the scores:48

Payment History: About 96% have no missed payments at all; only about 1% have a collection listed on their credit report; virtually none have a public record listed on their credit report.

Amounts Owed: Average revolving credit utilization ratio is less than 6%; have an average of 3 accounts carrying a balance; most owe less than $3,000 on revolving accounts (e.g., credit cards).

Length of Credit History: Most have an average age of accounts of 11 or more years; age of oldest account is 25 years, on average.

48
See <https://www.myfico.com/credit-education-static/doc/education/myFICO_UYFS_Booklet.pdf>

The listed characteristics of the ‘FICO High Achiever’, derived from the different data points, build an ideal model for credit consumers. This ideal model can subtly motivate credit applicants to identify themselves with the detailed characteristics of the FICO High Achiever. The motivation works well because the FICO Score is essentially a disciplinary system, within which consumers tend to be ready to improve their credit scores under a reward-and-punishment mechanism.49 This normative power is explicit in the ‘tips’ provided by the FICO System to help people improve their scores. Instead of just describing what the ideal model is, these ‘tips’ are concrete norms that directly inform people how to identify themselves with the model. Some specific tips are:50

Pay your bills on time.
Avoid having payments go to collections.
Keep balances low on credit cards and other ‘revolving credit.’
Don’t close unused credit cards in an attempt to raise your scores.

The seemingly scientific features of the FICO algorithm thus transition from what is to what should be. By ranking the data points in order of relative importance, for example, the system subtly establishes a set of standards regarding which behaviors are commendable and which are not. These norms may seem less coercive: people can obey or dismiss them. But given the strong set of rewards and punishments involved, consumers, as rational individuals, will try to better their positions by taking advantage of the various systems. The financially vulnerable, especially, have an incentive to normalize themselves in order to gain financial advantages. For instance, upon learning that ‘payment history’ is considered in FICO’s algorithm, individuals will tend to make prompt repayments to improve their credit scores. In so doing, the FICO Score System inculcates various norms of responsibility in consumers’ minds.51

49
To be sure, the real situation can be more complex. For example, some rich people may not be much concerned with their credit scores, as they might have plenty of cash. Some poorer people may not conform to what the norms dictate, even though their credit scores are low. But generally, as rational agents within a disciplinary context, people tend to discipline themselves to obey the rules in order to avoid punishment and receive more benefits.
50
Source: myFICO website <https://www.myfico.com/credit-education/improve-your-credit-score>
Through a kind of algorithmic transparency, credit scoring systems prescribe the de facto norms for users to follow, informing them how to make full use of the algorithm to improve their scores. These normative rules are not derived from law or moral imperatives, making no reference to nature (as many laws do) or to morality (what one should do or be). Rather, they are constructed according to a science of the algorithm and how it works. Admittedly, the disclosure of the inner workings of FICO’s algorithm remains quite general. The FICO Score has indeed explained the relative weight of the five categories in its algorithm, but it does not “tell individuals how much any given factor mattered to a particular score” (Citron & Pasquale 2014, 17). The upshot, however, is that even this general disclosure of how the algorithm works is enough to arouse people’s algorithmic imaginary, leading them to internalize the norms and change their behavior.

2.5 Towards Algorithmic Discipline in the Big Data Era

Up to this point, I have clarified why the disciplinary power of credit surveillance is still possible
and relevant in the digital society. At the same time, I have also shown that Foucault’s disciplinary
surveillance should be updated to accommodate some crucial changes in the era of Big Data. As a
result, I construct a new type of disciplinary surveillance in the algorithmic society, which I term
algorithmic discipline. Algorithmic discipline means that individuals who are subject to an
algorithm-based system (a scoring system, for example) tend to participate in self-evaluation and
change their behavior to meet the expectations defined by algorithms.

51
That FICO Scores discipline consumer behavior through transparency has been shown to be effective in a recent empirical study by Homonoff et al. (2018, 23-24).

Algorithmic discipline is different from Foucauldian disciplinary surveillance, since algorithmic discipline works not in enclosed institutions (such as a school, prison, or factory), but on digital platforms, which are more open environments. Meanwhile, algorithmic discipline is
also distinguished from the Deleuzian control society or surveillance assemblage, because
algorithmic discipline is not only a tool to discriminate between customers in terms of risk, but
can also directly modify consumer behavior.
In this light, Karen Yeung’s notion of ‘hypernudging’ (2017) is related to algorithmic discipline, but it is a different and weaker form of steering behavior than the discipline I have discussed. Modifying people’s behavior through ‘nudging’ usually involves changing the architecture of choice – a soft form of control that requires no outside force or large economic perks. A grocery store, for example, can promote a healthier diet by placing vegetables rather than chips in a more accessible spot. Such a change in the layout or ordering of food in a store is a change in the architecture of choice. Yeung’s ‘hypernudge’ refers to a more advanced contemporary nudge, in which Big Data technologies are used to steer individuals’ ‘attention and decision-making’ in directions favored by a ‘choice architect’ (Yeung 2017, 118). The basic idea of the hypernudge is similar to that of a regular nudge: it shapes one’s behavior directly and only by changing the choice architecture, without force or coercion (Thaler & Sunstein 2008, 2013; Nys & Engelen 2017).52
In contrast, algorithmic discipline is distinct in several respects. First, algorithmic discipline does not impose directly on people’s behavior, but works through norms (e.g., algorithm-based norms). Norms must be defined and explicitly stated so that people can abide by them. Norms therefore do not aim at directly and immediately changing an individual’s decision-making, but rather work through a long-term training process. Second, algorithmic discipline depends heavily on a punishment-and-reward system. Algorithmic discipline has become increasingly ‘soft’ as it appeals to the participatory subject with incentive mechanisms. Nonetheless, the threat of punishment is never completely absent. Third, algorithmic discipline needs some degree of transparency to work effectively. Whatever the norms or the punishment-
and-reward mechanism, algorithms can discipline successfully only when they are known and understood by the very people who are targeted. Otherwise, subjects would either lack motivation for active participation, or would find it difficult to know what norms they should be following.

52
Whether hiddenness is a necessary characteristic of nudges is contestable. Nudging usually depends on the hidden nature of the change in the choice architecture: if people already knew that their choice architecture had been changed, the nudge would most likely be ineffectual.

2.6 Conclusion

This chapter has tried to further strengthen the framework of colonization by exploring a new mode of disciplinary surveillance in the algorithmic society – algorithmic discipline. By instilling what counts as normal and what as deviant, normative expectations become embedded in algorithms; through punishment and reward mechanisms, people are then disciplined to conform to those norms. Algorithmic discipline can modify individuals’ behavior, but this behavioral modification is a long-term form of training, in which a continual disciplinary process is needed to reinforce the new behavior, ultimately making behavior more controllable.
What is noteworthy is that algorithmic discipline is not in itself wrong. When AI algorithms are applied to diagnose diseases, for example, doctors need to follow certain rules and norms instructed by the algorithms in order to operate the AI systems properly. With regard to credit scoring systems, algorithmic discipline may also be less problematic if it is only used to discipline our financial life.53 However, algorithmic discipline becomes seriously problematic when it is applied to areas where we need open interactions, so as to explore together with others the specific meanings of the good life. This intrusion of algorithmic discipline can lead to what I call algorithmic colonization, which can restrain and even undermine people’s open-minded and respectful interactions with others. In Part II, I will focus on the cases of love and trust, to show how algorithmic discipline colonizes our most intimate relations and the crucial value of trust in democratic societies.

53
Algorithmic discipline may provide an effective way to reduce creditor risk by controlling the borrower more
steadily, so as to discipline them into being good consumers (with prompt payment, for example). However, some
scholars think that it is problematic in itself when credit scoring algorithms discipline consumers into some financial
identity that focuses on calculation and money (Lauer 2017; Lazzarato 2012).

PART II: Case Studies: Algorithmic Love and Trust

Chapter 3: The Algorithmic Colonization of Love Life54

Nowadays, numerous dating platforms are deploying so-called ‘smart’ algorithms to identify a
greater number of potential matches for a user. These AI-enabled matchmaking systems, driven
by a rich trove of data, not only predict what a user might prefer, but also deeply shape how people
choose their partners, and change attitudes toward connection, intimacy, and love. In this study, the term ‘love’ refers to the most intimate bond between individuals, rather than family love or one’s emotional attachment to animals or objects (Kottman 2017, 20). Based on the reformulated
Habermasian colonization thesis, this chapter will critically explore the insidious influence of the
delegation of romantic decision-making to an algorithm. In Habermasian terms, love and romantic
relationships are crucial parts of our lifeworld. Dating applications act as systems that encroach
into and thereby transform our lifeworld, structuring our romantic relations by their own priorities
of automation, decontextualization and commodification. Such encroachment has the potential to
result in what I call the ‘algorithmic colonization of love’, where the disciplinary imperatives of
algorithms are stretched into the romantic domain and crowd out the interactive part of love.
I propose that we maintain autonomy in order to make algorithms serve our love relations, rather than the opposite. I understand that discussions of love may seem to concern not so much autonomy as interpersonal affection and emotional attachment, since love – a processual rather than a decisionistic phenomenon – means interacting with others and exploring the meaning of intimacy together. Nonetheless, the concept of autonomy here does not mean an individualistic autonomy, but rather a relational one, the general idea being that one’s identity is developed within social relationships and shaped by social forces (Mackenzie & Stoljar 2000; Christman 2004). Relational autonomy can be seen as “an umbrella term”, which highlights “a more realistic, social picture of autonomy” (Lee 2022, 97, emphasis in original). To be more specific, I use the term ‘dialogical autonomy’ to describe a particular kind of relational autonomy, one which emphasizes the dialogical
and interactive aspect of autonomy. Some scholars also use ‘dialogical autonomy’ to capture the dialogical process of autonomy (Lee 2022; Westlund 2011). They point out that “autonomy requires dialogical disposition to hold oneself answerable to external critical perspective” (Lee 2022, 97).

54
This chapter is based on three articles of mine: 1) Algorithmic Colonization of Love: The Ethical Challenges of Dating Apps Algorithms in the Age of AI. Techne: Research in Philosophy and Technology (accepted); 2) AI Manipulation of Love? Governing AI-driven Coaches in Surveillance Capitalism (in preparation); 3) Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency. Philosophy & Technology 35(69): 1-25. <https://doi.org/10.1007/s13347-022-00564-w>
In this chapter, I basically share the standpoint that it is crucial for autonomous agents to
“engage and participate in interpersonal discussion or discursive exchange” (98), but I will develop
the dialogical account of autonomy based on Habermasian communication theory, and apply such
dialogical autonomy to examine the specific issue of AI-driven love relations. In love relations,
such a dialogical autonomy as a constitutive part of the lifeworld means that people are free (or
should be free) to explore together with others the specific meanings of intimacy and the good life.
Algorithmic colonization of love can be wrong when algorithmic imperatives impede such dialogical autonomy, leaving users unable to develop love relations with others through open and respectful interactions, and confining them instead to non-negotiable norms prescribed and manipulated by dating algorithms.
This chapter is divided into six main sections. It starts with a description of how algorithms
and AI have been increasingly used to suggest and steer human love relations in the Big Data era.
Sections two, three and four will closely consider how dating algorithms restrain individuals’ open
interactions and exploration in the cultural, social and personal dimensions. Section two focuses
on how dating algorithms can cultivate a dating culture that is less interactive and participatory.
Section three examines how user data on intimate relations – an important condition in assuring
people’s open interactions with others – can be exploited and manipulated by surveillance capital,
which may impede and distort the healthy development of human love relations. Section four
investigates how dating algorithms can drive subjects to become increasingly trapped in a love
filter bubble, failing to see other possibilities of identity. In Section five, I attempt to discover a
dialogical process of autonomy in the concept of lifeworld; at the end of this chapter, I propose to
retain such a dialogical autonomy in algorithmic love, so as to ensure that people can openly
interact and explore with others in intimate relations.

3.1 Love Mediated by ‘Smart’ Algorithms

During the COVID-19 pandemic, dating applications exploded in popularity. In March 2020, Tinder set a new daily record: 3 billion swipes. From March to May 2020, usage of OkCupid spiked by 700%; in the same period, Bumble’s video calls alone surged by 70% (Chin & Robison 2020).55 However, searching for love online was already popular before March 2020, and the
pandemic only accelerated the trend. According to a 2019 Stanford study, online dating has
displaced “the intermediary roles of friends and family” as the number one way that American
people meet potential partners (Rosenfeld et al. 2019, 17753). For such a popular service, the
specific inner workings of online dating platforms are still rather obscure to most users. But the
general idea is simple: to transform romantic love – humankind’s most intimate relation – into a
mathematical algorithm.
As early as 1965, researchers used punch cards to record questionnaires about who an ‘ideal date’ might be (Bridle 2014).56 The information was then fed into a five-ton IBM computer to suggest potential matches. People believed the big matchmaking machine was ‘The Great God Computer’: it would “know something that [people] don’t know” (Bridle 2014).57 With the rise of the Internet in the 1990s, computer matchmaking turned into online dating. Many companies established online platforms for Internet users to post self-advertisements, as well as to browse other users’ profiles (Finkel et al. 2012, 3). The matching process of these dating websites was largely random, since the profiles presented to users were more or less a random grouping of people.
Over the last two decades, dating algorithms have been updated to be more predictive.
Online dating companies have launched matchmaking algorithms to filter partners based on their
profiles. After users answer a questionnaire, algorithms analyze and predict suitable dates, and
suggest matches automatically. These algorithms were often claimed to be a more scientific and
reliable way to find soulmates, in contrast to human judgment (Rudder 2014).58 In the 2010s, with
the popularity of smartphones, dating applications dominated the online dating industry. Their
profiling algorithms have been largely simplified: users can be matched by merely swiping through headshots (Schwartz & Velotta 2018, 57).

55
Retrieved from <https://www.brookings.edu/blog/techtank/2020/11/20/this-cuffing-season-its-time-to-consider-the-privacy-of-dating-apps/>
56
Retrieved from <https://www.theguardian.com/lifeandstyle/2014/feb/09/match-eharmony-algorithm-internet-dating>
57
Ibid.
58
Retrieved from <https://www.youtube.com/watch?v=m9PiPlRuy6E>

While online dating algorithms tend to be
increasingly ‘intelligent’ because they incorporate more machine learning, many dating apps are
deploying AI to create different sorts of smart matchmaking, where users need no longer swipe
the screen (Agrawal 2021).59 These AI-driven dating algorithms do not specify what makes a good
match. They simply match people according to a correlation inferred from the large trove of data
gleaned from user engagement with the application.60
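The platforms do not publish their models, but the underlying shift – matching by correlation in engagement data rather than by any stated theory of compatibility – can be sketched as simple collaborative filtering over swipe histories. All data and numbers below are invented for illustration.

```python
import numpy as np

# Invented swipe matrix: rows are users, columns are candidate profiles;
# 1 = swiped right, 0 = swiped left or never shown. There is no
# questionnaire and no model of attraction, only engagement data.
swipes = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0],
])

def recommend(user: int, k: int = 2) -> list[int]:
    """Suggest unseen profiles liked by users whose swipe histories
    correlate with this user's: correlation, not compatibility."""
    norms = np.linalg.norm(swipes, axis=1) + 1e-9
    sims = (swipes @ swipes[user]) / (norms * norms[user])  # cosine similarity
    sims[user] = 0.0                      # ignore self-similarity
    scores = sims @ swipes                # profiles weighted by similar users
    scores[swipes[user] > 0] = -np.inf    # drop already-liked profiles
    return list(np.argsort(scores)[::-1][:k])

print(recommend(1))  # e.g. [3, ...]: what users who swipe like user 1 liked
```

The sketch makes the logic visible: a ‘good match’ is simply whatever users with a similar swiping pattern have already swiped on.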
Recently, a new trend has emerged in the online dating market: instead of simply recommending potential matches, AI-driven love coaches are deployed to help users with their entire online dating journey, from profile creation and texting strategies to managing almost every detail of romantic interaction. A prominent example is an AI dating chatbot named ‘Lara’, developed by Match.com. Applying natural language processing to interact with users, Lara guides users on what to wear, how to flirt, how to apply makeup, and so on, so as to increase the chance of dating success (Li 2019).61
eHarmony has also created an advanced AI which can analyze every detail of a user’s chat history,
and provide them with personalized advice about how to make the next move (Tuffley 2021).62
Loveflutter’s AI system can even coach users on when to start physical dating and which
restaurants, bars, or clubs to go to (Agrawal 2021).63 As more data is generated online, it seems
likely that dating algorithms might become even smarter in the near future. Eventually, as Tuffley
(2021) imagines, a future AI could create a wholly virtual partner who behaves just like a real
person. 64

59
Retrieved from <https://www.forbes.com/sites/forbestechcouncil/2021/02/11/exploring-ai-revenue-generation-opportunities-in-dating-apps/?sh=2b42b42476f9>
60
As Happn’s CEO Didier Rappaport explains, “[we] do not believe that if you like the same movie, you will fall in love. [Our algorithm is] more based on data we have” (Ghosh 2017). Retrieved from <https://www.businessinsider.com.au/happn-is-getting-paid-subscriptions-and-will-use-ai-to-recommend-matches-2017-4?r=US&IR=T>
61
Retrieved from <https://syncedreview.com/2019/06/15/the-online-dating-industry-loves-artificial-intelligence/>
62
Retrieved from <https://theconversation.com/love-in-the-time-of-algorithms-would-you-let-artificial-intelligence-choose-your-partner-152817>
63
Retrieved from <https://www.forbes.com/sites/forbestechcouncil/2021/02/11/exploring-ai-revenue-generation-opportunities-in-dating-apps/?sh=2b42b42476f9>
64
Retrieved from <https://theconversation.com/love-in-the-time-of-algorithms-would-you-let-artificial-intelligence-choose-your-partner-152817>
As dating applications become more efficient in matchmaking, they can create new
romantic experiences. Researchers Josue Ortega and Philipp Hergovich found that, traditionally,
people may not date close friends, but they do tend to meet those to whom they are loosely connected, such as a friend’s friend or a friend of a parent’s friend (Ortega & Hergovich 2017, 1). In this sense, traditional dating often depends on at least some sort of weak tie enabling
‘bridging’ with people from “other clustered groups, allowing us to connect to the global
community” (MIT Technology Review 2017).65 Nevertheless, dating platforms operate on a very
different logic: “People who meet online tend to be complete strangers”, and “when people meet
in this way, it sets up social links that were previously nonexistent” (ibid.). In this light, these
dating platforms increase connections between people, providing a larger dating pool where people
have more choices in finding potential partners. This large dating pool is valuable, especially for
those who are part of a minority in our society, such as the LGBTQ+ community, immigrants,
disabled people, and so on.
I also do not deny that these dating platforms can make our dating experience more efficient and more enjoyable. In the past, dating required plenty of time and energy. People had to know how to dress well, how to find a restaurant, how to plan romantic activities, and so on. Some people may also feel very nervous about meeting potential partners, especially on first dates. The various dating applications can reduce those complicated dating procedures to far simpler actions. Tinder’s swiping algorithm requires users to choose potential dates only by swiping on profile photos. Some ‘smart’ dating platforms do not even require users to swipe through headshots, as they can train their AI to determine a user’s ideal date and recommend matches automatically. As mentioned above, AI love coaches can also guide users and give advice on how to manage love relations after matching, and ‘smart’ algorithms can help users maintain good intimate relationships with partners (Li 2019).66 All these examples illustrate that dating platforms can make our dating lives less strenuous, less difficult, and more fun.67

65
Retrieved from <https://www.technologyreview.com/2017/10/10/148701/first-evidence-that-online-dating-is-changing-the-nature-of-society/>
66
See <https://syncedreview.com/2019/06/15/the-online-dating-industry-loves-artificial-intelligence/>
67
It seems that these benefits apply only insofar as one is happy to adapt to, and stay within the bounds of, the normalizing standards these platforms impose.

A serious concern raised in this study is that insidious consequences may follow when we delegate our romantic love relations to an algorithm. Dating apps are becoming the world’s most controlling matchmakers, deciding the love lives of a large number of young couples. The negative influence of dating apps can be amplified to “change the nature of society” (MIT Technology Review 2017). In proceeding further, I will not focus on some of the serious problems already raised by scholars, such as issues of privacy and discrimination (Lutz & Ranzini 2017; Farnden et al. 2015; Green 2021). There have also been disclosures about application algorithms that are biased against particular genders, races, and groups (MacLeod & McArthur 2019; Stacey & Forbes 2022). I want to focus on a different problem here.
In this chapter, I will adopt Habermas’s colonization thesis to critically reflect on the ethical
challenges of letting algorithms shape and steer our love lives. It would seem that Habermas
himself never directly analyzed the issue of romantic love, but we can still use his colonization
thesis as a normative model to critically interrogate algorithm-enabled love relations.68 Again, as
argued in Chapter 1, I will not adopt Habermas’s original version of the colonization thesis, due
to its historical and romanticizing nature. Instead, I will seek to understand the lifeworld
normatively, seeing it as a way to ensure open and respectful interactions and explorations. Love
relations, a crucial part of the lifeworld, is an interactive process which involves “a mutual
revelation of private concerns and sharing of cherished emotions” (Anderson 1990, 185). When
mediated by algorithms, the interactive process of love can be structured by algorithmic discipline,
which will disallow open-minded self-expression and intimate interactions.
This chapter will not establish an over-arching theoretical framework concerning the
algorithmic colonization of love. Instead, it focuses only on describing three main types of insidious
consequence possibly caused by the algorithmic colonization of love, in the domains of culture,
society, and personal development.

3.2 The Objectification of Algorithmic Love

68
Sergio Costa notes that “Habermas rarely refers to the issue [of love]” (2005, 4). Westphal also poses the question,
“How important is it that it never occurs to Habermas to speak of love?” (Westphal 1998, 14).

Dating application algorithms can make romantic interactions more efficient, but at the same time they can reduce such interactions not only to commercial products, but also to a set of codes and mathematical formulae – that is, an algorithm. When the intimate relations of human beings are commodified and algorithmically colonized, dating partners may regard each other only as ‘things’, rather than as human beings with whom they can meaningfully interact. Moreover, due to its automation and decontextualization, the mutually interactive part of love may be reduced to one-directional judgment and automatic enforcement, which leads to a culture of objectification in the algorithmic society.
Commodification is not a new issue. Many scholars have diagnosed the tension between
romantic relations and utilitarian-economic logic. In Eros and Civilization (1974), Herbert
Marcuse argues that romantic fantasies have been commercialized with the growth in mass
consumption of romantic experience. As a result, the commercialization of love produces ‘pseudo-
needs’ and eliminates possibilities for human emancipation. Zygmunt Bauman formulates a
particular critique of the commodified form of love. As he states in Liquid Love (2007), “just as
on the commodity markets, partners are entitled to treat each other as they treat the objects of
consumption… partners are cast in the status of consumer objects” (2007, 21). He goes on to state:

Getting sex is now ‘like ordering a pizza… now you can just go online and order genitalia.’
Flirting or making passes are no longer needed, there is no need to work hard for a partner’s
approval, no need to lean over backwards in order to deserve and earn a partner’s consent,
to ingratiate oneself in her or his eyes, or to wait a long time, perhaps infinitely, for all
those efforts to bring fruit (Bauman 2010, 22).

As such, the commodification of love is problematic because it can result in an alienation of love relations. When pleasures and sexual desires are all mediated through the purchase and use of a commodity, people treat potential partners only as commercial products rather than as someone with whom they need to communicate. Such commodification results in an “impairment of interhuman bonds” and especially “the pulverization of love relationships” (Bauman & Mazzeo 2012, 117).
Admittedly, the critiques of commodification by Marcuse and Bauman are contested, because they assume that a true or genuine love exists that is not influenced by any instrumental and commercial rules. That is why Marcuse argues against ‘pseudo-needs’, and why Bauman
argues for a non-commodified love. But as Eva Illouz shows in Consuming the Romantic Utopia
(1997), at times commodification can be beneficial for intimate interactions. For example, if a girl
likes flowers, her partner can buy flowers for her – which is often seen as an example of
commercialization, despite helping the couple to develop their romantic relationship. I will
therefore not subscribe entirely to Marcuse’s and Bauman’s descriptions of commercialized love relations. Instead, I take only the point that instrumental rules have the potential to impede intimate interactions.
Besides commodification, I will show that online dating algorithms also have the potential to hinder individuals’ romantic interactions and to lead to a new type of loss of meaning: an algorithmically colonized dating culture. In this dating culture, users’ romantic experience is algorithmically transformed into one that is less interactive and less participatory.
As Tinder’s founders have claimed, their application was intentionally designed to offer a fluid experience: using Tinder is like playing a ‘game’, one which not only takes “the stress out of dating”, but also keeps people from investing much time and emotion in their dating experiences (Stampler 2014).69 That is why Tinder profiles are designed to be “similar to a deck of playing
cards, and love, sex and intimacy are the stakes of the game” (Hobbs et al. 2017, 272). Let’s take
a closer look at Tinder’s swipe algorithm. It is designed such that users can only swipe either left
or right, thus expressing a wish to date the user in question or not. Swiping right enables a
possibility for the date, while swiping left completely rejects the algorithmic match. As David and Cambre (2016, 4) argue, such binary logic in Tinder’s swiping algorithm restrains the meaningful
expression of intentions between potential daters. The quick swiping action makes users forget
about the real human beings behind the photographs. As Haywood argues, in swiping user profiles,
we tend to treat each other as objects rather than as people with potential for interaction:

The process of swiping through multiple profiles can be seen as similar to browsing a
shopping catalogue, where the process of choosing is an affective experience... Women
become treated as a product, considered comparatively against other women (Haywood
2018, 145-146).

69
See <http://time.com/4837/tinder-meet-the-guys-who-turned-dating-into-an-addiction/>

The swipe algorithm represents the core of algorithmic love, namely, a kind of separateness and
reduction. In algorithmic terms, people are only numbers while romance is about mathematics.
One of OkCupid’s founders, Christian Rudder, describes the state of affairs like this: “take something mysterious – human attraction – and break it down into components that a computer can work with” (Rudder 2016).70 Tinder’s algorithm centers on ‘desirability’ scores (Elo scores), which are used to calculate how ‘attractive’ any particular user is. But what does ‘attractive’ mean? For Tinder, attractiveness is reduced to one’s appearance only. By design, a user’s decision
to ‘like’ or ‘dislike’ is made solely on the basis of a profile picture. Application users are thereby
nudged to pick potential partners only according to physical appearance. People with good looks
tend to receive more ‘likes’, and because they are seen as more ‘attractive’, they acquire
higher desirability scores. The algorithm therefore reduces complex intimate interaction to a
decontextualized numerical score.
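To make this reduction concrete, consider a minimal sketch of how an Elo-style scoring update works in general. Tinder has never published its scoring code, so everything below – the K-factor, the update rule, the example numbers – is an illustrative assumption drawn from the standard Elo rating scheme, not the platform’s actual implementation:

K_FACTOR = 32  # assumed sensitivity of a score to a single swipe

def expected_like(rating_a: float, rating_b: float) -> float:
    """Predicted chance that A is 'liked' when judged by someone rated B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_on_swipe(rated: float, rater: float, liked: bool) -> float:
    """Adjust the rated user's score after a single swipe by the rater."""
    outcome = 1.0 if liked else 0.0
    return rated + K_FACTOR * (outcome - expected_like(rated, rater))

# A 'like' from a highly rated rater moves the score more than a 'like'
# from a lowly rated one:
print(round(update_on_swipe(1500, 1800, liked=True), 1))  # 1527.2
print(round(update_on_swipe(1500, 1200, liked=True), 1))  # 1504.8

Whatever parameters a real platform uses, the structure is the point: each swipe is collapsed into a binary outcome, and whole persons are ranked along a single numerical axis.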
Love is fundamentally a complicated, interactive process. Dating algorithms may capture
some aspects of human attraction in romantic relations, but it appears that they may never grasp
the full complexity of that romantic experience. However, for dating algorithms, the complexity of
romantic feeling is of little concern, unless it somehow makes romantic interaction more efficient
and more easily handled. Dating platforms do not actually have an interest in lasting
relationships between people, since “every successful romantic couple means a loss of two
customers” (Joe 2016).71 Different dating platforms use their respective algorithms to capture one
or several aspects of love (e.g., attraction due to appearance), and place their products in the market
with the aim of satisfying investors and making profits. In this sense, the core of all dating
algorithms is business. For Tinder, its swipe algorithm reduces dynamic human attraction to
physical appearance alone, which makes using the application as easy as possible, attracting more
users.
Some may argue that Tinder’s swipe algorithm merely reflects a long-existing culture in in-
person dating. It is true that physical appearance is an important facet in offline dating. But as
Illouz (2007, 104) points out, the ‘first impression’ in real life is a ‘holistic’ experience:

70 See “Inside OkCupid: The math of online dating” <https://www.youtube.com/watch?v=m9PiPlRuy6E>
71 See <https://medium.com/@joemmackenzie/love-is-an-algorithm-cf764b1eeae4>

The Internet provides a kind of knowledge which, because it is disembedded and
disconnected from a contextual and practical knowledge of the other person, cannot be
used to make sense of the person as a whole… Face-to-face encounters cannot be reducible
to a set of attributes; rather, they are ‘holistic,’ that is, in them we attend to the
interconnectedness between a wide variety of attributes, rather than to each discrete
attribute.

When people swipe photos on Tinder, their experience is often static and discrete: they only
“experience their body for how it looks rather than how it feels” (Breslow et al. 2020, 26). The
swipe algorithm tends to produce a culture that only spotlights users’ fixed profile photos, rather
than their flowing experience of emotion. In real-life dating, physical appearance is not a fixed and
static attribute, but rather involves in a flowing experience contextualized by the dynamic process
of dating, in which the relevance of one’s beauty is always changing as intimate interaction moves
on. When meeting someone at a bar, for instance, people may not be attracted by external looks at
first sight. Rather, through interaction, they may gradually come to recognize, and be attracted by,
more internal qualities, such as humor or a sense of responsibility. Just as Illouz (2007, 105) shows:

This is why we often fall in love with people who are very far from our prior notions, or
why, when in love, we are willing to disregard an element which does not match our
expectations, precisely because we attend to the whole, rather than to its parts.

In contrast, with a feature such as Tinder’s swipe algorithm, physical attractiveness
becomes the one and only factor on which users base their decision. The swipe algorithm
does not allow its users the dynamic and holistic experience of a real-world love relation.
In this light, when contextualized love is reduced to some separate and fixed attributes,
mutual interaction is crowded out of romantic relations. Thus, the love relation tends to become a
sort of objectified judgment and selection process. As has been shown, dating algorithms often
reduce the complex love experience to a set of fixed attributes for algorithms to quantify. These
attributes can be physical appearance, personality, genes, blood type, credit scores, or even
astrological qualities. Based on one or several of these attributes, various dating algorithms match

their users with each other in different ways. These attributes, originally associated with mutual
human interactions, are becoming the deciding criteria by which users make unilateral
judgments and selections. This process of judgment and selection is not interactive, and has no need
for dialogue or negotiation. For Tinder’s swipe algorithm, attractiveness is almost the only deal-
breaker: without beauty, there is no further communication. There is no negotiation or dialogue
involved; nor is there any chance for ‘plain-looking’ users to fully express themselves. In this sense,
algorithms de-contextualize the interactive process of dating, restraining individuals’ thinking and
exploration of other more internal characteristics of possible partners.
In the case of credit score dating apps, romantic ‘attraction’ is reduced to an even more
abstract and detached attribute – the credit score. Here, the objectification of love driven by
algorithms is even more obvious. In the case of CreditScoreDating, for instance, the dating
platform provides a list of advice in which credit scores and dating relations are automatically
matched. This shows how love decisions are mechanically translated into metric credit scores. A
good score means people are more attractive, while a low score implies that people are less
appealing: “anything below 600 is RUN”.72 So, if someone scores 550, the likelihood of getting a
date is reduced, since potential dates would have already ‘run away’ when assessing credit score.
In this example, people tend to focus only on the numerical scores, and will not think about the context
behind them. Recall the case of LaShawn from the opening of this dissertation: a potential
partner writes her off due to her bad credit score, without even trying to find out the reasons
behind it.
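Written out as code, such a rule is strikingly simple. The following sketch is hypothetical – CreditScoreDating publishes dating advice, not an algorithm – but it shows how a bare threshold makes context structurally invisible:

RUN_THRESHOLD = 600  # taken from the site's 'anything below 600 is RUN' advice

def assess(candidate_score: int) -> str:
    # The reasons behind the score (medical debt, a reporting error,
    # a divorce, ...) never enter the decision.
    return "consider" if candidate_score >= RUN_THRESHOLD else "run"

print(assess(550))  # 'run': LaShawn-style cases are rejected unseen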
Admittedly, there is a difference between being offered various possible partners, and
actually choosing one or falling in love. It might be argued that what has changed with algorithmic
dating is only the way we ‘meet’ people (on the basis of contingency, in bars, at parties, and so on,
vs on the basis of AI). But resultant emotions can be affected by the way we choose partners,
because dating application algorithms construct a new experience of love, which forces us to
change our perspectives on ourselves and our intimate interactions with others. Empirical research
has shown that swipe algorithms can create a ‘mindset of rejection’ for people, especially women,
who will tend to reject potential online partners quickly, and will not spend time on exploring

72 See <https://creditscoredating.com/about>

further details in the profile (Pronk & Denissen 2020, 388).73 In this sense, the procedure of being
presented with choices has a direct influence on the emotion of love itself.
Some AI-enabled matchmakers (e.g., Lara and LoveFlutter) directly ‘coach’ users on how
to date properly and successfully, by surveilling user engagement with the application. This not
only makes a decision about who is to be dated, but directly normalizes a user’s perception of love
according to commercial and technical logic. For example, AI dating coaches often suggest that
users meet someone at a bar or a restaurant, which tends to prescribe and reinforce a commodified
dating culture in our society, while restraining individual thinking and exploration of other creative
possibilities (like hiking or jogging, etc.).

3.3 Exploitation and Manipulation of Love Relations

At first glance, ‘smart’ algorithms are like ‘friends’ who help users find potential dates and manage
every detail of their love relations. However, behind the scenes, those algorithms are often
developed and deployed by commercial forces that are not always benevolent. Dating platforms
often translate a user’s romantic experiences into data, and then exploit that data for profit. This
section will investigate how users could be emotionally manipulated and exploited by companies
seeking a profit, and how such exploitation and manipulation can have the potential to negatively
influence intimate interactions. I argue that personal data on romantic relations is necessary for
users to develop open interactions, and that its exploitation and manipulation undermine the
possibility of open exploration.
In early 2021, a lawsuit was filed against Clarifai, an AI company with ties to OkCupid’s founders.
In the lawsuit, it was claimed that Clarifai had gained access to OkCupid’s database “to train its
algorithms used for analyzing images and videos, including for purposes of facial recognition”
(Wu 2021).74 The problem is that OkCupid’s users were not notified about this access, let alone
asked for their consent. Another privacy scandal involved Grindr, an LGBTQ+ dating application,

73 A ‘rejection mind-set’ refers to the phenomenon whereby “the continued access to virtually unlimited potential partners makes people more pessimistic and rejecting” (Pronk & Denissen 2020, 388).
74 Retrieved from <https://www.airoboticslaw.com/blog/machine-learning-data-do-you-really-have-rights-to-use-it>

from which a large quantity of sensitive data, including details on users’ HIV statuses and their
most recent testing dates, was shared with advertisers (Reberkenny 2022).75
These two cases are typical of concerns about privacy violation. Online dating algorithms
are highly dependent on a large amount of data, including sensitive personal data such as sexual
orientation, religion, HIV status, and swipe history. This heavy reliance on personal data is
worrying when one considers that some data may be obtained by other companies illegally.
However, privacy concerns are just the tip of the iceberg. The more far-reaching issue is what I
explained in the Introduction as ‘surveillance capitalism’, whereby corporations and
governments can unilaterally appropriate and exploit people’s everyday experience ‘as free raw
material’ (Zuboff 2019, 8).
Given surveillance capitalism, it is not surprising to learn that OkCupid and Grindr
exploited personal user data to build and exchange their predictive products with their business
customers (namely, advertisers). This is just a consequence of the business model that underlies
the online dating industry. Dating apps keep track of every detail of a user’s romantic life on their
platforms. With the help of algorithms, these experiences are analyzed and translated into
behavioral data for the exploitation of the ‘surplus’: some of this data feeds the application’s own
algorithmic profiling and recommendations to improve its services, while the rest is used to make
predictive products that are then sold to advertisers.
This underlying exploitation of user data can impede people’s intimate interactions. Dating
algorithms can covertly translate romantic experience into behavioral data for exploitation. AI
love coaches, for instance, keep track of every detail of a user’s romantic life, and the extracted
data is often highly sensitive and personal because it reflects the user’s deepest desires, sexual
orientation, and more. All of this information deals with the user’s romantic aims or ideal partners,
which is part of their romantic relations. In the eyes of dating platforms, this data is treated the
same as economic products, by default. For Zuboff (2019), however, personal data is not a
commodity that can be traded arbitrarily, since such data is essentially constituted by human
experience. Roessler explains this point more explicitly, stating that this data was
“supposed to belong to and stay in the sphere of social relations”, and ought not be commercialized,
since it is necessary for users to develop personhood and social relations with others (2015, 149).

75 Retrieved from <https://www.metroweekly.com/2022/05/even-after-grindr-changed-its-data-policy-users-are-still-being-outed/>

If user data is arbitrarily extracted, she worries that people and their social relations with others
can be “manipulated into a certain commercialized” manner:

I am being forced to adopt a view on myself and on my social relation that is motivated not
by friendship but by the market, and therefore, not self-determined, or determined through
the norms of the social context (Roessler 2015, 149).

In light of this, data on human romantic relations should be seen as a constitutive part of love
relations, and thus an important element of our lifeworld. The arbitrary exploitation of that data is
infringing the lifeworld and impeding the possibility for our open interactions and exploration with
others.
Some may argue that online dating companies are not exploiting users because of an equal
exchange structure. That is to say, users sacrifice privacy to find their match online, and platforms
profit from providing efficient recommendations. However, upon closer inspection, the assumed
equal exchange structure does not exist at all, since dating sites structurally reap more benefits
than users. First, dating algorithms often covertly extract users’ data on romantic experiences
without their consent. Dating applications often claim that their services are purely social,
downplaying their business features. Consequently, most users “do not know they are part of a
commercial transaction” (Lanzing 2019, 143). In Zuboff’s terms, human life experiences are thus
‘unilaterally’ acquired by private companies (2019).
Second, dating algorithms owned by surveillance capitalists often exploit excess data as
‘behavioral surplus’. Zuboff reminds us that surveillance capitalists can ‘unilaterally’ use the
sensitive and personal data “as free raw material” for exploitation (2019, 8). Some of this data
feeds the platform’s own algorithms to improve the services offered to the user, but the remainder
of the gathered data is seen as ‘behavioral surplus’ that can be sold to advertisers for personalized
advertising – as in the case for the previously mentioned OkCupid and Grindr, who exploited users’
data for advertising. The underlying exploitation of romantic life in surveillance capitalism reveals
an asymmetrical power structure between users and surveillance capitalists. As Zuboff states,
surveillance capitalism

represents an unprecedented concentration of knowledge and the power that accrues to
such knowledge. They know everything about us, but we know little about them. They
predict our futures, but for the sake of others’ gain… These knowledge asymmetries
introduce wholly new axes of social inequality and injustice (Zuboff & Laidler 2019).

In the online dating industry, the same asymmetry holds between users and dating
platforms. This unequal power structure, in which the platforms hold the position of power,
ensures the continual exploitation of the user’s romantic life.
Besides exploitation, dating algorithms can also manipulate user behavior to the benefit of
surveillance capitalists. Such manipulation neither rationally persuades nor coerces
us (Susser et al. 2019). Rather, it can generally be seen as a form of hidden influence on our
decision-making process through the modulation of our feelings, emotions and moods.
Such manipulation is subtle, since Tinder does provide plenty of choices and does
help users find a match. Users may be unaware of its manipulative potential because it is
interwoven with its potential to empower them to make ‘better’ dating decisions. Despite the
benefits, however, Tinder still has the potential to manipulate individuals’ behavior. This paradox
is similar to the manipulative practices of some health apps identified by Marijn Sax (2021). He
argues that these for-profit health apps are often touted as ‘tools of empowerment’ (345), and can
be said to empower their users by helping them discipline themselves more efficiently toward a
healthy life as they themselves see fit. However, these apps can also potentially manipulate
users’ behavior via “targeting and exploitation of people’s desire for health” (ibid., 346). Sax
demonstrates that these apps are presented as tools to “optimize the health of the users, but in
reality they aim to optimize user engagement and, in effect, conversion” (ibid., 345, emphasis in
original). That means that those commercial health apps are designed in a way that they exploit
their users’ “natural desire for health” to make them “spend more time and possibly more money
on health apps” than they may really want to (ibid., 350).
Like these health apps, Tinder, as a for-profit platform, can also covertly drive users toward increasing
the platform’s revenue. It has been found that an online dating algorithm can track and manipulate
user behavior, and encourage the purchase of premium services:

[The] key is to keep users sufficiently satisfied so they do not abandon the service too
quickly, but not too satisfied so they would be inclined to convert to paying services. This
means that the algorithm needs to dynamically alternate between encouraging users and
restricting them (Courtois & Timmermans 2018, 7).

This example reveals the evils of manipulation: Tinder’s algorithm exploits user vulnerability so
as to “steer his or her decision-making process towards the manipulator’s ends” (Susser et al. 2019,
3).
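It may help to spell out what such ‘dynamic alternation’ could look like in code. The sketch below is a toy model of the logic Courtois and Timmermans describe; the satisfaction measure, the thresholds and the actions are all invented for illustration and are not drawn from any platform:

def adjust_match_supply(satisfaction: float, is_paying: bool) -> str:
    """Return a (hypothetical) policy action for a user's next session."""
    if is_paying:
        return "serve best matches"      # paying users get full service
    if satisfaction < 0.3:
        return "boost match quality"     # at risk of abandoning the app
    if satisfaction > 0.7:
        return "throttle match quality"  # too content to ever convert
    return "hold steady"                 # the profitable zone of mild frustration

print(adjust_match_supply(0.8, is_paying=False))  # 'throttle match quality'

Note that the user’s own romantic goal appears nowhere in this function; only the conversion objective does.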
This manipulation can be more easily identified with AI dating coaches. To reap the
greatest benefit, AI dating coaches do not just track every detail of people’s romantic lives, but
also try to manipulate behavior towards the most profitable results for companies. Emotions play
a crucial part in the coaching process, since emotions are related to one’s motivation to act and
drive toward a goal. An advanced AI love coach can apply facial recognition to efficiently read
the emotions and feelings of users “through text, voice tone, facial expressions, and gestures”
(Alkhaldi 2022).76 By deciphering emotions, AI dating coaches can covertly drive users towards
specific behaviors that are most profitable for companies.
As such, AI coaches have the potential to allow companies to capture and analyze the desire
for intimacy, and manipulate a user’s behavior so that the consumption of certain products is
encouraged. Let’s consider another example. By reading a man’s chat messages exchanged with a
woman, AI learns that she likes butterflies. The AI then guides the man to an online jewelry shop
to buy platinum earrings in the shape of a butterfly. In this example, the woman’s love for
butterflies is only understood according to a commercial logic, as a way for companies or shops to
sell products. It would not suggest that the man could, for example, paint a butterfly for the woman,
or sing a song about butterflies, and so on. Again, I do not mean to argue that commodification of
love is wrong in itself. What is concerning about this commercialization and commodification is
that it is achieved through manipulation, and can potentially restrain people’s thinking of other
possibilities. Those alternative possibilities will be discussed in more detail later on.
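The structure of the butterfly example can be made explicit in a few lines. The sketch below is purely illustrative – no real coach’s code is reproduced, and the catalogue is invented – but it shows how, whatever interest is detected, the output space contains only purchases:

# All names and products here are invented for illustration.
PRODUCT_CATALOG = {
    "butterflies": "platinum butterfly earrings",
    "jazz": "concert tickets",
}

def coach_suggestion(chat_log: list[str]) -> str:
    for message in chat_log:
        for interest, product in PRODUCT_CATALOG.items():
            if interest in message.lower():
                # Non-commercial responses (painting a butterfly,
                # writing a song) are simply absent from the space
                # of possible suggestions.
                return f"Buy: {product}"
    return "Buy: flowers"  # the commercial default

print(coach_suggestion(["She said she loves butterflies!"]))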

76 Retrieved from <https://www.iotforall.com/emotional-ai-are-algorithms-smart-enough-to-decipher-human-emotions>

3.4 Prescribed Identity in a Filter Bubble

In this section, I will argue that dating apps can problematically encourage users to define
themselves and rank others according to the application’s norms, categories and metrics, thereby
hindering self-expression and intimate interaction. Dating platforms not only amplify the existing
self-commodification process, but also create a new self-representation – an algorithmically
colonized identity. This new identity may not express one’s own self-determination. Instead, it is
the dating algorithms that drive users to more actively participate in a prescribed normalization
process.
The issue of self-commodification has been widely discussed in critical theory. Erich
Fromm (1948, 70) argues that the logic of consumption will make people regard themselves as
consumers and products, which will lead to a kind of self-reification: “A person is not concerned
with his life and happiness, but with becoming saleable.” People are thus alienated from their own
feelings by experiencing themselves only as commodities. Bauman is also concerned about how
self and identity are threatened by commercialization. For Bauman, in the context of face-to-
face interaction, individuals have an innate moral impulse to not only understand and empathize
with others, but also to take responsibility for others (Bauman 2003, 92). In the context of
consumerism, people would not be able to exercise their moral agency, since their decision-making
is influenced by the logic of commodification and their innate ethical impulse is prevented from
being freely exercised.
The problem with which both Fromm and Bauman are concerned is that people are being
alienated from themselves. People are self-commodified and their romantic interactions with
others are steered by consumerism. They suggest that a human being’s freedom for self-
determination has been hindered by the commercialization of love. In Fromm’s words:

People are motivated by mass suggestion, their aim is producing more and consuming more,
as purposes in themselves. All activities are subordinated to economic goals, means have
become ends; man is an automaton – well fed, well clad. (1995, 243)

The phenomenon of self-commodification can easily be found in online dating apps. For instance,
in a research interview conducted by Hobbs et al., Tim was an experienced Tinder user, and

bragged about how he helped a friend ‘sell’ himself (2017, 280). In Tim’s eyes, because his profile
is a product, he needs sales techniques to let other Tinder users ‘buy’ it. Like Tim, Alice is a Tinder
user who shared her experience about how to present oneself successfully on Tinder: “you try and
pick the best photos of you… we’ve all got this idea of ourselves and it is marketing” (2017, 281).
Both Tim and Alice apply the business logic of ‘buying’ and ‘marketing’ to commodify their own
profiles, which represents a type of self-commodification.
Besides self-commodification, dating apps actively shape user identity through the
technical norms prescribed by dating algorithms. In the previous section, I discussed why swiping
algorithms are intentionally designed in a game-like way: “Swipe left, swipe right, ‘It’s a Match!’
After matching the app prompts users to choose between sending the match a message, or ‘keep
playing.’ Like a game” (Seidel 2015).77 78 By gamification, the dating experience is reduced to a
must-win game that is competitive, playful, and addictive. To win the dating game in finding a
suitable match, users have to adapt themselves to a system of norms that is prescribed by dating
algorithms. In algorithmically constructed dating, it is the algorithms that decide which profiles
are more visible to other daters. So, in order to stand out, users have to know how to make the
most of the algorithms. They have to learn how an algorithm works and how to let the
algorithms help them play the dating game efficiently. Knowledge about the algorithms is vital.
But algorithms are often hidden and invisible, and by incorporating more machine learning, these
algorithms become rather obscure even for their own designers (Pasquale 2015, 59). As such, the
algorithmic imaginary is crucial here.
As mentioned in Chapter 2, the term ‘algorithmic imaginary’ is used by Bucher (2017, 30)
to show how the imagination of a social media user can change their behavior according to their
perceived algorithmic functions. For online dating applications, users can also exercise the
algorithmic imaginary to win their dating games in order to, for instance, get the most responses

77 Retrieved from <https://medium.com/@jane_seidel/the-game-of-tinder-3c3ad575623f>
78 Tinder has also launched some actual games, such as Swipe Night, which basically turns the dating experience into
an adventure game. Tinder’s application offered the free game to users who answered some moral dilemmas about
how they would prepare for the end of the world. After that, users would receive “a list of singles who made similar
choices”: “Daters who swipe right on each other will be able to match and have a conversation. A left swipe means
that person will have to find someone else to survive the apocalypse with” (Newcomb 2019). Retrieved from
<https://fortune.com/2019/09/20/tinder-swipe-night-dating-game/>

or matches. The imaginary can be acquired from one’s own experience with the algorithms, or the
sharing of strategies used by people who have gotten many matches online. With the algorithmic
imaginary, users know when to start swiping, how to select the best photos, and even what types of
pets they should pose with. In a nutshell, the algorithmic imaginary provides users the de facto
norms to follow, informing them how to make full use of the algorithms to win their dating games.
This imaginary makes the normalization process on dating apps more active and participatory.
Nevertheless, these norms are prescribed by dating algorithms, which follow an imperative
of systemic logic that can sometimes outweigh individuals’ self-identification. By design and
function, algorithms often classify users into different categories. A category can make users more
easily able to identify themselves, so that they can express themselves more efficiently. But
categories, which have already assumed an existing set of judgments, can constrain people’s
thinking and encourage them to more easily accept prescribed classifications set up by designers
(Crawford 2021, 125). As a result, algorithms can sort users into certain categories that only truly
exist for the benefit of the dating applications themselves, to analyze data and match people more
efficiently – rather than to let users freely express their identities. For instance, many studies have
criticized the binary gender category integral to Tinder, Bumble, and other dating applications.
This binary classification is partly due to the efficiency and usefulness that online dating
applications focus on: “gender within the apps is not about identity as such but rather is a way of
sorting users into groups that make matches more likely” (MacLeod & McArthur 2019, 831).
Consequently, classified only as male or female, users from gender minorities cannot freely
identify themselves, and often feel distressed when presenting themselves on dating platforms.
Some may hold that such classification and identification is common for dating culture in
general, where discursive norms constrain users’ abilities to define, present, and encounter others.
Real-life dating can also be seen as a ‘game’, where people try their best to increase their chances
of success on dates, and are also encouraged to learn and follow some social norms on what to
wear, how to flirt, and how to apply makeup, etc. So, how do dating applications shape
relationships and self-definition in a different way from dating more generally? As we shall see,
dating applications follow a very different logic.
Dating application algorithms embody norms that are not formed through interaction, but are rather
prescribed by the designers in an automatic manner. In real-life dating, the rules about how to
the rules about how to flirt have to be learned and practiced through intimate interactions. In those

interactions, there are failures and uncertainties, which are natural facets of an open exploration.
But for ‘smart’ dating algorithms, the norms of how to flirt can be very distinct. In the context of
AI dating coaches, for instance, AI helps one party send a flirty text to a potential partner, even if
that party does not even know the meaning of the text. The text in this sense is not derived from
his or her own reasoning, passion or desire, but is instead no more than an instruction or
prescription given by an automated AI coach. It would appear that the AI knows more about what
constitutes a good, personalized flirty text, since it can keep tracking every detail of the interaction
shared by both users in a chat – not to mention learning about personality, mood, needs, and
weaknesses. So, without any mutual interaction, the AI can write a flirty text that can either be
sent out by the user, or it can just automatically send the message to the potential partner at a
suitable time. In this light, dating norms that are perceived to be contextual and interactive have
been transformed into an automatic process.
Furthermore, the norms prescribed by dating algorithms tend to trap people in a ‘filter
bubble’. Dating application algorithms learn personal preferences and silently recommend
possible partners based only on similar characteristics, and without the users knowing it (Nader
2020, 237). The dating application OkCupid, for instance, is built on the basic idea that the more two
users correspond in answering personality questions, the higher the possibility that a romantic
match would be successful. As such, a good match actually implies that two people are more likely
to give similar answers to a questionnaire, which shows that they have similar personality traits.
Tinder’s swipe algorithm also recommends and matches people who share similar ELO
(‘desirability’) scores, which means they are similarly attractive in physical appearance.
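The mechanics behind such similarity-based matching are publicly documented in broad strokes. In the talk cited in footnote 70, Rudder explains that each user’s answers are scored against the other’s importance-weighted preferences, and the two satisfaction scores are then combined. The sketch below follows that description; the weight values and data shapes are simplified assumptions, not OkCupid’s production code:

from math import sqrt

WEIGHTS = {"irrelevant": 0, "a little": 1, "somewhat": 10,
           "very": 50, "mandatory": 250}

def satisfaction(my_prefs: dict, their_answers: dict) -> float:
    """Fraction of importance-weighted points the other person earns."""
    earned = possible = 0
    for question, (acceptable, importance) in my_prefs.items():
        weight = WEIGHTS[importance]
        possible += weight
        if their_answers.get(question) in acceptable:
            earned += weight
    return earned / possible if possible else 0.0

def match_percentage(prefs_a, answers_a, prefs_b, answers_b) -> float:
    """Geometric mean of the two satisfaction scores, as a percentage."""
    s_a = satisfaction(prefs_a, answers_b)  # how well B satisfies A
    s_b = satisfaction(prefs_b, answers_a)  # how well A satisfies B
    return sqrt(s_a * s_b) * 100

Because the geometric mean is highest when both satisfaction scores are high, the top-ranked matches are precisely the people whose answers most resemble one’s own, which is the structural root of the filter bubble discussed next.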
This particular sort of matchmaking creates what Parisi and Comunello (2020, 66) call a
‘relational filter bubble’, where users only date someone who is ‘similar’ to them. In this way,
dating algorithms construct a new but compartmentalized environment, where people can only see
through their personalized world of knowledge, and experience intimate interaction only with
those who share similar traits. Unconsciously, users are gradually trapped in a conditioned space
where they passively follow their pre-existing preferences and are prevented from actively and
fully expressing their identities and developing free and respectful interactions. This relational
filter bubble is even more problematic in an age where it has become increasingly important for
people to express themselves differently on digital platforms. As Jim Kozubek (2014) points out,
a user is not merely a flat profile and is not simply ‘made of steady data points’; instead, she can

have several different profiles and identities on social media, and her self-presentation can be
contradictory and inconsistent.79

3.5 Rethinking Autonomy in the Lifeworld

Until now, I have examined how dating algorithms run the risk of colonizing love relations from
cultural, social, and personal perspectives. It has been shown that dating algorithms have the
potential to hinder and distort intimate interactions with others in a particular way, and restrict the
open exploration of possibilities. To regain open interactions and exploration, we need to revisit
the active and transformative value of the lifeworld. As was argued in Chapter 1, we do not need
to share Habermas’s historical understanding of the lifeworld. Instead, the concept of the lifeworld
is helpful because it constitutes some dialogical and transformative elements that enable us to think
about how it is possible to resist and even reverse the algorithmic colonization of love. In this
section, I will argue that the notion of the Habermasian lifeworld constitutes autonomy as an
important value that is useful in better understanding the problem of algorithmic colonization. It
may be pointed out that it is somewhat counter-intuitive to use an individualistic notion of
autonomy to analyze the essentially interactive mode of love. However, such a notion of autonomy
is not actually individualistic, but rather relational and interactive. To better understand this point,
as well as what is wrong with algorithmic colonization, we have to take a closer look at the concept
of autonomy.

3.5.1 Relational Autonomy

The concept of autonomy is complex, constituting a series of tensions (Roessler 2021, 1). It is hard
to give a universalized definition partly because the notion of autonomy is related to individual
freedom, which changes historically.80 Since this study is not going to examine autonomy closely,

79 Retrieved from <https://www.theatlantic.com/technology/archive/2014/09/love-is-not-algorithmic/380688/>
80 As an example of such historical change, Axel Honneth gives a chronological sequence: Hobbes’s negative freedom, Kant’s reflexive freedom, and Hegel’s social freedom (Honneth 2014). Even if autonomy is roughly seen in a way that is identified with freedom, they are not the same thing. Modern notions of freedom, in a negative sense, generally refer to people’s freely choosing to live as they wish; however, the notion ‘autonomy’ is particularly used to describe one’s competence for self-determination, rather than merely providing available options (Roessler 2002, 144).
I will only follow a generally accepted idea about autonomy, so as to observe some of its main
characteristics. A typical concept of autonomy can be defined as follows:

Autonomy has come to be understood in this literature as competence in reflection and
decision making and (on some views) authenticity of values, desires, and so on that
constitute the person and motivate choice. (Christman 2004, 148)

This concept emphasizes a core feature of autonomy: a person’s choices are not arbitrary or
automatic, but intended and deliberate (e.g., endorsed by second-order desires). This reflective
manner is crucial for autonomy, as it ensures individuals’ choices are theirs, and that they are not
being manipulated or coerced by others. The deliberational aspect of autonomy will be addressed
more fully later. At present, I will examine the relational character of autonomy.
When speaking of autonomy, people may tend to regard it as a sort of isolation from the
social environment. Mark Bevir writes that autonomous subjects “would be able, at least in
principle, to have experiences, to reason, to adopt beliefs, and to act, outside of all social
contexts… [They] could found and rule themselves uninfluenced by others” (Bevir 1999, 67). This
is a traditional individualistic notion of autonomy which is, as Wendy Brown suggests, “self-reliant”
and “unrelated to the institution of the family” (Brown 1995, 135).
However, this individualistic and atomistic idea of autonomy is rather contested nowadays,
because that conception assumes an idealized selfhood that is not interfered with by others, which
ignores the fact that autonomy can only develop and function in a social context. First, autonomy
can only develop within social relations: selfhood and identity are intersubjective (Mackenzie &
Stoljar 2000, 4). Humans are fundamentally vulnerable; they need to live with others and are
dependent on them. It is only within social connections, especially close relationships,
that dependent children can grow up and develop their self-confidence and self-respect. Only in
this environment can they gradually “understand themselves as being autonomous persons”
(Roessler 2002, 148). Autonomy can therefore only be developed relationally, and it is impossible
for people to live in a self-isolated manner.

Second, autonomy depends on the existence of an enabling social context. The core
element of autonomy is self-determination, but this self-determination is not about people being
inaccessible to others. Rather, it is a self-determination over the boundary of how much a person
is accessed by others. The exercise of autonomy is always relational, in that what people think and
want, and how they act, is the result of an engagement in “intersubjective exchange with others”
(Roessler 2002, 145). This relationality can help people develop their sense of autonomy, but
repressive socialization can also harm a human’s autonomy by damaging their self-esteem and
self-confidence. So, being autonomous means that people themselves are able to decide on what
socialization they will be involved in, and how to develop their social relations with others
(Roessler 2002). This decision-based exercise of self-determination only makes sense in a social
context.
To sum up, autonomy is relational: “persons are socially embedded” and “agents’ identities
are formed [and shaped] within the context of social relationships” (Mackenzie & Stoljar 2000, 4).
Self-determination is not arbitrary or automatic, but a deliberative process which ensures
individuals’ choices are authentically theirs, not being manipulated by others.

3.5.2 A Dialogical Account of Autonomy

This section argues that Habermas’s notion of the lifeworld constitutively involves a particular
kind of relational autonomy – dialogical autonomy – as an important value, which can be useful
to analyze algorithmic love.
In capitalist societies, market power as a structural obstacle can impede individuals’
autonomy, as such obstacles can distort people’s self-determination, making them alienated from
themselves, and only able to conform to instrumental rules. For critical theorists, deliberateness
is the core feature of autonomy, which ensures that an individual’s choices are not being
manipulated by others. Habermas, in alignment with Adorno, Marcuse, Fromm and Bauman,
agrees that deliberation and reflexivity are crucial at this juncture.
However, Habermas holds a different view from the others regarding what kind of deliberateness
we need.

Unlike his Frankfurt predecessors, Habermas does not presume what Kymlicka calls
‘Marxist perfectionism’ (2002, 216). Perfectionism, generally speaking, is a theory advancing an
objective account of the good for human well-being (Hurka 1993; Dorsey 2010). Honneth notes
that Marxist perfectionism “dictates the pursuits in which people are to find their self-realization”.
He goes on:

Instead of leaving subjects free to decide how they wish to pursue their happiness under
conditions of autonomy, this perfectionism imposes from above the stipulation that it is
only if all members carry out meaningful, non-alienated labor that a society is free and just
(Honneth 2007, 359).

As for the notion of reification, Lukács, Adorno and Marcuse seem to presuppose the possibility
of a social reality not dominated by instrumental reason, together with the diagnosis that instrumental logic
necessarily distorts reality in our society. The only way to reveal the truth and maintain social
reality thus seems to be an overcoming of instrumental rationality; if social conditions are not
satisfied as defined, people are by definition in danger of losing their freedom.81
However, Habermas’s colonization thesis is subtly different. It is true that Habermas is also
deeply worried about the dominance of instrumental reason in modern society, but he does not
believe that the lifeworld has been already totally dominated by instrumental rules. As was argued
in Chapter 1, Habermas’s colonization thesis does not mean that the lifeworld should not be
influenced by the system, or that the systemic logic should not be extended into the lifeworld.
Rather, he points out that colonization only takes place when the systemic logic intrudes on the
lifeworld in an uncontrolled and illegitimate manner. Before that, there is always a tension between

81 This perfectionism can be linked to a perfectionist type of autonomy. As Marina Oshana explains with reference to slavery: “[being] a slave means that how he shall live is no longer up to him” (Oshana 1998, 87). Even if the slave is independently and authentically content with being a slave, he or she is still seen as non-autonomous, simply because the state of being a slave already implies his or her lack of autonomy. This perfectionism of autonomy means that self-determination is not sufficient; autonomy is only possible “when social conditions surrounding an individual (are) up to certain standards” (Christman 2004, 150). People are by definition non-autonomous if predefined substantial social conditions are not satisfied. But this concept of autonomy has been criticized as being too ideal. Following these strict requirements, most people are not autonomous, considering that structural hindrances and distortions structure all societies (Roessler 2021, 23-24).

the lifeworld and the system in the real world, where the communicative actions of the lifeworld
can resist society’s tendency towards instrumentalization. As such, the colonization of the
lifeworld only happens when communicative interactions are undermined, such that people are
no longer able to resist the tenacious expansion of systemic logic by thinking of other possibilities.
According to Joel Anderson, Habermas’s communication theory suggests an ‘ethical-
existential autonomy’ which is intersubjective and dialogical (2017; 2019). This form of autonomy
does not prescribe ‘what all rational agents must want’, but rather it emphasizes “what she just
finds herself caring about and what sort of person she herself wants to be” (2017, 97). It is often
difficult to be sure of what one really cares about, the process of which is also open for mistakes
and corrections. We may find that we were wrong about what we really want, not just because we
change our ideas over time, but that we correct ourselves in the interaction with others. People
have the capacity to engage in such “an open-minded give-and-take about what one really cares
about and finds important and tries to make sense of one’s personal commitments and values”
(2017, 97). People are autonomous when they are able to engage in such dialogue and
communication, which is “the room for progress in understanding what one really cares about”
(2019, 19).
In this light, Habermas’s dialogical notion of autonomy is more realistic and dynamic
than a perfectionist one. As characterized by Roessler, “autonomy has often to be acquired under
conditions that probably have to be called imperfect with respect to justice and equality” (Roessler
2002, 145). In our everyday lives, “contingencies, obligations, psychological inabilities, and
structural obstacles” may often impede our self-determination (Roessler 2021, 5). For instance, as
is often mentioned, we may voluntarily share intimate information on dating platforms to find
potential partners, but there emerges a structural problem of surveillance capitalism, which can
exploit our personal data and manipulate our intimate relations with others (Zuboff 2019, 376).
Nevertheless, all these imperfections of our self-determination in the real world do not
necessarily threaten our autonomy. On the contrary, they are “constitutive of our ability to shape
ourselves and the world and adopt them as our own” (Roessler 2021, 5-6). Unlike a perfectionist
notion of autonomy, a realistic view of autonomy does not assume that “subjects are only
autonomous when they decide in favor of certain good and reasonable options” (2021, 16). People
can still be autonomous even if they make a wrong and morally bad decision. As Roessler shows,

[A]utonomy is defensible on the basis of the idea that even autonomous decisions must
remain open to criticism – not because others necessarily always know better what is good
for us, but precisely because as reflexive beings, we are aware that in every individual case,
our reflections may be skewed or distorted and must be examined more closely (Roessler
2021, 17, emphasis in original).

Because of its recognition of imperfection, the realistic sense of autonomy argues against any form
of perfectionism (about autonomy) which often “depends on the assumption that people can make
mistakes about the value of their activities” (Kymlicka 2002, 214). If people who tend to make
mistakes are seen as weak, then, for the sake of the societal good, governments and companies have a reason to
discipline or nudge people so as to keep them from making mistakes. In contrast, the realistic
form of autonomy shows that, to be autonomous persons, people should be allowed “to form,
examine, and revise (their) beliefs” (Kymlicka 2002, 217). Kymlicka also states that “leading a
good life is different from leading the life we currently believe to be good” (2002, 214). In this
sense, deliberation not only means that we always lead our lives according to what we believe to
be good, but that we also have the freedom to question the beliefs we hold:

Deliberation, then, does not only take the form of asking which course of action maximizes
a particular value that is held unquestioned. We also question, and worry about, whether
that value is really worth pursuing (Kymlicka 2002, 215).

All in all, Habermas’s dialogical notion of autonomy focuses on people’s capacity to make their
own choices in a communicative discourse. These choices are not prescribed by instrumental rules,
but neither are they predefined by a set notion of moral goodness or rightness. People should be
allowed to engage in a process of interaction to find what they care about, even if they may make
mistakes or be ambivalent during that very process.

3.6 Retaining Dialogical Autonomy in Algorithmic Love

Now, armed with the dialogical notion of autonomy, we can go back and analyze the concept of
algorithmic love. When love relations are mediated by dating algorithms, there is the risk of
colonizing love relationships. I will argue that the algorithmic colonization of love is wrong, not
because it infringes on some idealized standard of romantic love, but rather because dating
algorithms undermine people’s dialogical autonomy, making them unable to interact with others
openly and think of other possibilities. I thereby propose to retain such a dialogical autonomy with
respect to algorithmic love, which means that individuals should be provided room to form and
revise their choices while interacting with others in a dialogical process.
In the case of commercialization of love, Adorno, Marcuse, Fromm and Bauman have
already assumed that the criterion of true love is a romantic love untouched by instrumental and
commercial reason. This romantic love is a traditional mode of love, which emphasizes some long-
believed virtues of love, such as eternality and passion. For example, in William Shakespeare’s
Sonnet 116, the opening lines are about what a traditionally romantic love resembles:

Let me not to the marriage of true minds
Admit impediments. Love is not love
Which alters when it alteration finds,
Or bends with the remover to remove…
(Pooler 1918, 111)82

Shakespeare is suggesting that true love is derived from a constant passion and affection through
all difficulties, and it will not change easily when faced with pressures, hardships or enticements.83
The sonnet points out that love is essentially based on unpredictable risks, and that love is love
only because people are willing to embrace future risks with their loved ones.
More specifically, Sergio Costa (2005) defined romantic love across five dimensions. For
the dimension of emotion, romantic love is “a bond with the other that knows no more ardent
desire than the yearning to lead one’s own life in the body of the loved one” (Costa 2005, 3).

82 Source: C. Pooler’s edited book The Works of Shakespeare: Sonnets (1918). See the digital book online <https://archive.org/details/sonnetseditedbyc00shakuoft/page/110/mode/2up>
83 As analyzed by Honneth, Shakespeare’s sonnets are “the first literary testaments to the gradual transformation in cultural attitudes about marriage and love” (Honneth 2014, 233).

Romantic love is also an idealization in which the individual fully recognizes the other’s singularity, so
that it becomes “a synthesis of spiritual and sensual ideals of love” (ibid.). From the perspective
of a relationship, romantic love implies a fusion of marriage and love which is finally oriented
towards building a family. From a cultural perspective, romantic love refers to rituals and actions
through which romantic emotions are mediated. Romantic love also means a kind of special social
interaction that “stands out from the anonymous social context”, and shares communicative
symbols understood only between the couple, and not by outsiders (ibid.).
Those romanticizing descriptions of love are not only somewhat too ideal for people
nowadays; they also represent a strongly stated perfectionism of autonomy regarding love.
Following this line, people are only autonomous in love relations when some standard of romantic
love has been satisfied. For Bauman and Fromm, it is assumed that under conditions of
commercialization, people’s autonomy will be by definition undermined, so romantic love has to
be maintained for its own objective value. In other words, the values of romantic love are valid for
persons independent of their own judgment of those values.
A reformulated Habermasian colonization thesis, however, can provide us with a different
critique of the commodification of love. It is true that Habermas is also concerned with the
dominance of instrumental reason in modern society. We can even reasonably imagine that, like
Fromm and Bauman, Habermas would also criticize the commercialization of romantic relations
undermining subjectivity and causing social pathologies. However, Habermas’s colonization
thesis is more complex than simply drawing a clear-cut line between true love and
commercialization, romantic relations and the market. Unlike for Fromm and Bauman, for Habermas
commercialized love is not immediately wrong, since the commercialization process does
not necessarily undermine dialogical autonomy. In some commercialization
processes, love relations do not immediately become shallow or superficial, and self and
identity are not bypassed at once. Sometimes, the market and commercialization can even
help people improve their romantic relations and identity (Illouz 1997, 13).
What would really concern Habermas is when such a commercialization process
hinders dialogical autonomy, in the sense that people’s open and respectful interactions and
exploration become largely limited. Only in this sense does colonization happen. Love becomes
colonized insofar as people are normalized to such an extent that they are not able to think and
imagine, as equals, other possibilities of love. For Habermas, people should be allowed to find

specific meanings of intimacy through autonomous and respectful interactions with others. But
when colonization happens, when dialogical autonomy is undermined, this process of interaction
will be distorted and even crowded out. This is a situation in which people are not able to freely
share concerns and actions with others.
In short, the Habermasian colonization thesis can provide a more open critique of
algorithmic love. On the one hand, it criticizes the dominance of algorithmic
rationality as a threat to love relations. On the other, it does not prescribe what love is like, and
does not build up a set of moral standards for love. Human love relations are threatened not because
some romanticizing ideal of love has been infringed upon, but because the communicative
infrastructure of interactions has been damaged. What the algorithmic colonization thesis
highlights is an open-minded interaction that ensures that subjects can think about and choose between different
options, and that such an interactive process should not be prescribed and manipulated by
commercial and technical logic.
To reverse the algorithmic colonization of love, we therefore need to retain the dialogical
autonomy in love relations. Specifically, the retention of autonomy has at least two dimensions in
practice.
First, from an algorithmic perspective, the algorithm designers need to ensure that subjects
can engage in an open-minded environment, where people can openly interact with their partners
without hidden exploitation or manipulation. The design of algorithms should encourage a greater
number of alternative possibilities for people to choose from, rather than deciding an intimate relation
based only on a single attribute like one’s credit score, physical appearance, or star sign. As for AI
love coaches, algorithms should be designed in such a way that not many automatic suggestions
or links (about what to do next, which bars should be visited, and so on) pop up in chat rooms, so
that users can focus on real communication, spending more time listening to each other than to
automated recommendations. Furthermore, as a feature of design, AI love coaches should make
their automatic suggestions in a more creative and diversified manner, rather than always nudging
people in a single direction for commercial consumption. The AI coaches can incorporate, for
example, some eco-friendly dating ideas (such as hiking or jogging), or cooking healthy meals
together.
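As a design illustration only, a coach’s suggestion engine could, for instance, be constrained to sample across categories rather than optimizing a single commercial objective. The categories and the sampling rule below are my own invented example, not a feature of any existing application:

import random

SUGGESTIONS = {
    "commercial": ["dinner at a restaurant", "cocktails at a bar"],
    "outdoor": ["a hike", "jogging in the park"],
    "creative": ["cooking a healthy meal together", "sketching in a museum"],
}

def suggest_dates(n: int = 3) -> list[str]:
    """Draw at most one idea per category, so no single logic dominates."""
    categories = random.sample(list(SUGGESTIONS), k=min(n, len(SUGGESTIONS)))
    return [random.choice(SUGGESTIONS[category]) for category in categories]

print(suggest_dates())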
Second, from the perspective of users, we should raise public awareness about the risks of
algorithmic colonization, as well as the significance of proper (mutual) understanding in the

algorithmic society. We can add educational programs to the applications to not only help people
(especially young people) learn about the potential risks of algorithmic colonization, but also
encourage them to be more understanding towards some of the algorithmic decisions that
negatively target others. Individuals need to be able to recognize that other people
can make mistakes, and to show understanding towards those whose performance is judged poorly by
algorithms. As in LaShawn’s case, if a man finds that LaShawn is perfect for him, that says a lot
in itself; it means that she has many characteristics that he loves. Her poor credit score should not
necessarily become the deal-breaker, especially when one considers that a potential partner might
not even try to ask for more details about the reason for her low score. We should encourage people
to think more about the context behind a numerical score, and encourage them to consider whether
a low score might be caused by some uncontrollable factor. As in the example of
CreditScoreDating, it needs to be suggested to users that people with low credit scores are worthy
of understanding and forgiveness, rather than a categorical rejection. With regard to Tinder’s
swiping algorithm, users should be encouraged to explore more details about the people behind
‘plain-looking’ profile pictures, rather than focusing rejections on appearance alone.
Admittedly, these suggestions are very general and face many practical difficulties before
being put into practice. They are meant to show that algorithms can and should be designed in a
way that makes our romantic interactions more open and provides us with more possibilities.

3.7 Conclusion

Dating algorithms can be beneficial to many of us, making our dating experience more interesting
and more efficient. However, we should also be very careful about how algorithms can make our
interactive experience of love become a non-negotiable, automated process. This chapter argued
that dating algorithms have the potential to hinder and distort our open explorations and intimate
interactions with others. Habermas may be right to remind us that we should not give up our
communicative actions too easily in the lifeworld. Even in an algorithmic society, people should
be allowed to explore the specific meaning of love through open-minded interaction with others,
instead of having that meaning prescribed and manipulated by dating application algorithms. As
Fromm suggests, for mankind the “economic machine must serve him, rather than he serve(s) it”
(Fromm 1995, 243). This is also true for dating platforms: algorithms must serve our love life,
rather than supersede it.

Chapter 4: The Algorithmic Colonization of Trust84

Dialogical autonomy – people's open-minded and respectful interactions and exploration – is embedded not only in intimate relations like love, but also in broader contexts of social relations. This is where trust begins to play a role, because these social contexts often involve dependence, and thus rely on trust. Like love, trust is an essential element of the lifeworld. Without trust, our society would disintegrate and collapse (Simmel 1978; Putnam 1997; Fukuyama 1996). This chapter will
critically consider and evaluate how trust relations can be algorithmically colonized in the Big
Data society. To do this, I will focus on a particular case study: China’s Social Credit System
(SCS). One of the stated main goals of this enormous project, announced by the Chinese government, is to raise 'trust' in the entire society. Much ink has been spilled over the system's
legal and political framework, regarding it as a broad strategy of algorithmic governance and
investigating the perils of its unfairness and privacy intrusion. However, few ethical questions have
been asked about how the algorithmic power of the system changes people’s understanding of trust,
and their trust relations with others. In this chapter, I will offer an account of SCS based on the
framework of algorithmic colonization, identifying how algorithms can potentially undermine
humans’ dialogical autonomy in trust relations.85

84 This chapter is based on three articles of mine: 1) Trust as an Ideology? Algorithmic Society and The Manipulative Nature of China's Social Credit System (to be submitted); 2) Automating Trust: How Algorithms Shape Trust Relations in the Big Data Society (to be submitted); 3) Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency. Philosophy & Technology 35(69): 1-25. https://doi.org/10.1007/s13347-022-00564-w
85 I thank Daniel Loick for pointing out the problem of directly applying Habermas's colonization thesis to the Chinese context. Habermas's theory of modernity is based on a historical analysis of the European context specifically, which rests on "the principle of subject rationality and the individual priority to the society" (Madsen 2003, 229). Even Habermas himself admits that Western rationality, traced back to Plato and Aristotle, is different from the collectivism influenced by Confucianism (Habermas 2001, 149). In this light, applying the colonization thesis to the Chinese context might be criticized as a sort of Western-centrism (Yang 2022, 6, 97). But then again, I will not adopt Habermas's historical notion of colonization, but only consider it in a normative way. As I argue, the value of open-minded and respectful interaction and exploration, as rooted in communicative action, has a universalist potential without necessarily leading to a specifiable set of universalist moral or legal norms.

In this chapter, my central argument is that algorithms can drive out the interactive
component of trust, and reduce participatory trust to objective reliance and social compliance. In
this process of reduction, moral and interactive trust is objectified and de-moralized into a kind of
calculation and a form of social control. Plainly aware of this fact, the Chinese government
deliberately re-moralizes algorithmic trust by using trust propaganda. This process of re-
moralization, I argue, not only hides the real operation of algorithmic power over citizens, but also makes people internalize the power of control as moral goodness. In this sense, algorithmic
trust becomes an ideology, which intentionally hides its power of control so that people do not
challenge it. I suggest that we need to rediscover the value of distrust in order to counter the
ideology of algorithmic trust and make our interactions more open and less narrow-minded.
This chapter is structured in six sections. It starts with an introduction to China's SCS project, reconsidering the SCS not as a mere instrument for totalitarian surveillance, as commonly thought, but rather as a way of engineering trust through data and algorithms across the entire society. Section two shows that this engineering of trust, which is not restricted solely to China's SCS plan, represents a wider phenomenon of building algorithm-based automated trust in the Big Data era. After that, sections three, four and five critically examine how the algorithms of social credit scoring can constrain the open-minded and respectful interactions with others through which trust relations are built in cultural, social and personal domains. Section three shows how scoring algorithms reduce the interactive process of trust to an objective reliance, which may result in a culture of objectification and excessive punishment. Section four analyzes how such algorithms can hinder public interactions between citizens and governments, reducing trust to a mere tool of social control. Section five shows how the Chinese government's ideological propaganda influences citizens' algorithmic imaginary, constraining their thinking about other possibilities. The final section of the chapter explores the value of distrust as an alternative mode of thinking, enabling people to resist the algorithmic colonization of trust.

4.1 Rethinking China’s Social Credit System: Engineering Trust

4.1.1 What is the Real Social Credit System?

In 2014, the Chinese authorities released a national policy document titled the ‘Planning Outline
for the Construction of a Social Credit System (2014-2020)’.86 This document points out that many
current social problems in China (including corruption, food safety, environmental pollution, and
so on) are rooted in a lack of trust pervasive across the entire society. Therefore, the goal of the
SCS is to rebuild social trust through a holistic data-driven assessment of trustworthiness for all
individuals, firms, branches of government, and courts in China. The SCS is meant to ensure that "the trustworthy benefit at every turn and the untrustworthy can't move an inch."87
This is a large-scale project. However, one of the most pervasive misconceptions about the
SCS is that the Chinese government will assign every citizen a single social credit score to rate
and determine every aspect of their social and political life (Hvistendahl 2017; Botsman 2017). A
typically misconceived account of the SCS is quoted below.

[The SCS] is a system where all behaviors are rated as either positive or negative and
distilled into a single number, according to rules set by the government. That would create
your Citizen Score and it would tell everyone whether or not you were trustworthy. Plus,
your rating would be publicly ranked against that of the entire population and used to
determine your eligibility for a mortgage or a job, where your children can go to school –
or even just your chances of getting a date (Botsman 2017, 150).88

This description is too simplistic to be true. It may be correct in the sense that the SCS attempts to
deploy big data analytics and algorithms to characterize one’s financial and social lives. But at
present, the designers of the SCS have not yet developed a unified social score for citizens. Instead,

86 It is also called the 'Guidelines of Social Credit System Construction (2014-2020)'. The original official document of this Outline (Chinese version) can be found on China's government website at <http://www.gov.cn/zhengce/content/2014-06/27/content_8913.htm>. An English translation of the document can be found at <https://www.chinalawtranslate.com/en/socialcreditsystem/>.
87 Ibid.
88 Also see the article published in Wired: https://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion

considering the current state of affairs, the SCS has been designed in a far more flexible way, and comprises multiple systems (Liu 2019; Ahmed 2019; Drinhausen & Brussee 2021). The State Council, the highest authority involved in the project, draws up the blueprint of the SCS, under which local authorities and technology firms are encouraged to explore and experiment with their own differing versions of the social credit system.89 As of 2021, the SCS has established a "general
framework and key mechanisms” throughout the country (Drinhausen & Brussee 2021, 3).
Three principal models of the social credit system have been rolled out. The first is the Blacklist/Redlist System, which is implemented at a national level, mostly by the central government together with local governments.90 Unlike other models, the Blacklist/Redlist System is an almost
unified, China-wide system. A recent Trivium China study shows that the function of China’s
central government in the SCS project is not about assigning scores, but rather building a central
database to record and share data, integrating a wide range of data culled from all branches of government.91 Utilizing this database, local authorities and the general public are able to
continually monitor and rate the trustworthiness of individuals and companies (ibid.). The
Blacklist/Redlist System is expected to “be fully integrated with the social credit score” in the
future (Donnelly 2021).92 Currently, however, the central database is only utilized to assign
evaluations to individuals and companies. All these lists are published on Credit China, a
nationwide online platform launched in 2015.93
The Blacklist/Redlist System features different categorizations according to function. The
most common category is the ‘Defaulters Blacklist’, which draws from court judgments. If

89 The State Council is "the most powerful administrative body within the Chinese government" (Donnelly 2021). On a more practical level, it is the National Development and Reform Commission (NDRC) that helps the State Council plan and implement the SCS project in detail.
90 Contrary to punishment-based blacklisting, people whose names appear on redlists are often rated as trustworthy, and will thus be rewarded with various benefits. It seems that blacklisting is far more frequent than redlisting, even if the latter has recently been encouraged by governments.
91 Source: https://socialcredit.triviumchina.com/wp-content/uploads/2019/09/Understanding-Chinas-Social-Credit-System-Trivium-China-20190923.pdf
92 See <https://nhglobalpartners.com/china-social-credit-system-explained/>
93 There are different local platforms where blacklists and redlists are publicized, such as Credit Beijing, Credit Shenzhen, and so on. Nowadays, nearly all cities have their own local websites to publish lists, as well as news and policies about the social credit system.

someone’s name is featured on this blacklist, it means that that person failed to repay a debt, even
after a court had ruled that they were obligated to. This particular blacklist has been widely utilized
throughout the country. Between 2013 and 2020, more than 157 million people were blacklisted.94
The blacklisted defaulters are often called laolai in Chinese, a derogatory word used to describe
dishonest persons who do not pay back loans. These laolai face a powerful ‘joint sanction’, in that
authorities in different fields implement a wide range of punishments. Normally, these defaulters are barred from purchasing first-class train and flight tickets or staying in luxury hotels, and may even face restrictions on their Internet use.95 In some provinces, being labelled a laolai can even influence whether one's children may attend a private school, or be admitted to a college.
The second model is the commercial credit scoring systems built by Chinese private
technology firms. A typical example is the Sesame Credit Score, owned by Alibaba. The idea that
the SCS has a unified scoring system is partly due to the conflation of the SCS with commercial
scores, such as the Sesame Score.96 In 2015, Alibaba's Ant Financial was granted a trial license to
establish a commercial credit scoring system. 97 Its Sesame Credit Score was launched only a
month after acquiring the license, and almost immediately became popular in China. The Sesame
Score is built on an algorithm which is similar to the FICO Score in the United States, in which
five data points are calculated. A major difference, however, is that the Sesame Score not only
calculates consumers’ financial ability, but also their social connections and personal character.
That means that an honest and responsible person may be scored high, but that score can be
negatively affected if one of their friends ranks poorly.
Unlike the Blacklist/Redlist System, the Sesame Credit Score is an incentive-oriented
system that relies on a host of reward mechanisms. With a high Sesame Score, consumers can
acquire many social benefits, such as deposit-free rental of bicycles and houses. As the scores
become incorporated into some of the larger dating platforms, people with good scores may be

94 See <https://www.36kr.com/p/1069365711032195>
95 In this sense, the 'joint sanction' does not restrict citizens from using trains in general, but only the luxury variants. This reality is different from how some Western media outlets exaggeratedly describe the SCS.
96 Another reason for the conflation is the ID code assigned to businesses in China. With this code, people can easily scan and check whether companies are trustworthy or not. The code is only assigned to legal persons or companies, rather than to private citizens.
97 At that time, eight private commercial companies were granted licenses, including another tech giant, Tencent. Among the proposals, Alibaba's Sesame Credit Score was the most influential.

perceived as more ‘attractive’ to potential partners. The Sesame Score is in continual development,
and many more economic and social benefits are expected to be added to the system in the future.
It is notable that, in 2017, several large tech firms (including Alibaba and Tencent) were excluded
from conducting individual credit scoring “for fear of unanticipated side-effects and a potential
backlash” (Drinhausen & Brussee 2021, 19).98 However, these firms’ scoring systems can be still
applied within their commercial realms, and the central government also encourages these
companies to help local authorities build various quantified City Scores.99
The City Score is the third model, implemented by municipal governments. According to
the 2014 Outline document, local provinces and cities have been motivated by the central
government to trial their own rating systems for residents. As of 2021, it is estimated that “80
percent of provinces, regions and cities” have conducted (or have planned to conduct) some
version of a social credit system (Donnelly 2021; see also Reilly et al. 2021).100 Most of these
municipal authorities have simply built their own web portals, similar to those of Credit Shanghai
or Credit Beijing.101 Many cities have also established more advanced quantified City Scores,
enabled by mass data collection and algorithms. Usually developed in cooperation with private technology firms, these city trials utilize "a single numerical score (usually between 1 and 1000, like a FICO score), or a letter grade (usually from A to D)" (Donnelly 2021).
A typical example is Suzhou’s Osmanthus Score, which was implemented in collaboration
with Alibaba.102 Citizens start with an initial score of 100 points, to which points are added or from which points are deducted based on citizen performance (the maximum is 200 points, and the minimum is 0). The Osmanthus Score is calculated based on a range of data.103 The Score can be built up "for donating
blood, volunteer work, and winning awards or special honors”, but the score can be lowered if

98 The central government criticized the Sesame Credit Score as being too bold. Sesame scoring can be used in the commercial arena, but officials worried that it would arouse a potential backlash when applied to the social realm. Nowadays, Sesame Credit is still influential, but is constrained to the commercial domain, where Alibaba's customers are rated and rewarded.
99 For example, Baidu helps by updating the Credit China platform, and Tencent and Alibaba have been contracted to advise credit programs in local cities.
100 See <https://nhglobalpartners.com/china-social-credit-system-explained/>
101 In this sense, they are part of the nationwide Blacklist/Redlist System.
102 The Osmanthus Score is named after Suzhou's city flower: the osmanthus.
103 Source: https://www.sohu.com/a/117443710_349646

there is blacklisting by any of the central or regional authorities (Ahmed 2018, 50). People with
high scores can be rewarded with increased access to public services, such as libraries, public
transportation, and so on. Further benefits for high scorers are expected to be introduced in the
future, the range of which may be expanded into education, medical care, and other social welfare domains (ibid.).
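To make the mechanics of such a City Score concrete, the following minimal sketch reproduces the point logic described above. Only the initial score of 100 and the 0-200 range are taken from the reports; the specific event weights are hypothetical illustrations.

```python
# A minimal sketch of a City Score point mechanism of the Osmanthus type:
# citizens start at 100 points, bounded between 0 and 200. The event weights
# below are hypothetical illustrations, not the actual values used in Suzhou.

INITIAL_SCORE = 100
MIN_SCORE, MAX_SCORE = 0, 200

# Hypothetical mapping from recorded behavior to point adjustments.
EVENT_WEIGHTS = {
    "blood_donation": +10,
    "volunteer_work": +10,
    "special_honor": +20,
    "blacklisted_by_authority": -50,
}

def update_score(score: int, event: str) -> int:
    """Apply one recorded event to a citizen's score, clamped to [0, 200]."""
    adjustment = EVENT_WEIGHTS.get(event, 0)
    return max(MIN_SCORE, min(MAX_SCORE, score + adjustment))

score = INITIAL_SCORE
for event in ["volunteer_work", "blood_donation", "blacklisted_by_authority"]:
    score = update_score(score, event)
print(score)  # 70: charity and compliance are flattened into the same arithmetic
```

The sketch makes visible what Section 4.3 will criticize: qualitatively different acts are rendered commensurable as additions and subtractions on a single number.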
The SCS is developing rapidly, even if it is an ambitious project which may take years to
be fully implemented. The year 2020 was not so much ‘a magic date’ for the finalization of the
SCS, but instead "the end of the initial planning period" (Zhou & Xiao 2020).104 At present,
different government agencies still cannot standardize their data collection processes and share
information nationwide (Reilly et al. 2021).105 China’s latest five-year plan has mentioned the SCS
as a part of the ‘rule of law’, and a drafted Social Credit Law has been issued for internal review.106
The SCS is thus expected to become more integrated and standardized over the coming years
(Drinhausen & Brussee 2021, 1). Even then, however, the SCS will most probably not be a unified
system assigning a single score to citizens, but will rather comprise multiple systems that rate and regulate people's trustworthiness in tandem (Drinhausen & Brussee 2021, 1).

4.1.2 Beyond Orwellian Surveillance

Over the last few years, and especially since 2014, China’s SCS has attracted global attention.107
Most media coverage in the West has considered it to be a form of totalitarian surveillance. The
SCS is often compared to the dystopian society described in ‘Nosedive’, an episode of Black

104 Retrieved from <https://www.abc.net.au/news/2020-01-02/china-social-credit-system-operational-by-2020/11764740>
105 Retrieved from <https://thediplomat.com/2021/03/chinas-social-credit-system-speculation-vs-reality/>
106 The national social credit law may cover what data can be gathered, what untrustworthiness actually means, how to punish and reward people, and so on. Currently, many local governments have published their own regulations on social credit. Many experts predict that the national law may resemble Shanghai's credit regulations.
107 In public media, besides 'SCS', China's social credit system is sometimes also called 'SoCS'. According to Google Trends, the search volume for 'social credit' has rocketed since 2016, and most of the related queries are for 'social credit score', 'social credit China', and 'social credit system China'. See <https://trends.google.com/trends/explore?date=all&q=social%20credit>

Mirror, where everyone rates their interactions with others, which can in turn affect their
socioeconomic status. Alternatively, the SCS has been described as an Orwellian surveillance
technique for totalitarian control (Ohlberg et al. 2017; Galeon 2017; Botsman 2017). Mike Pence,
the former Vice President of the United States, has stated that “China’s rulers aim to implement
an Orwellian system premised on controlling virtually every facet of human life”.108 In this section,
I will argue that while the SCS depends on widespread surveillance, surveillance itself is not the end, but rather a tool used to engineer trust in Chinese society.
It is true that China's SCS project is highly reliant on massive data surveillance. The SCS is built by central and local governments in collaboration with large technology firms, which have the technical capacity to collect vast amounts of data, ranging from the most private of domains to the social and political conduct of citizens in the public sphere. This means that
under the SCS, individuals are almost “completely transparent to the government and its corporate
partners” (Ding & Zhong 2020, 4). As China is seen as an authoritarian regime, it is no surprise
that there has been a worry expressed that this is a case of Big Data meeting Big Brother, creating
an Orwellian state (Botsman 2017). In such a dystopian world, individuals' behavior and daily life
will be closely monitored, and their social and political conduct will be largely manipulated. This
comparison may, however, downplay the real goal of the SCS.
The SCS is a public and “relatively transparent system” in which people need to actively
participate, so as to foster an ideal image of citizens (Drinhausen & Brussee 2021, 18). By contrast,
to implement totalitarian surveillance, the Chinese government already has an arsenal of more
powerful tools at its command, such as Golden Shield, Skynet, Project Sharp Eyes, and so on
(Drinhausen & Brussee 2021; Dahlia 2020). These surveillance projects are more invasive and are
covertly operated, and often “act beyond the confines of laws and regulations, in a relatively clear
division of labor” (Drinhausen & Brussee 2021, 18). For example, during the pandemic, the
pervasive surveillance mandate of the Safe City Project deployed almost nationwide camera coverage for COVID-19 contact tracing (Reilly et al. 2021). With such nearly seamless

108 Mike Pence remarked on China's SCS in a speech in 2018: https://www.whitehouse.gov/briefings-statements/remarks-vice-president-pence-administrations-policy-toward-china (accessed January 23, 2018). Or: https://www.hudson.org/events/1610-vice-president-mike-pence-s-remarks-on-the-administration-s-policy-towards-china102018

monitoring, it seems that the Chinese government hardly needs to put so much effort into building yet another massive surveillance infrastructure.109
In practice, the SCS has a more particular goal – to engineer moral trust in society. By
studying Chinese media reports, the Merics Institute concluded that while there are many varied
aims associated with the SCS, “restoring social trust and honesty in Chinese society” and “creating
a culture of integrity” is the highest priority (Ohlberg et al. 2017, 6).110 On closer examination, it
becomes apparent that the 2014 Outline has clearly specified that promoting social trust is its
objective.

[Its] inherent requirements are establishing the idea of a sincerity culture, and promoting
honesty and traditional virtues, it uses encouragement for trustworthiness and constraints
against untrustworthiness as incentive mechanisms, and its objective is raising the sincerity
consciousness and credit (trust) levels of the entire society. 111

Most Chinese citizens perceive that China is a low trust society. According to a 2013 research
report on Chinese social mentality, the social trust index “dipped to a record low” (59.7 points out
of 100; Dan 2013).112 More than half of respondents felt distrust towards people and organizations,
and “only about 30 percent trusted strangers” (ibid.). In Kostka’s survey, 76 percent of the

109 Many worry that under the SCS, surveillance can be used to restrict the freedom of speech online and punish those who protest against the government (Horsley 2018). But as has been shown, the Chinese government has more coercive, efficient and specialized tools to address those public and political security problems (Wade 2018).
110 In that research, there are two other important goals: solving economic problems (such as "boosting market efficiency and economic growth", "strengthening food and drug safety", and "customer protection") and improving governance ("increasing government credibility", "improving information exchange within bureaucracy", and "fighting corruption") (Ohlberg et al. 2017, 6).
111 State Council. Shehui Xinyong Tixi Jianshe Guihua Gangyao (2014-2020 nian) [Planning Outline for the Construction of a Social Credit System (2014-2020)]. 14 June 2014. Translation online: https://chinacopyrightandmedia.wordpress.com/2014/06/14/planning-outline-for-the-construction-of-a-social-credit-system-2014-2020/
112 The Blue Book of Social Mentality (2013), conducted by the Chinese Academy of Social Sciences, surveyed "more than 1,900 randomly selected residents in seven cities including Beijing and Shanghai" (Dan 2013).

respondents “believe that there is an issue of mutual mistrust between citizens in China’s society”
(Kostka 2019, 20).
Yongnian Zheng, a prominent political scientist in China, points out that low trust in
Chinese society is “fostered by a loss of traditional rules and norms, as well as a lack of modern
universal laws" (Zheng 2019).113 In Chinese history, traditional ritual propriety, especially Confucianism, reigned over society for more than two thousand years, nurturing individuals' senses of shame and dignity (Tan 2016). However, the Cultural Revolution (1966-1976) pitted people against each other, and largely undermined mutual trust in society (Creemers 2018). That is why most Chinese people rely on social connections (guanxi) to deal with trouble, rather than seeking legal solutions. Since the reform and opening up in 1978, China has transformed increasingly into a commercial society, and its citizens have migrated from familiar neighborhoods into a society of strangers. Individuals are driven to pursue the highest profits, and at the same time most
people find it hard to build relationships of trust among strangers. Growing wealth is therefore
achieved at the cost of a deep crisis of trust in society, ranging from “environmental and food
safety scandals”, to “violations of labor” and “widespread corruption and rent-seeking”
(Drinhausen & Brussee 2021, 4).
Low levels of trust within Chinese society may explain why most Chinese citizens approve of the SCS project. In Kostka's study, 80 percent of respondents expressed opinions "either somewhat
approving or strongly approving SCSs”, while 19 percent took a neutral stance, and only 1 percent
expressed “either strong or somewhat disapproval” (Kostka 2018, 10-11). This research goes on
to claim that “citizens perceive SCSs not as an instrument of surveillance but as an instrument to
improve the quality of life and to close institutional and regulatory gaps, leading to more honest
and law-abiding behavior in society” (2018, 21). This empirical study clearly demonstrates that
the SCS is widely supported by Chinese citizens as a way to rebuild social trust.
To sum up, the SCS is reliant on a massive surveillance infrastructure, but its surveillance does not merely aim to track every detail of citizens' lives so as to achieve Orwellian control. Rather, this project is better understood as a huge effort to engineer trust in Chinese society.

113 See <https://www.thinkchina.sg/how-build-society-trust-china>

4.2 Automating Trust in the Big Data Age

In this section, I will show that the SCS should not be understood as a state of affairs unique to
China, but is instead indicative of a wider phenomenon of building algorithm-driven automated
trust in the Big Data society. This algorithmic trust is not built through interactions between
individuals, but is rather ensured by an automatic enforcement with disciplinary mechanisms. With
Big Data analytics, trust relations have become an automatic process, where people are not even
fully required in the formation of trust, but are only dependent on companies’ or governments’
culling of an individual’s digital traces, predicting their trustworthiness and enforcing punishments
at once.
When considering China’s SCS, a popular belief is that “the Mandarin term ‘credit’
(xinyong) carries a wider meaning than its English language counterpart” (Creemers 2018, 2). It is
assumed that credit in English only concerns people’s financial obligations, an understanding that
fails to “capture the deeper meanings of the word” in Chinese (Ding & Zhong 2020, 2). By contrast,
the Chinese term xinyong implies “a host of lofty moral virtues such as trustworthiness, promise-
keeping, norm abiding, integrity and general courtesy” (Dai 2018, 14). As such, the Chinese use
of credit is naturally and inherently ‘social’, and is used for “a broad and vague moral framing”
(Dai 2018, 14). In other words, in Chinese culture credit and social credit are the same thing,
which naturally covers both market behavior (financial capacity) and social conduct (integrity;
Zou 2021, 142). This point implicitly suggests that social credit is a particular phenomenon that
has only occurred in China, whereas credit systems in the West only calculate consumers’ financial
ability (creditworthiness), rather than their honesty and trustworthiness.114
In my view, this simplistic understanding of credit cuts off the internal connection between
credit in China and the West, which may hide the complex history of credit and what social credit
really means in modern society. It is true that in the Chinese language, the moral sense of the
Chinese term for credit (xinyong) is explicit.115 The Chinese character 信 (xin), compounded

114 This is why many Chinese officials announce that extending market trust to the social realm via the SCS is in line with Chinese culture, or Chinese national characteristics.
115 There is no exact Chinese word identical with the English term 'credit', but it is widely accepted that 'credit' translates into 'xinyong (信⽤)', which is associated with the moral virtues of trustworthiness.

from 亻 (person) and ⾔ (speak), literally means that a person makes a promise and then realizes it. This ethical meaning is vividly illustrated by the oracle form 允 (the original Chinese character of 信): the image of a person who bows his head with his hands hanging down, showing respect and integrity.116 Nevertheless, the term 'credit' in the West also has a similar moral
meaning.
The term ‘credit’ stems from the Latin credo (Sithigh & Siems 2019, 25; Muldrew 1998,
3), which developed into the Italian credito (to trust, entrust, believe), and then to the Middle
French crédit (belief, trust) around the 15th century. During that time, the word often bore the moral
and religious character of trust and belief.117 In Western history, credit and credit relations were not reduced to merely economic behavior motivated by the desire for profits and commodities. As Muldrew (1998, 3) shows, this purely economic interpretation of credit did not exist until the late 17th century, when capitalism was commencing its rapid ascendancy in Europe. Before
that period, credit was a term often used with reference to “a person’s reputation, the estimate in
which his or her character is held, their repute or trustworthiness” (Muldrew 1998, 3). Today, as
shown in the Oxford Dictionary, we can still see that ‘credit’ has the meaning of “a person or thing
whose qualities or achievements are praised and who therefore earns respect for
somebody/something else”.118
In short, the word implies the moral virtues of trustworthiness in both Chinese and Western
contexts. 119 This moral connotation is even commonly found in the credit scoring industry in
modern Western society. In the West, credit surveillance has long been an effective tool to promote
honesty and integrity, the effects of which are similar to laws and morality (Lauer 2017). Historically,

116 "允 (yun)" in the oldest Chinese dictionary, Explaining Chinese Characters (说⽂解字).
117 "Credit". Online Etymology Dictionary, https://www.etymonline.com/word/credit#etymonline_v_46419. Retrieved 5 November 2019.
118 See 'credit' in the Oxford Dictionary: https://www.oxfordlearnersdictionaries.com/definition/english/credit_1?q=credit
119 Compared with the West, according to the French sinologist Francois Jullien, the Chinese 信 (credit) tends to emphasize 'trust' and 'attachment', rather than the religious meaning of 'belief' or 'to believe', as in the West. Jullien holds that the Chinese 信 (credit) began to take on a religious meaning only when Buddhism, and especially Christianity, were introduced to China. See: https://cchc.fah.um.edu.mo/wp-content/uploads/2018/05/02_Francois-Jullien.pdf

creditworthiness was often linked to one's moral character, and to qualities such as honesty and
integrity. Some may argue that in the credit-scoring age, a numerical credit score is only like a risk
‘code’ for credit consumers “to unlock and to lockout across time and market spaces, and to
determine the interest rate that the credit consumer will pay for a particular product and at a specific
moment” (Langley 2014, 17). As a result, the credit scoring process is detached from an
individual’s real character, becoming a pure market strategy to calculate the potential risks of credit
customers.
However, this line of argument is not very convincing. As Lauer shows, a scoring system
is still basically about evaluating an individual’s character, in which financial and moral qualities
cannot be untangled. It is true that computerized credit scoring has depersonalized credit
evaluation, and creditworthiness has been calculated as credit risk. But this scoring process “cannot
eliminate the underlying morality of creditworthiness” (Lauer 2017, 21). The judgment of one’s
character may have been replaced by the language of credit risk, but the risk still reflects people's character in terms of qualities such as integrity and responsibility.
This moral connotation of credit is clearly revealed in the fact that credit scores are widely
applied in some non-financial areas in the United States. For example, the Society for Human Resource Management (SHRM) conducted a survey, which showed that about 60 percent of
employers in America performed background credit checks for their job candidates.120 Insurance
companies now use credit checks to decide which insurance rates should be assigned to drivers,
homeowners, and even patients (Langenbucher 2020, 527). As has been discussed at length, credit scores have even found their way into the online dating industry, including on a website that promotes itself as a place "where good credit is sexy".121 In 2017, Match.com collaborated with
Discover, conducting a massive survey on online daters to reveal the relationship between credit
scores and romance. When asked what a good credit score means for a person's character, "73
percent of survey respondents said responsible, 40 percent said trustworthy and 38 percent said

120 This situation may have improved in recent years. As reported by the National Association of Professional Background Screeners and HR.com, about 25 percent of HR professionals still conduct credit checks on their job candidates. See <https://www.shrm.org/hr-today/trends-and-forecasting/research-and-surveys/pages/creditbackgroundchecks.aspx>
121 See the dating website: https://www.creditscoredating.com/

smart” (Discover & Match Media Group 2017).122 In these cases, credit scores are used not only
to gauge the likelihood of financial default, but also to judge one’s “honesty, responsibility, and
overall tendency toward conformity or friction” (Lauer 2017, 22).123
It is therefore a misconception that China’s SCS is about calculating one’s honesty and
trustworthiness, while the credit scoring systems in the West (like the FICO Score) only consider
consumers’ financial capability. In fact, the line between the financial and social realms is not as
clear-cut as most people imagine. This is especially true, as Big Data technology makes it possible
for credit bureaus to access people’s financial data, as well as content shared on social media
(Herley & Adebayo 2016). As a result, the financial and moral qualities of individuals have been
increasingly blending with each other. Credit scores are not simply a calculation of the risks posed by credit users; they also reflect people's general character in terms of trustworthiness and responsibility.
Admittedly, credit scoring systems in the West are not as unified and centralized as they
are in China. They are not designed by state but are instead often operated by private companies
“directed toward the maximization of their economic objectives as profit making enterprises”
(Backer 2018, 26-27). Moreover, the West has more mature legal systems to regulate
overwhelming credit surveillance, so that basic human rights can be protected from infringement.
However, a similar logic is shared by both China and the West: they use data and algorithms to
normalize people’s trustworthy behavior. As Larry Backer rightly shows, the phenomenon of
social credit happens not only in China, but also in the West. Both build "rating systems as a
mechanism for disciplining behavior”, in financial as well as non-financial domains (Backer 2018,
34).
I therefore propose that the SCS can be seen as a paradigm of engineering an algorithm-
based ‘trust’ in the Big Data era.124 These rating systems basically consist of three interconnected

122 See <https://www.businesswire.com/news/home/20170821005049/en/Online-Daters-Say-a-Good-Credit-Score-is-More-Attractive-Than-a-Fancy-Car>
123 As John Espenschied, the owner of Insurance Brokers Group, explains: "You can have a clean driving history with no driving violations or accidents, but still be penalized for poor credit, which means higher insurance premiums." This happens because drivers with poor credit are often considered to be less responsible, and thus pose more risk when driving cars. See <https://www.insure.com/car-insurance/bad-credit-history.html>
124 This point reminds us of Habermas's critique of technocratic social engineering, which precedes the colonization thesis but arguably is also taken up by it. Technocracy essentially consists in looking for expertise-led technological

elements. First, access to a database, or the use of targeted data harvesting, through which a wide range of data can be gathered for further analysis. Second, algorithms that can interpret, decide and single out what counts as trustworthy and untrustworthy behavior. Third, an incentive-and-punishment mechanism that guarantees trust. This algorithmic trust is often automated, because the trust is derived from the automated enforcement of penalties and incentives. Automated trust assumes that trustworthy individuals can be immediately rewarded with benefits, while cheating individuals will be punished at once. This swift enforcement is guaranteed by massive data collection and big data analytics, which can instantly classify and single out the untrustworthy. If a person jaywalks, for instance, a camera can automatically detect the rule-breaking behavior, which is then automatically recorded in that person's credit profile and triggers punishment.125
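To make this three-element structure concrete, here is a minimal sketch of such a detect-classify-sanction pipeline. All names, events and sanction rules are hypothetical illustrations of the logic, not the implementation of any actual system.

```python
# A minimal sketch of the three interconnected elements of algorithmic trust:
# (1) a database of harvested behavioral data, (2) an algorithm that classifies
# behavior as trustworthy or untrustworthy, and (3) an automated
# incentive-and-punishment mechanism. All rules and events are hypothetical.

from collections import defaultdict

database = defaultdict(list)  # element 1: harvested records, one list per person

# Element 2: the classification rule singling out untrustworthy behavior.
UNTRUSTWORTHY_EVENTS = {"jaywalking", "loan_default"}

def record_event(person: str, event: str) -> None:
    """Store a detected behavior and immediately enforce its consequence."""
    database[person].append(event)  # detection is recorded automatically...
    enforce(person)                 # ...and sanctions follow at once

def enforce(person: str) -> None:
    """Element 3: automated, unilateral enforcement without any interaction."""
    violations = [e for e in database[person] if e in UNTRUSTWORTHY_EVENTS]
    if violations:
        print(f"{person}: restricted (violations: {violations})")
    else:
        print(f"{person}: rewarded")

record_event("citizen_a", "blood_donation")  # citizen_a: rewarded
record_event("citizen_b", "jaywalking")      # citizen_b: restricted (violations: ['jaywalking'])
```

The point of the sketch is that no step involves the judged person: detection, classification and sanction run as one automatic chain, which is precisely what makes this 'trust' automated.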
Moreover, automated trust relies on a form of self-enforcement. Unlike laws and
regulations, rating systems are not grounded in ‘command obedience’, but in assessment,
punishment and incentive (Backer 2018, 18). Legal subjects are passive in the sense that they
“must be made to obey” (Staples 2000, 4). They will not obey if no evidence is given for their bad
conduct. However, these rating systems appeal to “active positive participation rather than passive
obedience to the letter command” (Backer 2018, 15). Based on such a reward and punishment
mechanism, rating systems can produce a degree of self-enforcement in which individuals
subjected to that scoring system tend to participate in self-evaluation, and then change their
behavior to meet the expectations defined by algorithms. This is “a training of the mind and a
development of the soul" that shapes one's very being, rather than "only a superficial influence over individual behavior" exerted by the force of law (Lauer 2017, 135).
Many scholars may express support for such automated or algorithmic trust, since these
algorithmic rating systems can help us overcome what Rachel Botsman calls the trust gap between
“us and the unknown” (Botsman 2017, 25). According to Botsman, the sharing economy has become
popular and successful because algorithmic rating systems can “make unknown sellers seem
familiar to people”, which can reduce uncertainties and vulnerability to make transactions possible

solutions (like the SCS) to practical problems (like trust), with Habermas insisting on the distinction between technical and practical questions and on the fact that practical questions can only be resolved in public discourses about "how we would like to live". I thank Robin Celikates for pointing this out.
125 In reality, some records of discredited behavior are manually submitted, and not instantly assessed at all. But the automated punishment process has become increasingly common in the Big Data society.

(2017, 21). A good example is Uber, a typical ride-sharing application where drivers and
passengers can rate each other. If drivers have relatively low scores, they can be immediately
punished by the algorithm-driven platform, which restricts their access to passengers or even eventually removes them from the platform. If passengers are rated poorly by past drivers, the
platform will make sure that fewer drivers volunteer to pick them up.
Algorithmic rating systems can unlock the potential to trust unfamiliar strangers, providing a
new mechanism to assure trust. In the past, commercial transactions had to be dependent on “a
centralized authority such as a government or a bank” (Botsman 2017, 50). But now people can
trust each other thanks to mediation by data-driven records and a rating system.

Trust that used to flow upwards to referees and regulations, to authorities and experts, to
watchdogs and gatekeepers, is now flowing horizontally… to programs and bots.
(Botsman 2017, 8; emphasis added)

I do not deny the advantages of such an algorithmic trust. In a society of strangers, we do need some social mechanisms to assure us that the strangers with whom we interact are reliable. Otherwise, everyone distrusts everyone else and society may collapse; or at least, the social costs of human activities will greatly increase (Fukuyama 1996).
What is and should be of concern is that negative consequences can result from the use of algorithms – an automatic and de-contextualized technology – in maintaining 'social trust'. While algorithms can help overcome the 'trust gap' and rebuild our connection with
each other, at the same time they have the potential to cause some even deeper problems with our
trust relations. This is a concern familiar to Botsman, who worries that people’s over-reliance on
ratings and reviews may lead to “a kind of digital purgatory” in which “one mistake or
misdemeanor could follow us potentially for the rest of our lives” (Botsman 2017, 8-9).
Automated trust can result in some further consequences. Following the algorithmic
colonization framework, I will argue that algorithmic trust can be understood as a colonization of
trust, which may curtail open-minded and respectful interactions and exploration with others. Big
Data technologies can lead to what Zuboff calls a state of ‘uncontract’ (2019, 211). The meaning
of ‘uncontract’ is that in a Big Data society, people are algorithmically deemed to be trustworthy
or not, and there is no need for any mutual agreement or contracts. In her words, this process is a
‘unilateral execution’, in which there is no need for social interaction – no promising, negotiating, or compromising with others. All that is needed is the automated enforcement of penalties. Let’s
consider an example given by Zuboff (2019, 295). If a car owner cannot repay a car loan, the credit
company can use its database to find the car and control an automatic trigger to shut down the
car’s engine immediately, even if there is an emergency, such as transporting a pregnant woman
to the hospital. Big Data technology has enabled companies to inflict such automated punishments.
Trust turns out to be an automatic mechanical process, ensured only by algorithms.
Admittedly, it may be too early to claim that automated trust has become dominant in our
current society, as many rating systems are not ‘smart’ enough to automatically detect discredited
behavior, nor to punish people immediately. For some local versions of the SCS, for example,
information even has to be manually inputted, rather than being gathered and analyzed automatically by algorithms (Drinhausen & Brussee 2021, 20). Despite this, as society grows 'smarter', driven by AI and Big Data technologies, it can be anticipated that such automated trust may become increasingly dominant in the near future. Therefore, by investigating algorithmic trust, I intend only to disclose a tendency or potentiality: that technological rationality can drive out open and respectful interactions in our society.
Nonetheless, such algorithmic trust is not only found in the economic sphere, as Zuboff has described in the case of the car loan punishment. In the economic sphere, the automated trust ensured by those rating systems is not necessarily a technological nightmare; instead, it can (at times) even reduce risk and transaction costs (as in the case of rating hotels on Airbnb, rating drivers on Uber, or rating sellers on Amazon). After all, the economic domain is about efficiency and maximizing profits. What is more problematic – and what is going to be investigated here further – is when such algorithmic trust stretches into, or becomes dominant in, social and civic domains, where social interactions and mutual sharing are needed. To investigate this further, I
will mainly focus on China’s Social Credit System, a typical example of algorithmic trust
established not only in economic transactions, but also in social interactions. I will closely consider
how algorithmic trust can potentially constrain free and respectful interactions in cultural, social
and personal domains.

4.3 Algorithmic Trust as Reliance: A Culture of Objectification

In this section, I will argue that credit scoring algorithms can establish a form of ‘trust’ that is not
derived from mutual interaction, but rather from unilateral execution – an automated enforcement
of penalties. This 'trust' is not interactive, but an objective reliance. As a result, people treat others only as objects, rather than as persons to be interacted with. This dehumanization in trust
relations contributes to a culture of objectification and excessive punishment.
The SCS, driven by algorithmic rationality, runs the risk of reducing interactive trust to a mere reliance, and of treating people as objects. In the literature on trust, many have argued that trusting someone is different from merely relying on that person or some object (Holton 1994; Seligman 1997; Lahno 2020; Davidson & Satta 2021). A core feature of reliance is that people often treat the person relied upon "from an objective point of view", just like "a physicist would look at the particles in this atom smasher" (Lahno 2020, 151). In reliant relationships, people only
depend on evidence or statistical analysis to show the trustworthiness of people under various
circumstances. Bernd Lahno gives the example of embezzlement to clarify this point. Suppose some strong evidence shows that your friend has embezzled one million euros of company funds, but he insists that he is innocent. On an account of reliance, most people would likely conclude from the available evidence that your friend has committed a crime. But because he is a friend whom you trust, you may persist in the view that the alleged crime does not fit with his character.

You will be looking for explanations of the evidence that are consistent with your friend’s
innocence. You do not just evaluate the given evidence in some objective and indifferent
way. The special relationship that connects you and your friend will rather motivate you to
see the given information in a certain light and question it critically (Lahno 2020, 151).

Trust is not reliance, because trust is basically an interaction between a trustor and a trustee, which
‘involves a mutual relation’ shared with reasons, as well as with passion and emotion (Origgi 2020,
89). In this sense, trust is about a more open-minded and participatory exploration than a purely
cognitive process or a passive calculation of data. As Richard Holton argues, “trust involves
something like a participant stance toward the person you are trusting” (1994, 66). If a reliable
machine breaks down, we feel disappointed; but if a friend lets us down, we are not just
disappointed, “we feel betrayed” (ibid.). That is because people have mutually interacted, and their
acts are therefore essentially interrelated. In the embezzlement case, you are not a bystander or a
scientist who observes and judges your friend’s case from a distant and objective stance. Rather,
you are holding ‘a participant stance’, directly involved in the process. As Lahno shows, this
participant stance means that people have emotional dispositions directed at others (such as anger, regret, or gratitude; Lahno 2020).126
In the case of the SCS, the credit scoring algorithm attempts to build a 'trust' that is not achieved through mutual interactions, but only through unilateral calculation and enforcement by the powerful (e.g., the Chinese central and local governments). The Chinese government is essentially calculating who is 'trustworthy' and who is not, based on massive data tracking and analysis. Chinese citizens thus become people who passively wait to be judged and subjected to algorithmic calculation.127 This unequal power relation turns interactive trust into a system
of utilitarian judgments and calculations on an individual’s reliability. The powerful use
algorithms to take a distant perspective, surveilling and measuring to what degree an individual is
reliable. Subjects are treated not as human beings, but as abstract numbers or objects.
The credit scoring algorithm is automatically implemented, following a step-by-step set of operations. It analyzes and predicts whether people are trustworthy or not, and then uses rewards or punishments to ensure a continual state of reliance. This automatic process is highly dehumanizing in the sense that it cares little about context and real persons. Such a tendency towards dehumanization has two specific dimensions, in which trust is disconnected from real individuals.

126 That is why John Dunn distinguishes between trust as 'a human passion' and trust as a 'modality of human action' (Dunn 2000). The former is "the confident expectation of benign intentions by another agent", while as a modality of action, trust is only a strategic matter, "a more or less consciously chosen policy for handling the freedom of other human agents or agencies" (2000, 74). Williamson (1993, 484) puts it this way: "To trust in someone does not imply confidence in their judgment. Rather, to trust is to ascribe benign intent".
127 The reduction of trust to reliance makes people see not only others, but also themselves as objects. The City Scores make citizens regard themselves as players in a must-win game that is competitive, playful and addictive. To win the game, citizens have to adapt themselves to a system of norms prescribed by scoring algorithms. For example, what behavior is trustworthy and what is not? What is the easiest way to boost scores – blood donations, volunteering services, or visiting your parents regularly? How often does sorting trash boost the scores? All of these norms are gleaned online, shared by others, or learnt by heart, to improve scores. Following these algorithmic norms, citizens are only concerned with becoming high scorers, and do not care about pro-social values like hospitality, compassion, and social responsibility.

The first dimension is that data and algorithms decontextualize human experience. Human
experiences are captured, subjected to calculation, and translated into a language of code and
mathematics. Conducting volunteering work is a subjective and qualitative experience, tied to
people’s emotions of empathy and benevolence. But in some pilot city scoring regimes, volunteering increased an individual’s score; wearing masks during the Covid-19 pandemic also increased scores (Drinhausen & Brussee 2021, 15). Volunteer services and social compliance are thus homogenized as the same thing, in that both can be used to boost scores by ten points (van Blomberg 2018). With algorithms, human experiences are simply translated into standardized
numerical points. Botsman states that trust algorithms fail to “embrace the nuances, inconsistencies
and contradictions inherent in human beings”128 (Botsman 2017). Algorithmic systems do not take
into account whether the blacklisted or score-assigned person misses “paying a bill because they
were in hospital”, or whether they are simply freeloaders (ibid.).
The second dimension is that people's specific behavior is effectively uncoupled from punishment and reward. In penal law, someone is punished only for their specific criminal acts. But for the Blacklist/Redlist System and Chinese City Scores, "trust-related acts will not directly be linked to a penalty" (van Blomberg 2018, 98). Instead, all of those acts are recorded together in a complex and composite trustworthiness database, on the basis of which people are rewarded or punished. If someone is blacklisted for not visiting their parents regularly, they could be punished just as severely as someone who does not pay their bills. Punishments are imposed equally on everyone who is blacklisted, irrespective of the nature of their untrustworthy actions. For City Scores, these faceless and undifferentiated punishments are further generalized into quantified scores (van Blomberg 2018). People with low scores will be immediately punished, no matter whether the low score can be reasonably justified or not.
This dehumanizing feature of algorithmic trust may contribute to a culture of objectification and excessive punishment. In Modernity and the Holocaust (1989),
Bauman reminds us that a serious problem of ‘moral invisibility’ can arise when a real individual
is separated from the consequences of an action. In the context of face-to-face interaction,
individuals have an innate moral impulse to not only understand and empathize with others, but
also to take responsibility for others (Bauman 2010, 25). This is constitutive of the participant

128 Retrieved from <https://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion>.

132
stance towards others. The reduction of trust to reliance will lead to a bypassing of moral agency,
where people are reduced to instrumental objects so that others can achieve their own goals. Only
focusing on credit scores and forgetting the experience of real human beings can incorrectly justify
inhumane punishments.
In the 2014 Outline, Chinese authorities bluntly state that the SCS serves to ensure that
“the trustworthy benefit at every turn and the untrustworthy can’t move an inch.”129 Under this
principle, the trustworthy are seen as absolutely good, whereas the untrustworthy are treated as
uncompromisingly bad. Under such a credo of unforgiveness, the imposition of inhumane punishments on citizens is justified.
The laolai who fail to make repayments can be publicly named and shamed in various ways.130 They are forced to use a special ringtone that sounds like a siren and is accompanied by a warning against doing business with that individual (Hayward 2019). In doing so, the punishment ensures that the laolai are constantly humiliated when they are called by their friends, family members or business partners. A cinema in Sichuan province screened the names of laolai, along with details about them, before the start of movies. In Anhui province, the detailed private information of laolai, including their names, photographs, and ID numbers, is "flashed across
billboards and giant screens in public squares” (Handley & Xiao 2019).131 In Shenzhen, jaywalkers
can be captured by facial recognition technology, which not only influences their city scores, but
also results in the “display (of) the faces of jaywalkers on large LED screens at intersections” (Tao
2018).132
All these ‘innovative’ punishments try to exclude offenders by shaming and humiliation.
This exclusion strategy can be reinforced by social network platforms. In Hebei province, people
can scan for laolai within a 500-meter radius through WeChat (a Chinese equivalent of Facebook), much like an air force radar scanning for enemy planes. Clicking on a marked laolai reveals who that person is, where he or she lives, and why that person has been negatively tagged as untrustworthy. Because WeChat is used nationwide, this app, nicknamed the 'Deadbeat Map', may potentially be applied throughout China. With this map, people could easily figure out who is a laolai and then efficiently exclude them.

129 See <https://www.chinalawtranslate.com/en/socialcreditsystem/>
130 China's highest court even encouraged lower courts to deploy more creative punishments (Xu & Xiao 2018).
131 See <https://www.abc.net.au/news/2019-01-24/new-wechat-app-maps-deadbeat-debtors-in-china/10739016>
132 See <https://www.scmp.com/tech/china-tech/article/2138960/jaywalkers-under-surveillance-shenzhen-soon-be-punished-text>
This sort of social exclusion can have a ripple effect, in that people socially connected with
laolai may be negatively impacted. With Alibaba's Sesame Score, a consumer's social connections are one of the five data points taken into account when calculating scores. In that algorithm, a person's Sesame Score can be negatively affected if his or her friends have poor scores. That may
encourage people with high scores to cut off social connections with people with low scores.
Currently, this algorithm has not yet been applied to City Scores or the Blacklist/Redlist program.
However, if this happens in the future, it could produce a new segregation not based on race, but
rather on social credit scores. This trust exclusion can lead to a kind of ‘social death’ where people
who have lost their social relationships and roles are treated as if they are not fully human, or as
sub-human, by the wider society (Patterson 2018, 5).
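The ripple effect just described can also be sketched schematically. In the following Python fragment, the assumption that twenty percent of a person's score derives from the average score of their connections is entirely hypothetical; the real weighting of the Sesame algorithm has not been published.

```python
# Hypothetical sketch of score propagation through a social graph.
# The 20% weighting of friends' scores is an assumption for illustration.

def adjusted_score(own_score: float, friend_scores: list[float],
                   social_weight: float = 0.2) -> float:
    if not friend_scores:
        return own_score
    friends_avg = sum(friend_scores) / len(friend_scores)
    return (1 - social_weight) * own_score + social_weight * friends_avg

alice = 750.0
print(adjusted_score(alice, [720.0, 740.0]))  # 746.0: high-scoring friends
print(adjusted_score(alice, [720.0, 400.0]))  # 712.0: one low scorer drags Alice down
# The incentive is clear: cutting the low-scoring friend restores Alice's score,
# which is precisely the segregation dynamic described above.
```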
The goal of all these exclusion and segregation strategies is to instill a sense of guilt and
shame, which motivates the untrustworthy to change their behavior. Unless they improve their scores or pay back their debts, the untrustworthy suffer an almost permanent sanction. However, improving scores or paying off debt may not be that easy. Ding and Zhong (2020, 8) express
the concern that the SCS could trigger a serious snowball effect: “Owing debt lowers one’s social
credit score, which in turn limit one’s access to resources and information, thereby further
diminishing one’s ability to make money and repay the debt”. If people’s social connections are
considered in City Scores, it is possible that those who have broken the trust may find it hard to
improve their scores, since the exclusion can reduce “the chances of the untrustworthy to regain
credibility by forming relationships with the trustworthy” (Ding & Zhong 2020, 9).
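The snowball effect that Ding and Zhong describe is, structurally, a feedback loop, and a toy simulation can make its dynamics vivid. All numbers below are hypothetical; this is a sketch of the mechanism, not a model of any real system.

```python
# A toy simulation of the snowball effect; all numbers are hypothetical.
debt, score, income = 1000.0, 550.0, 200.0

for month in range(6):
    if score < 600:
        income *= 0.9   # a low score restricts access to work and credit
    repayment = min(income * 0.5, debt)
    debt -= repayment
    if debt > 0:
        score -= 10     # outstanding debt keeps lowering the score
    print(f"month {month}: debt={debt:.0f}, score={score:.0f}, income={income:.0f}")

# Repayments shrink geometrically (here they sum to at most 900 even over an
# infinite horizon), so the debt is never cleared: the exclusion itself erodes
# the capacity to regain trustworthiness.
```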
In closing, algorithms can reduce interactive trust relations to an objective reliance. As a
result, trust relations are detached from context, and people are perceived as objects instead of real
humans that can be meaningfully interacted with. This dehumanizing process may lead to a culture
of objectification and excessive punishment.

4.4 Algorithmic Trust as Social Compliance

Algorithmic rating systems not only calculate and predict untrustworthiness, but also work toward
the goal of controlling and regulating the untrustworthy. This section will examine how such a
logic of control can hinder and distort open-minded and respectful interactions in developing trust
relations. The SCS was announced as a project that aimed to raise ‘trust’ across the whole Chinese
society, but since there is a lack of publicly determined norms on what is trustworthy and what is
not, the SCS’s use of credit scoring algorithms tends to build a de facto social compliance, rather
than interactive trust. This social compliance can be very problematic when it is misused to
politically control citizens.
The SCS uses rating systems to track, single out, and punish those who disobey the rules.
In this sense, the tool that the SCS uses to construct ‘trust’ is inherently a technology of control.
Generally speaking, to control someone means to monitor and intervene in that person's behavior. When monitoring takes place, the monitored must follow the rules, and interventions are introduced when deviations occur so that the controlling system can "positively cope with them and adjust the process" (Castelfranchi & Falcone 2010, 191). James Rule understands control as the mechanisms
which “discourage or forestall disobedience, which either punishes such behavior once it has
occurred, or prevents those with inclinations to disobedience from acting on those inclinations”
(Rule 1973, 20).
I argue that the technology of control that the SCS applies to establish 'trust' has the potential to undermine the interactive infrastructure of trust itself. Intuitively speaking, control
runs counter to trust: if you control someone, you do not trust that person. The technology of
control may undermine the development of trust in the sense that it can impede an individual’s
participatory motivation. As Lahno argues, measures of control may “tend to destroy the normative
basis of trust” (2020, 157). Lahno gives the example of a working team in which everyone is
working hard to pursue a collective goal. Each one works hard because they trust each other by
sharing a common belief: if each one works hard on his own part of the project, they will achieve
their collective goal. But now, for whatever reason, the leader “announces henceforth to control
the compliance of each team member meticulously and to sanction severely all behavior which
they find defective” (2020, 157). Such an initiative of control will erode trust, as it undermines
people's inner motivations of trustworthiness. In this sense, people are not freely engaged in open-minded and respectful interaction to build trust relations with others, but are rather forced or disciplined to do so. This outside force distorts the open interactions with others that would otherwise develop into trust relations.
Algorithmic control can potentially render interactive trust itself unnecessary. The logic of control is there to reduce risk; by controlling misbehavior, for example, the SCS algorithmic system tries to reduce and even eliminate the risk posed by untrustworthiness. Nonetheless, trust is not about the elimination of risk. On the contrary, it often requires some degree of risk. As Seligman (1997, 21) points out, trust is inherently linked to risk and uncertainty, since trust is often developed in situations where "the acts, character, or intentions of the other cannot be confirmed". Risk is thus a trigger or chance for people to mutually interact with each other, and to face risk together. In this sense, trust is an interactive process in which people are willing to take on the burden of risk – the risk of being betrayed. By contrast, the core function of control is to eliminate risk. With the assurance of risk reduction that such a technology of control provides, the interaction required to build trust is no longer needed.
de-linguistified media of money and power, algorithmic rating systems as a technology of control can make it so that people do not need to argue, negotiate, or engage in dialogue with each other. The SCS is a technology that ensures a 'trust' relation without any need for human interaction.
Thus, algorithmic rating systems not only distort the interactive infrastructure of trust, but can also make interactive trust itself impossible. As a result, trust loses its originally positive force
in producing and reproducing mutual interactions. Instead, trust relations are only relegated to the
fulfillment of systemically defined commands and norms, necessarily ensured by automated
punishments. The technology of control can be used to enforce promises in order to extend ‘trust’,
but the ‘trust’ secured by technological power is a mechanism of control which is used to enforce
obligation and compliance. As Fukuyama points out, people “who do not trust one another will
end up cooperating only under a system of formal rules and regulations” (1996, 27).
People may argue that trust relations also need some sort of disciplinary mechanism. In
Trust and Power (1979, 33), Niklas Luhmann shows that in trust relations, the people involved are
often required to follow some norms to win the trust of others. However, these norms and rules
are very different from disciplinary norms in algorithmic rating systems.
Trust inherently runs against control, but this does not mean that there are no norms or
obligations in trust relations. The upshot is that such norms are not imposed by an authority but are instead shared by all actors through communicative action. For Habermas, human interaction needs to be sustained by normative codes and values (Habermas 1996). For communicative action, these
norms are not prescribed by a charismatic authority, market or legal power, or the operation of any
algorithms. Rather, they are to be constructed on a shared understanding among actors which is
not fixed, but negotiable. That means that, if necessary, the norms can be challenged in order to
reach a relationship of trust among individuals. On the one hand, people share aims, values or
norms, so they generate a participant stance and a sense of connectedness between each other. On
the other hand, these shared norms trigger normative expectations under which the trusting persons
should carry the risk of trusting the others. So, the key point is whether such norms about
trustworthiness are negotiable, open to challenge, and shared by all actors.
With the SCS, however, the norms of what is trustworthy and what is not are often
prescribed by the disciplinary system itself, and the disciplinary power of norms is often assured by automated penalties. Citizens can hardly participate in deciding what these norms
are or should be.
At the end of 2020, the Chinese central government published a crucial document stressing that judgments about which behavior counts as trustworthy or untrustworthy should be aligned with laws and regulations (Xinhua 2020).133 This means that with the SCS, the norms regarding trustworthiness have two
sources: laws and regulations. Many have criticized the SCS for never clearly defining what is
trustworthy, and what is not (Xin 2018; Drinhausen & Brussee 2021). That is why everything
related to trust can be lumped into the SCS framework (Zou 2021). For example, discredited
citizens will not be allowed to take high-speed trains or luxury flights; children are not allowed to
go to private schools because their parents are blacklisted. Some City Scores even take into account
whether citizens visit their elderly parents regularly or not.134 But in fact, many of those norms
about what is trustworthy are already defined by Chinese law. If people fail to repay debt, they are
not allowed to consume expensive and luxury products or use certain services. The Chinese law titled 'Protection of the Rights and Interests of the Elderly' already requires Chinese citizens to visit their elderly parents regularly.135

133 Source: the original Chinese document: http://www.gov.cn/zhengce/content/2020-12/18/content_5570954.htm An English interpretation of that document: https://www.chinadailyhk.com/article/150673 As a national Social Credit Law is expected to roll out within a decade, the SCS will become increasingly standardized, and those norms will be more closely regulated by law.
134 These norms are partly overstated by news coverage. The discredited persons are only disqualified from taking first-class seats on high-speed trains and planes. They can still take other normal-speed trains and second-class seats, which are far cheaper. Children are only prevented from attending private schools, which are often more expensive.
Nevertheless, the ambiguity of norms about what is trustworthy or not is still a fundamental
problem. Even if the law standardizes the norms, it is still highly doubtful whether such behavior should be standardized in the first place. Admittedly, many Chinese citizens support the
idea that some uncivil behaviors are worthy of regulation, such as dropping trash from high-rise
apartments, jaywalking, and playing loud music on public transportation (Global Times 2019;
Kostka 2019). The public also believes that some good deeds are to be encouraged across society,
such as sorting trash, following traffic rules, visiting parents regularly, or doing volunteer work
(Chiu 2020). But how are these actions related to one’s trustworthiness? Should they simply be
included in the SCS? These questions are still unanswered and need to be discussed publicly in order
to reach a shared understanding across the whole society.
What's more, concerns still exist that local governments can set their norms of trustworthiness in an arbitrary way. As has already been mentioned, the central government requires
that what is deemed trustworthy or not should be judged in accordance with laws and
regulations.136 Laws are relatively stable, but regulations are often temporarily set up by local
governments, and are unlikely to have been publicly decided. That leaves room for local authorities to make arbitrary use of regulations for social governance in the name of trust (Chen et al. 2018).
For instance, during the COVID-19 pandemic, many cities “blacklisted citizens for as little as not
wearing a mask" (Drinhausen & Brussee 2021, 15). In some pilot cities, behaviors such as walking dogs in parks, digging up wild vegetables, and hurting animals are included in the 'blacklist of
uncivilized behaviors’ and posted in public (Juan 2019). The question arises as to whether all those
blacklisted uncivil behaviors are really serving the interest of policing trustworthiness.
135 On December 28, 2012, the Chinese National Congress voted to pass the newly revised 'Law on the Protection of the Rights and Interests of the Elderly' (Chapter 9, Article 85), according to which family members living separately from their elderly parents should often visit or greet them. This means that visiting one's elderly parents regularly has been officially written into law. See <http://www.lawinfochina.com/display.aspx?id=12566&lib=law>
136 See <https://www.chinadailyhk.com/article/150673>
137 Retrieved from <http://www.bjreview.com/Opinion/201912/t20191209_800187137.html>

Some have criticized the 'inaction and laziness' of local authorities, who seem to want to have every regulation-breaking behavior conveniently monitored by technology-enabled enforcement (Toledo 2019).137 Automated punishments and rewards remove the possibility of
developing a more interactive trust relation between citizens and local governments. Moreover,
when trustworthiness means only conformity to regulations, and rule-breakers are seen as untrustworthy, there is also the potential danger that local authorities can use this tool to exercise strict social and political control. In an early city pilot in Suining, citizens' political stance was
even taken into account when determining city scores (Zou 2021). In Anqing, one of the model
pilot cities, a citizen was blacklisted “for ‘causing panic’ by posting a video of an ambulance taking
away a suspected COVID-19 patient” (Drinhausen & Brussee 2021, 14-15). Some also worry that
such ‘rule by trust’ can be used to restrict the freedom of speech online, and punish those who
protest against the government (Horsley 2018).

4.5 Algorithmic Trust as an Ideology

Until now, I have been arguing that the SCS, as an algorithm-powered disciplinary system, has the
potential to take the place of the open and participatory interactions that are necessary for building
a trust relation. I argued that what the SCS establishes is not trust, but objective reliance and social
compliance supported by or forced through algorithmic control. This section will proceed to show
how algorithmic trust becomes an ideology used to hide the power of algorithmic control in the
SCS. Most existing literature dealing with the SCS only focuses on its ‘hardware’: the widespread
surveillance technology, blacklisting and mechanisms, and various sanctions. Few studies look
into its ‘software’ – the propaganda and educational campaigns of trust, which are at least equally
powerful in shaping citizen behavior. In this section, I will investigate how trust propaganda can
shape people's algorithmic imaginary, making them regard algorithmic trust as a moral good,
leading them to identify themselves with the prescribed norms.
The notion of ideology has been widely used and has different meanings in different
contexts (Celikates 2006; Jaeggi 2009). I will follow a general understanding of ideology as
expressed in Habermas’s early text ‘Technology and Science as Ideology’ (1970). Habermas
points out that technocracy has become a new ideology in post-industrial society, insofar as it
enables "the elimination of the distinction between the practical and the technical" (Habermas 1970, 113) and thus occludes the need for politics (or rather makes politics disappear between
technocracy and decisionism) (Celikates & Jaeggi 2018, 263).138 Not only is technology a means
of social control, but it is also an ideology in itself: technocracy appears so efficient and useful that it can legitimate itself as a replacement for the public discourse that requires human communication and reflection. This ideology refers to a set of ideas or beliefs that justify an existing power
structure, restrict an individual’s capacity to understand social reality, and hide power relations
from open scrutiny and debate. In this section, I will argue that algorithmic trust itself can also
promote a particular ideology that hides its power of algorithmic control, and restrains people’s
ability to think of other possibilities.
As has been argued at length, instead of trust, the SCS engineers social compliance, ensured
by algorithmic control. This process of building compliance by algorithmic control can be seen as
a de-moralization of trust, since it makes ‘trust’ detached from context and real individuals, which
are necessary conditions for interactive trust. Nonetheless, such algorithmic control is largely
hidden, as the Chinese government intentionally uses propaganda and trust education to re-
moralize ‘trust’, so as to hide the true power of its algorithmic control. Packaged with the moral
imperative of trust, algorithmic control is immediately framed as a desirable thing and a universal
good. It thus promotes an ideology that renders algorithmic control unchallenged. This process can
be seen in the nationwide trust propaganda campaign used to support the SCS project.
On a national level, the central government’s Publicity Department leads a national trust
campaign called ‘A ten-thousand-mile tour for constructing trustworthiness’ (‘chengxin jianshe
wanlixing’) (Zhang 2020).139 The campaign has carried out a series of public activities to advertise
trustworthiness, and ‘educate’ citizens. The campaign has manifested itself in the form of integrity
posters, promotional videos, and the distribution of integrity-related reading books in communities,
schools, and enterprises (Zhang 2020). In Chinese cities, street posters can easily be found in
streets, parks, and bus stops, broadcasting the traditional virtue of trustworthiness, that can make
life better. Billboards have been erected in almost every Chinese community to publicize the
morality of trust and sincerity. To improve the effect of this propaganda, Credit China (an official
platform for the SCS) has called for more creative and attractive forms to publicize trust culture,
such as incorporating trust education into rap videos or Peking Opera (Credit China 2021).140 This widely disseminated propaganda seeks to cultivate a positive atmosphere of trust: "Everyone is honest, everything is honest, and there is honesty everywhere" (Hangzhou Government 2018).141 The sole purpose of building such an atmosphere is to foster public awareness of trust and instill a moral sense of integrity into the everyday life of citizens.

138 Here, I thank Robin Celikates for helping me clarify some crucial ideas of Habermas's use of technocracy and ideology.
139 Here is an official map of the trust 'tour': <https://www.creditchina.gov.cn/chengxinzhuti/>
To better understand China's trust propaganda, it can be compared to a similar trust education campaign in the United States in the 1920s and 1930s, which was also "a training of the mind and
a development of the soul” conducted to foster a morality of trust (Lauer 2017, 135). During the
1920s and 1930s, there was a national campaign of cultivating a credit culture in the United States.
The nationwide credit education tried to inculcate “a credit consciousness that changed the buying
habits” of customers, to “teach them to pay promptly” (ibid.). A national credit association leader
suggested characterizing prompt payers as ‘superior people’, whereas “the delinquent must
become infra dig, socially disapproved, tabooed” (ibid.). Lauer describes a typical campaign:

During 1927, hundreds of cities participated in a weeklong national campaign involving some 2.6 million leaflets and tens of thousands of store displays, which together delivered the message of credit responsibility to an estimated forty million people (Lauer 2017, 132).

This campaign managed to awaken consumers' sense of shame by reminding them of credit's moral nature through slogans such as 'You Are Judged by Your Credit' and 'Treat Your Credit As A Sacred Trust' (ibid., 135). During that time, an individual's character of trust was so highlighted
that it was put “at the center of the national association’s official seal” (2017, 135). If people had
bad credit, it was not merely seen as an economic failure, but as the bankruptcy of their moral
character. A Minneapolis leaflet warned that bad credit would haunt a person's entire life: "Day
after day he and his family are shamed by the refusal of merchants to give him credit. Goods are
delivered to his house C.O.D., for the neighbors to whisper about.”142

140 See <http://credit.xlgl.gov.cn/news/MjIpLyw>
141 See <http://drc.hangzhou.gov.cn/art/2018/12/26/art_1568795_28466265.html>
142 C.O.D. means cash on delivery. "Minneapolis Credit Men Tell Benefits of Prompt Payments," Credit World 9, no. 3 (November 1920): 29; quoted from Lauer's book Creditworthiness (2017), p. 132.
This national trust campaign in the United States reminds us yet again that the Chinese
SCS is not fundamentally different from the credit system in the West, in that they both focus on
the morality of trust in credit scoring systems. Unlike trust education in the United States, however,
the Chinese social credit propaganda is much broader and deeper, since it is not only launched by
retailers and credit bureaus (as in the United States) but mainly by central and local governments,
which have a more powerful influence over citizens’ everyday life in China. This all-pervasive
state power can remodel citizens (not just consumers) by training their souls on a larger scale. As John Gray points out:

[the] social credit system not only enforces norms and punishes violations of them, it also
teaches and instills them. The aim is to internalize them in population until they become
habitual. The purpose is not just to discipline the population but remodel it (Gray 2020).

In my view, trust propaganda functions as what Rainer Forst calls a ‘justificatory narrative’, a type
of storytelling that can be used to legitimize social orders. For instance, Donald Trump used the
historically saturated narrative of the ‘true American’ to “legitimize immigration bans, deportation
policies, and massive cuts to social welfare programs” (Hayward 2018, 58). When repeated, such
a narrative can be ‘condensed’ in a way that the narrative of power relations can be institutionalized
and objectified as the natural order, where individuals regard the power structure as natural,
necessary, and the only possibility (Taylor 2009, 52). When objectification happens, people accept it without critical analysis. They become internally normalized, or self-disciplined, 'into a certain frame of mind' that makes them conform to the norms even without outside enforcement (Hayward
2018, 62). As Forst suggests, in a patriarchal social order, “women may conform to patriarchal
rules, even where the patriarch leaves things implicit or is absent” (Forst 2017, 45).
Again, what the SCS builds is not trust, but only algorithmic control. When the SCS
propaganda repeatedly narrates trust as a traditional virtue that is an intrinsically moral good,
algorithmic control – packaged in the form of ‘algorithmic trust’ – is immediately recognized as a
naturally good thing. In other words, algorithmic control is legitimized as a moral imperative,
perceived as an absolute truth, where no further justifications are needed. In this light, algorithmic
trust promotes an ideology that hides the operation of algorithmic control in the SCS. This ideology
may produce an effect of cognitive conditioning, which can undermine an individual's cognitive capacity for thinking of other possible alternatives. Put differently, people may recognize algorithmic trust only as a kind of 'trust', failing to see that the power of algorithmic control is clothed in it.
This ideological effect may negatively shape people’s algorithmic imaginary and
undermine dialogical autonomy for open-minded and respectful exploration with others. As
mentioned in Chapters 2 and 3, the algorithmic imaginary is used to describe how people imagine
the inner workings of algorithms influencing their own behavior (Bucher 2017). The SCS is an
algorithmic system that serves to discipline citizen behavior through punishments and rewards.
However, under the ideology of algorithmic trust, algorithmic discipline is immediately seen as a
morally good thing without the need for further justification. People tend to more readily accept
how the SCS is tracking their social data, judging their social and political behavior, and punishing
their so-called untrustworthiness. As a result, people may blindly participate in the gamification of the SCS. Its incentive and sanction mechanisms may steer people to focus only on improving their social scores, with little care for the open-minded and respectful social interactions that build trust with others.
When a person’s critical capacity is undermined in this manner, it can also reinforce a
culture of excessive punishment and objectification. In the name of moral goodness, people may
think that some harsh and even inhumane punishments of the untrustworthy are not only suitable, but also necessary, because the untrustworthy are morally bad.

4.6 Thinking of Other Possibilities: The Value of Distrust

The ideology of algorithmic trust has the potential to impede our dialogical autonomy to openly
interact with others. But this ideology cannot be easily overcome, especially when it has become
a part of our prejudices or beliefs. To counter this ideology, we need to expose the underlying
justificatory narrative, and then think of other possibilities. As Forst (2017, 45, 49) suggests, dismantling the dominance of structural power requires offering alternative justificatory narratives
to challenge the existing ones. In this section, I will expose the underlying justificatory narrative
of trust propaganda in the SCS, and then provide some alternative narratives to undermine that
dominant one. It will be seen that the trust propaganda is mainly based on a dogmatic narrative of trust built on traditional Chinese virtues, especially those found in Confucianism. I will try to
dismantle this dominant narrative of trust by exploring some alternative thinking, supported by the
input of Zhuangzi, as well as the idea of distrust in modern society.
One crucial feature of China’s trust propaganda in the SCS is the link between trust and
Chinese traditional virtues.143 The term 'traditional virtues' is constantly referred to in officially
published documents and policies. The 2014 Outline repeatedly stresses that the critical function
of the SCS is to establish a culture of trust and of “promoting traditional virtues of honesty” (State
Council 2014).144 In Chinese trust propaganda, honesty and integrity are advertised as traditional
virtues in Chinese culture.145 When publicizing traditional virtues, the SCS propaganda deliberately selects a dogmatic understanding of trust, one in which trust is described as the inner nature of being human and its morality remains unchallenged.
For example, the trust propaganda of the SCS often quotes a sentence from the Confucian
classic Zhong Yong (or The Doctrine of the Mean):

I. Sincerity is the way of Heaven. The attainment of sincerity is the way of men. 146

Quote I illustrates that being trustworthy has been regarded as a precondition for being human. People are born to be sincere, whereas dishonesty runs contrary to human nature. So, being honest and sincere is not externally imposed by others or by society, but is rather an internal requirement of being human. Moreover, Quote I also shows that trust, connected to the order
of Heaven, is even seen as a sacred morality. Chinese folk culture offers the following wisdom: "Gods and ghosts only enjoy the food offered by honest people."147 In other words, if people are not honest, their offerings to gods and ancestors will be tainted.

143 Trust propaganda also appeals to socialist values, such as 'self-sacrifice and voluntarism' inherited from the Mao era (Zhang 2020, 579). But according to official explanations, traditional values also provide moral support for socialist values (see <https://baijiahao.baidu.com/s?id=1723966637438203537&wfr=spider&for=pc>). Therefore, the morality of trust is mainly derived from Chinese traditional virtues.
144 See <https://www.chinalawtranslate.com/en/socialcreditsystem/>
145 In ancient Chinese texts, there are many quotations about the idea of sincerity. See 'sincerity' (誠) in China's largest dictionary, the Kangxi Dictionary (康熙字典): <https://ctext.org/dictionary.pl?if=en&char=%E8%AA%A0>
146 Zhongyong is seen as one of the most important texts of Confucianism. The full text can be accessed at <https://ctext.org/liji/zhong-yong/ens> The original Chinese text is: "诚者,天之道也;思诚者,人之道也".
Quote I is often cited in official SCS propaganda, to demonstrate that trustworthiness is
morally good in itself. For example, on the official Chinese SCS website, Credit China, a published propaganda article states: "Sincerity is the way of Heaven. The attainment of sincerity is the way of men. However time flows and however the world changes… we should treat trust and sincerity as being as precious as our life."148 This narrative tries to show that trust is inherent to the nature of being human, following the rule of Heaven. In this way, trust is framed as a natural and unquestionable virtue. Hence, the SCS propaganda rests on the following justificatory narrative: in traditional Chinese culture, trust is seen as an inherently moral character; algorithmic trust is therefore also inherently morally good; and the SCS is thereby justified because it builds algorithmic trust. I would argue that this narrative of trust as inherently good morality is unsound. For the remainder of this section, I will provide and discuss three alternatives to the dominant narrative.
First, even in Confucianism, which is known for regulating society by appeal to virtue,
trust is not always seen as uncompromising. In contrast to Quote I, in The Analects of Confucius
(or Lunyu), there is another typical but subtly different point made about trust:

II. Trustworthiness is close to rightness – it ensures that people will live up to their word.149

Quote II, however, correlates a person's trustworthiness with a social condition of rightness, a notion more concerned with social equity and fairness. This means that trust does not prescribe "an absolute moral duty" for people to be trustworthy150 (Ni 2017, 91). Only when one's promise is oriented towards acts that are good for society should the promise be kept. In other words, trustworthiness is neither a prior value nor an absolute truth. Instead, it needs to be justified, and must be open to further reflection and negotiation so as to decide which acts are good and right. This reflective attitude to trust is different from the trust propaganda associated with the SCS, where trust is applauded as a universal good that neutralizes the need for justification.

147 This folk idea is derived from the ancient Chinese book Shangshu (太甲下): "The spirits do not always accept the sacrifices that are offered to them; they accept only the sacrifices of the sincere." <https://ctext.org/liji/zhong-yong/ens>
148 Here is the original Chinese text: "诚者,天之道也;思诚者,人之道也。不管岁月如何变迁,环境如何变化…….应当珍视诚信如生命。" Retrieved from <https://www.creditchina.gov.cn/chengxinwenhua/chengxindajiatan/202203/t20220314_289584.html>
149 See The Analects of Confucius, translated by Burton Watson (2007, 17). The original Chinese text is "信近于义,言可复也."
The second alternative narrative comes from Daoism. Daoism and Confucianism are seen
as “the two great religious/philosophical systems of China” (Hansen 2003).151 But compared with
Confucianism, some ideas of Daoism even openly debunk the dogmatic virtue of trustworthiness.
In the Zhuangzi, one of the two most classic texts of Daoism, there is a well-known story about a man
called Wei Sheng, who is famous for his act of keeping promises:

Wei Sheng made an engagement to meet a girl under a bridge. The girl failed to appear and
the water began to rise, but instead of leaving, he wrapped his arms around the pillar of the
bridge and died152 (Zhuangzi 2013, 257).

In the story, keeping a promise is something worth sacrificing one's life for. This noble spirit
has long been held by Confucians and their followers as a classic example to show the absolute
morality of trust. Chinese SCS propaganda also repeatedly cites Wei Sheng as an ancient model
of trustworthiness.153 However, in the eyes of Zhuangzi, not only is Wei Sheng's act of sacrifice stupid and dangerous, but the value of trust itself is also open to skepticism. He is heavily critical of the parable about Wei Sheng: "Wei Sheng died by drowning – trustworthiness was their curse" (Zhuangzi 2013, 261).154 For Zhuangzi, Wei Sheng is a victim held hostage by the dogmatic morality of trust, bound by supposedly undeniable duties that outweigh everything, including his life. Zhuangzi sees life as more fundamental than an abstract obligation of remaining trustworthy. Moral trust should not override a human's basic right to live. In this regard, trustworthiness is not a categorical moral duty, but an ordinary trait that should be openly discussed and questioned.

150 Notably, the quoted sentence was not said directly by Confucius himself, but by one of his most distinguished students, Youzi. In spite of this, the sentence reflects what Confucius thought. In a well-known recorded episode, Confucius and his students were detained by a group of people who plotted treason against the state of Wei. Those traitors agreed to release them only if Confucius swore not to go to Wei (that is, not to expose their treason). Confucius took the oath, but once released he went to Wei and told the king about those traitors. A student asked Confucius, "How can you violate the oath you just swore?" Confucius replied: "God will not listen to an oath that one is forced to swear" (Sima Qian, Records of the Grand Historian or Shiji 史记). Therefore, for Confucius, people need not keep their promises if they are not based on rightness, e.g., if people are forced to make a promise.
151 Retrieved from <https://plato.stanford.edu/entries/daoism/>
152 Alternatively, there is a source of the full text available online: the chapter "The Robber Zhi" in Zhuangzi, translated by James Legge. Source: https://ctext.org/zhuangzi/robber-zhi, retrieved on February 11, 2022.
153 Many documents for trust propaganda treat Wei Sheng as a moral model. See <https://www.ccdi.gov.cn/toutiao/201808/t20180810_177507.html>
The third narrative is the value of distrust. As was just shown, in some cases we should not
see trust as the only truth, or the only virtue, that we should stick to. At this point, I would like to
make an even bolder claim, namely that we sometimes even need to distrust others. Lacey
Davidson and Mark Satta argue that social distrust is justified for some socially disadvantaged
groups. They give the example of Black people in the United States, who live under “systemically
racist social and material conditions” (Davidson & Satta 2021, 123). They go on to explain their
position as follows:

In the weeks following the murder of George Floyd, the call to build trust between police
and communities reverberated on many news programs. Such narratives often suggest the
problem is that Black people don’t trust the police. These narratives imply that Black
people need to be more trusting and that Black people’s lack of trust is a root cause of
police violence (Davidson & Satta 2021, 123).

Davidson and Satta disapprove of this narrative, arguing that social trust can only be (re)built when
“our social norms must benefit people of all racial identities within the public”, instead of
“systematically [benefiting] some while harming others” (ibid.). Given the institutional racism
pervasive in American society, Black people’s distrust is justified.
This justified distrust can also be applied to China’s SCS project in two senses. First,
citizens are justified in holding proper social distrust, since China can in many respects be seen as
an authoritarian state that is likely to abuse its power. As mentioned above, some pilot cities have planned to incorporate blood donation into the social credit system in order to meet blood shortages in hospitals. This policy may not only result in the inaction and laziness of health authorities, but also hide the real issue behind the situation: a long-standing distrust of blood donation. People are concerned that donated blood may transmit HIV, and that their donations may be secretly traded by hospitals for profit. These concerns are not totally groundless, as similar cases have already occurred in real life (Yu et al. 2019).155 Against such a background, people's distrust is not a negative power but a transforming one, which pushes health authorities to be more transparent and accountable. Only in this way, rather than by simply transferring the government's responsibility to algorithmic enforcement, can trust between the government and citizens be rebuilt.

154 The original Chinese text is "尾生溺死,信之患也." See Zhuangzi 2013, p. 261.
Second, there is an asymmetric power structure between citizens and surveillance
capitalists/states. As John Gray shows, mass surveillance is not restricted to capital accumulation
but “can also be used simply as a tool of state power” (Gray 2021).156 By capturing every detail of
individual behavior, the ruling government can predict, nudge and even manipulate its citizens' behavior.157 China's SCS project is potentially an example of such a surveillance situation, in which the ruling state is able to accumulate unprecedented knowledge about its citizens and thus wield unprecedented power to engineer trustworthy behavior. Under this asymmetric power
structure, the distrust of people towards algorithmic control is justified. This distrust does not exist
to undermine the trusting society, but to encourage the “task of creating social conditions that merit
social trust" (Davidson & Satta 2021, 123). This distrust is indispensable in making our algorithmic society more trustworthy.158

155 For example, a research study in 2019 found that between 2010 and 2017, due to inadequate supervision, there was an increasing number of cases of voluntary blood donors who were infected with HIV (Yu et al. 2019, 3431).
156 Retrieved from <https://www.noemamag.com/surveillance-capitalism-vs-the-surveillance-state/>
157 Gray argues that the West also uses mass surveillance to shape people's behavior. However, unlike China, that neo-paternalist nudge "is enforced not by the state but by civil society – companies, universities and media outlets" (Gray 2020). Ibid.
158 I thank Robin Celikates for providing helpful comments on the value of distrust: "Habermas goes further and argues that distrust vis-à-vis state institutions is essential in a democratic Rechtsstaat as well, but as it cannot be institutionalized it has to express itself in other practices such as civil disobedience." It is very interesting to relate distrust to the idea of civil disobedience, which suggests that distrust can be justified insofar as it can be seen as a form of civil disobedience, which is necessary for a well-functioning democratic society.
4.7 Conclusion

This chapter has closely examined the phenomenon of engineering algorithmic trust in the Big
Data society. Trust is secured not by mutual interactions, but through the immediate enforcement
of punishment supported by automatic algorithms. Drawing on the framework of algorithmic
colonization, I regard this algorithmic trust as a colonization where algorithms hinder and distort
the interactive infrastructure of trust relations, restraining people’s capacity to develop trust
relations with others in a mutually interactive way. As has been argued, the algorithmic
disciplinary system tends to detach trust from real individuals, which has the potential to result in
a culture of objectification and excessive punishment. Furthermore, I have argued that the SCS
could produce a kind of social compliance, ensured by algorithmic control rather than ‘trust’ or
‘sincerity’ as is claimed. With trust propaganda in the SCS, Chinese authorities deliberately re-
moralize such algorithmic trust to hide the power of algorithmic control. Such an ideological effect
can possibly influence people’s algorithmic imaginary in a negative way, making them identify
themselves with the norms prescribed by the SCS, and restraining their capacity to think of other possibilities.
What I would propose is to motivate a more interactive trust in our algorithmic society.
Zuboff warns us that, in the Big Data society, human experiences are reduced to an “infinite present
of mere behavior, in which there can be no subjects and no projects: only objects” (Zuboff 2019,
317, emphasis in original). Our interactive trust needs to be rooted in human experience,
cultivating a culture of mutual understanding to counter the dogmatic nature of automated trust.

5 Conclusion: Towards an Open and Flexible Algorithmic Society159

5.1 A General Principle in Designing Algorithmic Systems

In the age of Big Data and AI, algorithms can be used to diagnose complex diseases, play Go, drive cars, and make novel scientific discoveries – areas where it was long supposed that specifically human cognitive abilities were needed. Some also believe that algorithms can be very successful in detecting emotions, writing poems, composing music, and engaging in love relations (Castelo et al. 2019). We can imagine that 'smart' algorithms will increasingly be used to touch even deeper and more crucial areas of our lives in the near future, where our most intimate and subjective traits will be subjected to continuous surveillance and conditioning. This continual networking effect risks increasingly reducing self-determining subjects to 'tractable, predictable citizen-consumers' subjected to a logic of profit-making or social control (Cohen 2013, 1917). The question now arises: is there any space we human
beings should preserve from the relentless expansion of algorithmic power?
My answer is yes. Based on a critical theory of algorithmic colonization, I argue for the
necessity to guard some of our crucial values against being distorted or crowded out in the
lifeworld. Nonetheless, the notion of the lifeworld in my thesis does not commit us to a sort of
historical nostalgia for a non-algorithmic or pre-algorithmic age. Nor do I mean to follow an
approach of completely removing certain social or private spheres from the influence of algorithms.
In the Big Data era, where all data are extracted and analyzed, it is extremely hard (if not impossible) for people's social or private lives to be clearly walled off from algorithms; moreover, algorithms can make our lives more convenient and enjoyable.
Hence, the lifeworld is neither a historical past nor a particular social realm (family, school, etc.). Instead, in my thesis, the lifeworld provides a flexible space where humans' capacity for communicative action is located, people's dialogical autonomy is possible, and people are able to remain open to respectful interactions and explorations with others. This space ensures that social justice, equality, freedom, and other social values can be realized and reproduced in our society through dialogues, debates, negotiations, and some necessary forms of resistance. This space also enables people to explore the meaning of intimacy and trust by interacting with others in a broadminded and non-manipulated manner. So, to preserve the lifeworld is not to keep people away from the influence of a systemic regime, but to allow them to develop their capacity for open interaction and exploration when encountering algorithmic systems.

159 This chapter is adapted partly from my articles: 1) Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency. Philosophy & Technology. 35(69): 1-25. https://doi.org/10.1007/s13347-022-00564-w; 2) Wang, H. (2022). Algorithmic Colonization of Love: The Ethical Challenges of Dating App Algorithms in the Age of AI. Techné: Research in Philosophy and Technology. [Accepted]
In this light, my theoretical framework of algorithmic colonization does suggest a general principle for building socio-technical systems: when designing an algorithmic system and its
social components, we should leave a flexible space (the lifeworld) for people to develop critical
capabilities, to freely and respectfully express themselves and interact with others. Before
explaining this general principle more closely, let’s turn to the two cases of algorithmic love and
trust already discussed in the thesis to show why such a general principle is significant.

5.2 Objectification: Revisiting Algorithmic Love and Trust

As argued in the two case studies, the communicative nature of love and trust can be
algorithmically colonized in a way that love and trust are potentially detached from real individuals.
The consequence is that people simply treat each other as objects, numbers, and products, rather
than as people who need to be openly interacted with. Because algorithms prioritize automation and decontextualization, the mutually interactive parts of love and trust are relegated to unilateral execution and automatic enforcement, which can contribute to more objectification
in our society.
In real-life dating, love relations are a flowing and dynamic process. A woman may not be
attracted to a plain-looking man, but via interaction she may discover other, more attractive internal qualities, such as humor or a sense of responsibility. In contrast, online dating
algorithms often reduce humans’ complex romantic experiences and attractions into fixed and
static attributes, like appearance, credit score, blood type, or star sign. These separate attributes are
reduced and detached from real individuals and real human interaction, becoming a kind of ‘mesh’
or threshold which is used to categorically decide who is qualified to date and who should be
immediately rejected as unqualified. As a result, the mutually interactive quality of love is reduced
to a ‘unilateral execution’, the process of which is just like a manager selecting job candidates
from their resumes. With the help of AI, such a unilateral execution tends to become even more
automatic; some AI algorithms will not only matchmake potential daters, but also automatically
decide who is the best fit.
This automated unilateral execution is often decontextualizing and dehumanizing, a
process in which people who execute algorithms see others only as objects and products that can
be arbitrarily treated. These automated love relations, which do not need any meaningful mutual
interactions, dialogue or negotiation, have the potential to promote a culture of objectification.
Tinder’s swipe algorithm, for example, is designed in such a way that one’s beauty is almost the
only deciding factor: no beauty, no further communication. A plain-looking person will tend to be
immediately swiped left and rejected, and will not be given a chance to demonstrate her or his other personal qualities and capacities. In the case of some credit-scoring dating applications, love decisions are mechanically translated into metric credit scores, and people with a low score get rejected at once, without even being asked whether their low scores are caused by factors beyond their control.
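The threshold logic described above can be sketched as follows. The attributes and cut-off values in this Python fragment are invented for illustration and do not describe any actual application.

```python
# Hypothetical sketch of threshold-based matching: fixed attributes decide
# categorically; there is no interaction in which other qualities could surface.

candidates = [
    {"name": "A", "attractiveness": 9, "credit_score": 550},
    {"name": "B", "attractiveness": 5, "credit_score": 780},
]

def unilateral_filter(pool, min_looks=7, min_credit=600):
    # One side executes the filter; the other side is never asked why a
    # score is low, or given the chance to show humor or responsibility.
    return [c for c in pool
            if c["attractiveness"] >= min_looks and c["credit_score"] >= min_credit]

print(unilateral_filter(candidates))  # [] - each rejected on one static attribute
```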
As for trust, it is inherently linked to risk and uncertainty, because the individuals involved
in trust relations often cannot confirm the character or intentions of their counterparts. As such,
trust is an interactive process, where people are willing to take on the burden of risk of being
betrayed. In the Big Data society, however, algorithms are used to establish a kind of automated 'trust relation' that does not require mutual human interaction, but instead achieves its aims through the automated enforcement of penalties (and incentives). This algorithmic trust assumes that trustworthy individuals can be immediately rewarded with benefits, while untrustworthy individuals will be punished at once. This swift enforcement is guaranteed by massive data tracking and Big Data analytics, which can instantly identify and punish the 'untrustworthy'. Facial recognition technology can, for instance, recognize a person jaywalking; the offense is then recorded in a credit profile, which in turn triggers punishments, all automatically. It is through such algorithmic control and discipline that 'trust' is established.
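Schematically, such enforcement amounts to an event-driven pipeline: detection feeds the credit profile, and the profile update itself triggers the sanction. The following sketch is hypothetical; its components, names, and thresholds are invented for illustration and do not describe any deployed system.

```python
# Schematic, hypothetical pipeline: detect -> record -> punish, with no point
# at which a human hears the person's side of the story.

from dataclasses import dataclass, field

@dataclass
class CreditProfile:
    person_id: str
    score: int = 700
    offenses: list = field(default_factory=list)

def on_camera_event(profile: CreditProfile, event: str) -> None:
    profile.offenses.append(event)  # step 1: a recognition system emits an event
    profile.score -= 20             # step 2: the event becomes a score deduction
    if profile.score < 650:         # step 3: the update itself triggers sanctions
        enforce_sanctions(profile)

def enforce_sanctions(profile: CreditProfile) -> None:
    print(f"{profile.person_id}: restrictions applied automatically")

p = CreditProfile("citizen-42")
for _ in range(3):
    on_camera_event(p, "jaywalking")  # the third event crosses the threshold
```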
This automated trust, however, is highly objectifying in the sense that algorithms pay little attention to context and real people. They calculate and categorize people based only on an individual's data, and enforce punishments or rewards in an automatic manner. This dehumanizing feature of algorithmic trust has the potential to contribute to a culture of objectification and excessive punishment, since algorithmic trust bypasses moral agency and treats people only as instrumental objects used to ensure reliance and compliance. This bypassing of moral agency makes people forget the experience (including suffering and oppression) of real human beings, and can be used, incorrectly, to justify inhumane punishments. This is the case with the Chinese Social Credit System, which is designed so harshly that 'the untrustworthy can't move an inch'.160 Under such a principle, trustworthiness is seen as an absolute good, whereas the untrustworthy are seen as absolutely and unforgivably bad. This can in turn be used to justify punishments such as public humiliation and shaming, which infringe upon human dignity.

5.3 Three Proposals

What should we do if automated love and trust can drive out meaningful interaction with each other and cause such insidious effects in the algorithmic society? A quick solution is to eradicate such systems in the first place. This idea may sound too radical and impractical, but that is not necessarily the case. In the European Union (EU), Article 5(c) of the draft 2021 Artificial Intelligence Act (hereafter AIA) explicitly puts social scoring systems driven by AI
technologies on the prohibition list. 161 The regulation clearly explains why the social scoring
systems should be banned:

AI systems providing social scoring of natural persons for general purpose by public
authorities or on their behalf may lead to discriminatory outcomes and the exclusion of
certain groups. They may violate the right to dignity and non-discrimination and the values
of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural
persons based on their social behaviour in multiple contexts or known or predicted personal
or personality characteristics. The social score obtained from such AI systems may lead to
the detrimental or unfavourable treatment of natural persons or whole groups thereof in
social contexts, which are unrelated to the context in which the data was originally
generated or collected or to a detrimental treatment that is disproportionate or unjustified
to the gravity of their social behaviour. Such AI systems should be therefore prohibited
(ibid. 21, emphasis added).

160 State Council. Shehui Xinyong Tixi Jianshe Guihua Gangyao (2014-2020 nian) [Planning Outline for the Construction of a Social Credit System (2014-2020)]. 14 June 2014. Translation online: <https://chinacopyrightandmedia.wordpress.com/2014/06/14/planning-outline-for-the-construction-of-a-social-credit-system-2014-2020/>
161 The full text of the 2021 draft can be retrieved from <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206>

This statement of direct prohibition is effective and seems to solve the problem once and for all, yet this approach is typically applied only to the most controversial algorithmic systems. For AI matchmakers or AI love coaches, it would be too costly to ban them completely, as they do not obviously infringe on the 'right to dignity' or 'the values of equality and justice' in the way social scoring algorithms do. Their popularity also means that many users benefit from using these technologies. Even in the case of China's SCS project, most Chinese citizens do not want to eliminate the system, because they believe such a system can be beneficial for the development of social trust in society as a whole (Kostka 2019, 1573). What they support is a social scoring system that is regulated by law and is not run arbitrarily by the government against citizens.
So, for most algorithmic systems, the question is not whether to eliminate them, but rather how to design them in a way that incorporates more open and respectful interaction. I will now spell out three general proposals that would facilitate such an incorporation.

5.3.1 Freedom to be Off

If some algorithmic systems can undermine humans' free and respectful interactions, we should allow people to opt out if they wish to do so. This principle can be seen as the minimum requirement when designing algorithmic systems. It does not mean eliminating algorithmic systems altogether, but rather ensuring that people can freely choose to opt in or out. This idea is what Brett Frischmann and Evan Selinger (2018, 270) call 'the freedom to be off', which means that people should be allowed to have "freedom from programming, conditioning, and control engineered by others".

A typical example is 'the right to be forgotten' enacted in the EU's data protection law. Article 17 of the General Data Protection Regulation (GDPR) in its 2012 draft clearly sets out the 'right to be forgotten and to erasure'.162 Internet users can choose to allow their data to be collected by companies in exchange for more efficient and personalized services, but if they do not want this to occur, the law ensures that users have the right to have their personal data erased from Internet searches, so that their digital tracks are no longer recorded by the online world.163 In doing so, people can determine their online life to a greater degree, and avoid being stigmatized forever because of specific actions (crimes, scandals, revenge pornography, and so on) done in the past.164
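To see what this minimal requirement could amount to in practice, consider the following sketch, which pairs the two capacities just discussed: opting out of further data collection, and erasing the data that has already been collected. All names in it are hypothetical; it is a toy illustration of the design principle, not an implementation.

```python
# A minimal sketch, under invented names, of the two capacities discussed
# above: opting out of further data collection, and erasing what has already
# been collected (in the spirit of the GDPR's right to erasure). A real
# implementation would also have to propagate erasure to backups, search
# indexes, and third-party processors, which this toy data store ignores.

user_data = {"user-7": {"profile": {"name": "A."}, "tracking_log": ["visit-1"]}}
tracking_enabled = {"user-7": True}

def opt_out(user_id: str) -> None:
    """Stop collecting new data for a user exercising the freedom to be off."""
    tracking_enabled[user_id] = False

def erase(user_id: str) -> bool:
    """Delete all stored personal data; return True if anything was removed."""
    return user_data.pop(user_id, None) is not None

opt_out("user-7")        # no new digital tracks are recorded
assert erase("user-7")   # past tracks are removed on request
```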
This proposal is particularly relevant to China's Social Credit System. The SCS is designed in such a way that, in principle, every citizen is required to be in the system, with no exceptions.165 Only when everyone is involved and socially scored can the system exercise its greatest powers (of associated punishments, for instance). This design conflicts with humans' freedom to be off in algorithmic systems. When designing the SCS, it may be justifiable for the government to encourage and convince as many citizens as possible to participate in the system through trust propaganda. But for those who insist on not joining, the SCS should respect their choices and allow them to opt out of the system freely, without fear of being punished.

162 Source <https://ec.europa.eu/info/law/law-topic/data-protection_en>
163 The final version of the GDPR provides the right to be forgotten in a more limited way than its 2012 draft. Data subjects can request the removal of their personal information online only on certain grounds (Powles 2014). Retrieved from <https://www.theguardian.com/technology/2014/may/21/what-did-the-media-miss-with-the-right-to-be-forgotten-coverage>
164 The right to be forgotten is sometimes at odds with the freedom of expression. See the critical article by Stefan Kulk and Frederik Borgesius (2014).
165 In practice, the SCS has so far not required every citizen to participate in the system, even if it encourages them to do so.

5.3.2 Designing Flexibility

Protecting humans' 'freedom to be off' is only a first step. A well-designed algorithmic system should also allow people who participate in it room for free and respectful interaction and exploration. The more room there is for such free interaction and exploration, the more flexible the algorithmic system will be. This principle is similar to Julie Cohen's notion of "semantic discontinuity". As she explains:

Semantic discontinuity is 'the opposite of seamlessness: it is a function of interstitial complexity within the institutional and technical frameworks that define information rights and obligations and establish protocols for information collection, storage, processing, and exchange.' Semantic discontinuity helps to separate contexts from one another, thereby preserving breathing room for personal boundary management and for the play of everyday practice (Cohen 2013, 1931-1932, emphasis added).

For Cohen, such semantic discontinuity not only embodies the freedom to be off but also provides necessary room for people to 'play'. The concept of play here does not mean engaging in games for fun, but rather refers to situated subjects' everyday practices. These everyday practices are playful because 'situated subjects exercise a deliberate, playful agency' to creatively challenge 'the institutional, cultural, and material constraints that people encounter' (ibid. 1910). Moreover, unexpected constraints can 'open new pathways for emergent subjectivity to explore' (ibid.). So, for Cohen, semantic discontinuity is crucial because it provides a space within inconsistent systems for people to play – a mode of 'self-making', 'self-fulfillment' and, in a broader sense, 'human flourishing' (ibid., 1911).
The concept of the lifeworld, as I have often pointed out, provides such a place for Cohen's notion of play. However, whereas Cohen's notion of play focuses on self-making, the lifeworld that I introduce, drawing on Habermas, not only allows people to freely develop and express their identities, but also leaves them room for mutual interaction to explore possibilities and pursue justice, equality and other social values. So, following the general principle drawn from the concept of the lifeworld, I propose that algorithmic systems should be designed in such a way that subjects can engage in an open-minded environment, freely and respectfully express themselves, and find their own specific meanings of intimacy, trustworthiness, and living a good life. In short, algorithmic systems should be designed more flexibly, so that people have enough space to play and to interact openly with others. This flexibility need not remain abstract, and here I would like to specify some practical details.

In the case of algorithmic love, the design of matchmaking algorithms should offer more alternative possibilities for people to choose from, rather than relying on a single attribute such as one's credit score or physical appearance. What is more, a mechanism of 'speed bumps' can be designed into AI-powered dating systems to slow down users' decision-making and to make them think and deliberate more. For example, when a user swipes left to reject someone based on the profile photo, the dating platform can pop up a dialogue window asking whether he or she is interested in reading more about the potential dater's personality and other stories. By making the user stop for a moment, this design may encourage users to consider another person's profile more fully, and to deliberate more carefully before rejecting someone.166 As for AI love coaches, as shown in Chapter 3, algorithms should be designed in a way that lets users spend less time following automated recommendations and more time listening and communicating with each other. AI love coaches should also design their automatic suggestions in a more creative and diversified manner, incorporating different ideas of dating, such as hiking dates or cooking dates.
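The speed-bump mechanism proposed above can be stated quite simply in code. The following is a minimal sketch; confirm_rejection, show_details, and the prompts are hypothetical names invented for illustration, not any existing dating app's API. The point is only that a deliberative pause can be interposed between the user's first impulse and the final decision.

```python
# A minimal sketch of the 'speed bump' proposed above. Nothing here is an
# existing app's API: confirm_rejection, show_details, and the prompts are
# hypothetical names invented for illustration.

def show_details(profile: dict) -> None:
    """Surface the parts of a profile that lie behind the photo."""
    print(profile.get("bio", "No further details available."))

def confirm_rejection(profile: dict, ask_user) -> bool:
    """Interpose a pause before a left swipe is finalized; return True
    only if the user still rejects after deliberating."""
    if ask_user(f"Would you like to read more about {profile['name']} first?"):
        show_details(profile)  # slow the decision down
    return ask_user("Still want to pass on this profile?")

# Example run with a stub that always answers 'yes' to both prompts.
still_rejected = confirm_rejection(
    {"name": "Sam", "bio": "Enjoys hiking and cooking."},
    ask_user=lambda prompt: True,
)
```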
As for the case of algorithmic trust, people's trustworthiness should not be decided by single scores, and we should encourage more mutual understanding rather than the immediate enforcement of punishments backed by automatic algorithms. More generally, democratic values should be designed into the SCS. For example, public online forums about the SCS could be established to encourage citizens to openly discuss and share their concerns about social scoring systems. The norms of what counts as trustworthy should be widely checked, discussed and challenged, in order to reach a shared understanding across the wider society, rather than being decided by governments or commercial entities alone. In this way, the generation of mutual understanding is encouraged in a robust public sphere, and people's trust in others can be grounded not in algorithmic control but in a common normative ground shared by all. Instead of transferring responsibility to automatic enforcement, a healthy and vigorous civil society is more fundamental in (re)building social trust.

166 This 'speed bump' may raise the concern that such a design is itself a kind of manipulation – manipulation into deliberation. But I use the notion of 'manipulation' in a morally objectionable sense, in which manipulators exploit the manipulees' vulnerabilities in order to benefit themselves. Here, designing a more deliberative environment for dating apps only multiplies users' choices, making the dating systems more open and flexible.

More flexibility also means that algorithmic systems should be designed less rigidly and harshly. For scoring systems, it is often the fear and anxiety of punishment that prompts subjects to discipline themselves and maximize their ratings. The more severe the punishments, the more likely people are to subject themselves to the pressure of disciplinary power. Hence, we need to consider reducing these disciplinary effects, and encourage people to spend more time on real social interactions rather than merely boosting their scores. Systems should be designed to incorporate more understanding and less punishment. When things go wrong, we may feel deeply betrayed, but we human beings often keep ascribing good intentions to others, allowing people a second chance to prove themselves. In the Big Data era, we might as well incorporate more of this sort of mutual understanding into our algorithmic society, acknowledging the diversity and subjectivity, the inconsistencies and unpredictability, inherent in our human nature.
Last but not least, a flexible algorithmic system should allow some reasonable resistance. I admit that social life would not be possible if lies and cheating were everywhere, and if such behavior could easily avoid punishment. But it would be equally impossible for social life to exist if it were just a matter of "making disobedience difficult or unattractive" (Rule 1974, 19). Automated trust tries to eliminate the possibility of untrustworthiness through automatic punishments and incentives, which aim to produce automatic compliance – a technique of control by which people come to internalize the established norms and rules. As argued in Chapter 4, some sort of distrust is sometimes not only valuable but indispensable in making our algorithmic society more trustworthy. For an algorithmic system, we need to allow some degree of distrust as a form of resistance that drives algorithms to become more responsible.

5.3.3 Transparency and Non-manipulation167

167 This section is partly adapted from the paper "Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency" (2022), which I published in Philosophy & Technology. https://doi.org/10.1007/s13347-022-00564-w

When we defer to algorithms to make choices for us in our everyday lives, we run the risk of being manipulated by those algorithms.168 Manipulation is morally objectionable because it exploits individuals' vulnerabilities and directs their behavior in ways that are likely to benefit the manipulator. A typical example is that some companies attempt to detect and exploit users' emotionally vulnerable states to sell more products and services (Petropoulos 2022; Susser et al. 2019). As argued in Chapter 3, this manipulation is also wrong because it has the potential to distort the communicative infrastructure, undermining open and respectful interaction with one another. So, my third proposal is to preserve a non-manipulative environment.
As with the governance of other algorithmic systems, a set of policies is required to ensure that algorithmic systems do not use manipulative strategies to undermine consumers' interests. Such regulations should make sure that algorithms are transparent enough for democratic examination. It seems natural to call for algorithmic transparency to counter manipulation, since algorithms are often hidden from our sight and can manipulate us silently.169 Algorithmic transparency has been so widely acclaimed in the circles of academics, professionals, regulators, and activists that making algorithms transparent seems to be a universal good (Estop 2013; Weller 2017; Springer & Whittaker 2019; Zerilli & Maclaurin et al. 2019). The EU's groundbreaking General Data Protection Regulation (GDPR) even includes a "right to explanation" regarding the inner workings of algorithmic decision-making (Kaminski 2019; Rochel 2021).
Explainable (AI) algorithms are meaningful in making our algorithmic society more democratic, and much ink has been spilled on the criteria of algorithmic transparency, on how to reveal the inner workings of an algorithm, and so on (von Eschenbach 2021; Jauernig, Uhl & Walkowitz 2022). What I want to add to this picture is that we should not simply open the black box of an algorithm without challenging the existing power relations.170 If an automated system works under an asymmetrical power structure, the information disclosure itself can become a means of manipulation used by a group of people to advance their own interests (Wang 2022).

168 It is true that a general notion of 'manipulation' is not always negative. Some may even argue that smiling or telling a joke in a conversation can be seen as a form of manipulation (Momin 2021). However, I follow the critical understanding of manipulation given by Susser et al. (2019, 3), which posits that manipulation works by "exploiting the manipulee's cognitive (or affective) weaknesses and vulnerabilities in order to steer his or her decision-making process towards the manipulator's ends." In the context of algorithmic systems, algorithms are often designed and deployed by commercial entities or governments whose main goals are to generate ever-larger profits or to exert control. When users interact with algorithm-driven systems owned by companies, for example, there is often the risk that companies manipulate "users towards specific actions that are profitable for companies, even if they are not users' first-best choices" (Petropoulos 2022).
169 The term 'algorithmic transparency' is often defined as "the disclosure of information about algorithms to enable monitoring, checking, criticism, or intervention by interested parties" (Diakopoulos & Koliska 2016, 3). To make an automated system transparent, for example, information about the inner workings of the algorithm should be provided to relevant stakeholders. In this regard, the prime concern about transparency is the quantity and quality of the disclosed information. For example, Daniel Susser proposes a 'radical transparency' that allows affected parties 'a view under the hood' in order to disclose all relevant information (Susser 2019, 5). Paul de Laat proposes a 'limited transparency' in which algorithm deployers only "open up their models to the public and provide reasons for decisions upon request" (De Laat 2019, 9). This common understanding of algorithmic transparency assumes an informational account of transparency, which centers on the role of information and describes transparency as a matter of information sharing and disclosure. As I argue, such an informational account of algorithmic transparency is not enough, since it seldom questions the power structure beneath the informational disclosure of algorithms.
Nowadays, many big technology companies (e.g. Google and Facebook) have established their own projects (e.g. Explainable AI, the What-If Tool) to make their advanced ML (machine learning) algorithms more explainable and understandable (Tsamados et al. 2022, 219). This apparent algorithmic transparency can make algorithms more explainable, but the transparency of their AI algorithms does not fundamentally reduce the risk of AI manipulation (consider the Cambridge Analytica scandal; Hu 2020, 1). The general public depends on the big techs themselves to disclose their own AI algorithms, and their algorithmic transparency does not change the existing power structure. As David Heald (2006, 35) shows, if corruption in society continues in spite of transparency, "knowledge from greater transparency may lead to more cynicism", which may in turn lead to wider corruption. In this light, algorithmic transparency can reveal the imbalance of power, but if the unequal power structure is deep-rooted and hard to change, then more transparency may in turn reinforce the existing power asymmetry. The public becomes more cynical about making algorithms transparent and may take hidden manipulative practices for granted.
Admittedly, it is often hard to challenge the asymmetrical power relations that are deeply rooted in our scored society. As Citron and Pasquale show in their discussion of the scored society, "the realm of management and business more often features powerful entities who turn individuals into ranked and rated objects" (Citron & Pasquale 2014, 3). Despite the difficulties, we should not give up challenging the unbalanced power structure and the overly centralized power of the big techs, lest algorithmic transparency itself become a manipulative strategy.

170 Many scholars have noticed that algorithmic transparency alone is not enough, and that we need to consider the power relations involved in making algorithms transparent (Matzner 2017; Ananny & Crawford 2018; De Laat 2018; Kemper & Kolkman 2018; Powell 2021). In fact, algorithmic transparency is a social process intertwined with power relations, which is much more complex than informational disclosure alone (Weller 2017; Ananny & Crawford 2018).
A possible way to reform the power asymmetry in algorithmic systems is to design them in ways that incorporate more critical agency. Instead of waiting for companies' own disclosure of their algorithms, we need to encourage more public discussion of, and education about, the manipulative risks of algorithmic systems, so that consumers become more reflective and critical about the systems that shape the choices they make. Such critical thinking on the part of the public and consumers is sometimes more significant than simply opening the black box of algorithms, because it has the potential to transform the asymmetrical power relation and make our algorithmic society more fair, equal, and open-minded.

5.4 Conclusion

Overall, my theoretical framework of algorithmic colonization can provide an important design principle for building algorithmic systems. In our algorithmic society, we should allow for the existence and proper functioning of the lifeworld. As shown, the lifeworld is not a specific realm or a specific time in history, but a flexible space for communicative action, which has the transformative power to make our society more open and participatory, as well as more just and equal. Algorithmic systems should be designed in such a way that this flexible space is well preserved, making our algorithmic society more open. In such a new society, we are allowed more sincere communication and mutual understanding of each other, instead of the unnegotiated standardization, compliance, and close-mindedness engineered by algorithmic power.

References

Adorno, T. W. (2005). Minima Moralia (E. Jephcott, Trans.). Verso. (Original work published
1951)
Adorno, T.W., & Horkheimer, M. (2016) Dialectic of Enlightenment (J. Cumming, Trans.). Verso.
(Original work published 1944)
Agrawal, A. (February 11, 2021). Exploring AI Revenue Generation Opportunities in Dating Apps.
Forbes. https://www.forbes.com/sites/forbestechcouncil/2021/02/11/exploring-ai-
revenue-generation-opportunities-in-dating-apps/?sh=2b42b42476f9
Ahmed, S. (2018). Credit Cities and the Limits of the Social Credit System. In AI, China, Russia,
and the Global Order. Wright, N.D. (editor) http://nsiteam.com/social/wp-
content/uploads/2019/01/AI-China-Russia-Global-WP_FINAL_forcopying_Edited-
EDITED.pdf#page=63
Albrechtslund, A. (2008). Online Social Networking as Participatory Surveillance. First Monday,
13(3). https://doi.org/10.5210/fm.v13i3.2142
Albrechtslund, A., & Dubbeld, L. (2005). The Plays and Arts of Surveillance: Studying
Surveillance as Entertainment. Surveillance & Society, 3(2/3): 216-221.
https://doi.org/10.24908/ss.v3i2/3.3502
Albu, O. B., & Flyverbom, M. (2016). Organizational Transparency: Conceptualizations,
Conditions, and Consequences. Business & Society, 58(2), 268-297.
Alkhaldi, N. (May 5, 2022) Emotional AI: Are Algorithms Smart Enough to Decipher Human
Emotions? https://www.iotforall.com/emotional-ai-are-algorithms-smart-enough-to-
decipher-human-emotions
Ananny, M., & Crawford, K. (2018). Seeing Without Knowing: Limitations of the Transparency
Ideal and Its Application to Algorithmic Accountability. New Media & Society, 20(3), 973-
989.
Anderson, J. (2019). Autonomy. In A. Allen (Ed.), The Cambridge Habermas Lexicon (pp 18-23).
Cambridge University Press.
Andrejevic, M. (2009). ISpy: Surveillance and Power in the Interactive Era. University Press of
Kansas.

Andrejevic, M. (2011). Surveillance and Alienation in the Online Economy. Surveillance &
Society, 8(3), 278-287. https://doi.org/10.24908/ss.v8i3.4164
Badiou, A., & Truong, N. (2012). In Praise of Love. New Press/ORIM.
Backer, L. C. (2018). Next Generation Law: Data-driven Governance and Accountability-based
Regulatory Systems in the West, and Social Credit Regimes in China. South California
Interdisciplinary law journal, 28(1), 123-172.
Banerjee, A. Duflo, E., Ghatak, M., & Lafortune, J. (2013). Marry for What? Caste and Mate
Selection in Modern India. American Economic Journal: Microeconomics, 5(2), 33–72.
DOI: 10.1257/mic.5.2.33
Bartik, A., & Nelson, S. (2016). Credit Reports as Resumes: The Incidence of Pre-employment
Credit Screening. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2759560
Baum, K., Mantel, S., Schmidt, E. et al. (2022). From Responsibility to Reason-Giving
Explainable Artificial Intelligence. Philos. Technol. 35(1), 1-30.
https://doi.org/10.1007/s13347-022-00510-w
Bauman, Z. (1989). Modernity and the Holocaust. Polity.
Bauman, Z. (2000). Liquid Modernity. Polity.
Bauman, Z. (2003). Liquid Love: On the Frailty of Human Bonds. Polity.
Bauman, Z. (2007). Consuming Life. Polity.
Bauman, Z. (2010). 44 Letters from the Liquid Modern World. Polity.
Bauman, Z., & Mazzeo, R. (2012). On Education. Polity.
Bauman, Z., & Lyon, D. (2013). Liquid Surveillance: A Conversation. Polity.
Beer, D. (2017) The Social Power of Algorithms. Information, Communication & Society. 20(1):
1-13. https://doi.org/10.1080/1369118X.2016.1216147
Bellamy F. J. & McChesney, R.W. (2014, July 1). Surveillance Capitalism: Monopoly-Finance
Capital, the Military-Industrial Complex, and the Digital Age. Monthly Review, 66(3).
https://monthlyreview.org/2014/07/01/surveillance-capitalism/
Bentham, J., & Bozovic, M. (Ed.). (1995). The Panopticon Writings. Verso.
Benjamin, R. (2019). Race after Technology: Abolitionist Tools for the New Jim Code. Polity Press.
Bevir, M. (1999). Foucault and Critique: Deploying Agency Against Autonomy. Political
Theory, 27(1), 65-84. https://www.jstor.org/stable/192161
Bijker, W. E. (1997). Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change. MIT Press.
Binns, R. (2019). Algorithmic Accountability and Public Reason. Philosophy & Technology, 31(4), 543-556. https://doi.org/10.1007/s13347-017-0263-5
Birhane, A. (2020). Algorithmic Colonization of Africa. SCRIPTed, 17(2), 389-409.
Black, J. S., & van Esch, P. (2020). AI-enabled Recruiting: What Is It and How Should a Manager
Use It? Business Horizons, 63(2), 215-226. https://doi.org/10.1016/j.bushor.2019.12.001
Blackwell, D. & Lichter, D. (2004). Homogamy Among Dating, Cohabiting, and Married
Couples. The Sociological Quarterly, 45(4), 719–737.
https://www.jstor.org/stable/4121207
Bogard, W. (1991) Discipline and Deterrence: Rethinking Foucault on the Question of Power in
Contemporary Society. The Social Science Journal, 28(3): 325-346.
https://doi.org/10.1016/0362-3319(91)90017-X
Bogard, W. (1996) The Simulation of Surveillance: Hypercontrol in Telematic Societies.
Cambridge University Press.
Bogard, W. (2012). Simulation and Post-panopticism. In K. Ball, K. Haggerty & D. Lyon
(Eds.), Routledge Handbook of Surveillance Studies (pp.30-37). Routledge.
Botsman, R. (2017). Who Can You Trust? How Technology Brought Us Together–and Why it
Could Drive Us Apart. Penguin UK.
Botsman, R. (2017, October 21). Big Data Meets Big Brother as China Moves to Rate Its
Citizens. Wired. https://www.wired.co.uk/article/chinese-government-social-credit-
score-privacy-invasion
Bowker, G. C., & Star, S. L. (2000). Sorting Things Out: Classification and Its Consequences.
MIT press.
Boyd, D. (2011). Dear Voyeur, Meet Flâneur...Sincerely, Social Media. Surveillance & Society,
8(4), 505–507. https://doi.org/10.24908/ss.v8i4.4187
Boyd, D. M., & Ellison, N. B. (2007). Social Network Sites: Definition, History, and Scholarship. Journal of Computer-mediated Communication, 13(1), 210-230. https://doi.org/10.1111/j.1083-6101.2007.00393.x
Boyne, R. (2000). Post-panopticism. Economy and Society, 29(2), 285-307.
https://doi.org/10.1080/030851400360505

Boyd, D., & Crawford, K. (2012). Critical Questions for Big Data: Provocations for a Cultural,
Technological, and Scholarly Phenomenon. Information, communication & society, 15(5),
662-679. https://doi.org/10.1080/1369118X.2012.678878
Bridle, J. (2014). The Algorithm Method: How Internet Dating Became Everyone’s Route to a
Perfect Love Match https://www.theguardian.com/lifeandstyle/2014/feb/09/match-
eharmony-algorithm-internet-dating
Bröckling, U., Krasmann, S., & Lemke, T. (2012). Governmentality: Current Issues and Future
Challenges. Routledge.
Bronowski, J. (2011). The Ascent of Man. Random House.
Broussard, M. (2020, September 8). When Algorithms Give Real Students Imaginary Grades. The
New York Times. https://www.nytimes.com/2020/09/08/opinion/international-
baccalaureate-algorithm-grades.html
Brown, C. (2008). Inequality, Consumer Credit and the Saving Puzzle. Edward Elgar Publishing.
Brown, W. (1995). States of Injury: Power and Freedom in Late Modernity. Princeton University
Press.
Browne, S. (2015). Dark Matters: On the Surveillance of Blackness. Duke University Press.
Brunon-Ernst, A. (2013). Deconstructing Panopticism into the Plural Panopticons. In A. Brunon-
Ernst (Ed.), Beyond Foucault: New perspectives on Bentham’s panopticon (pp. 17–42).
Ashgate Publishing.
Bucher, T. (2018). If... Then: Algorithmic Power and Politics. Oxford University Press.
Bucher, T. (2017). The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook
Algorithms. Information, communication & society, 20(1), 30-44.
Burton, D. (2008). Credit and Consumer Society. Routledge.
Burton, D. (2012). Credit Scoring, Risk, and Consumer Lendingscapes in Emerging Markets.
Environment and Planning, 44(1), 111-124. https://doi.org/10.1068%2Fa44150
Burrell, J. (2016). How the Machine ‘Thinks’: Understanding Opacity in Machine Learning
Algorithms. Big Data & Society, 3(1), 1-12.
https://doi.org/10.1177%2F2053951715622512
Calo, R. & Citron, D. K. (2021). The Automated Administrative State: A Crisis of
Legitimacy. Emory LJ, 70(4), 797-845.
https://scholarlycommons.law.emory.edu/elj/vol70/iss4/1/
Caluya, G. (2010). The Post-panoptic Society? Reassessing Foucault in Surveillance Studies. Social Identities, 16(5), 621-633.
https://doi.org/10.1080/13504630.2010.509565
Calzati, S. (2021). Decolonising “Data Colonialism”: Propositions for Investigating the Realpolitik of Today’s Networked Ecology. Television & New Media, 22(8), 914-929. https://doi.org/10.1177%2F1527476420957267
Caplan, R., & Boyd, D. (2016). Who Controls the Public Sphere in an Era of
Algorithms. Mediation, Automation, Power, 1-19. https://datasociety.net/wp-
content/uploads/2016/05/QuestionsAssumptions_background-primer_2016-1.pdf
Castelfranchi, C., & Falcone, R. (2010). Trust Theory: A Socio-cognitive and Computational
Model. John Wiley & Sons.
Celikates, R. (2006). From Critical Social Theory to a Social Theory of Critique: On the Critique
of Ideology After the Pragmatic Turn. Constellations, 13(1), 21-40.
Celikates, R. and Jaeggi, R. (2018) Technology and Reification: Technology and Science as
‘Ideology’ (1968). In Brunkhorst, H. et al. (Eds.) The Habermas handbook (pp. 256-
270). Columbia University Press.
Chan, N. K. (2019). The Rating Game: The Discipline of Uber’s User-Generated
Ratings. Surveillance & Society, 17(1/2), 183-190.
https://doi.org/10.24908/ss.v17i1/2.12911
Cheney-Lippold, J. (2017). Algorithms and the Making of Our Digital Selves. New York
University Press.
Chen, Q. (2020, December 24) Nine-year-old Found Liable by Court for Dead Father’s Debts and
Punished Under China’s Social Credit System When She Couldn’t Pay Up.
South China Morning Post. https://www.scmp.com/lifestyle/family-
relationships/article/3115133/nine-year-old-found-liable-court-dead-fathers-debts
Chen, Y. J., Lin, C. F., & Liu, H. W. (2018). Rule of Trust: The Power and Perils of China’s Social
Credit Megaproject. Colum. J. Asian L., 32(1): 1-36
Chiappori, P.A., Oreffice, S. & Quintana-Domeque, C. (2012). Fatter Attraction: Anthropometric
and Socioeconomic Matching on the Marriage Market. Journal of Political Economy,
120(4), 659–695. https://doi.org/10.1086/667941
Chin, C., & Robison, M. (2020, November 20). This Cuffing Season, It’s Time to Consider the Privacy of Dating Apps. Brookings. https://www.brookings.edu/blog/techtank/2020/11/20/this-cuffing-season-its-time-to-consider-the-privacy-of-dating-apps/
Chiu, K. (2020, September 8). Suzhou City Takes a Page from China’s Social Credit System with
Civility Code That Rates Citizens’ Behavior Through a Smartphone App.
South China Morning Post. https://www.scmp.com/abacus/tech/article/3100516/suzhou-
city-takes-page-chinas-social-credit-system-civility-code-rates
Christman, J. (2004). Relational Autonomy, Liberal Individualism, and the Social Constitution of
Selves. Philosophical Studies, 117(1/2), 143-164. https://www.jstor.org/stable/4321441
Citron, D. K., & Pasquale, F. A. (2014). The Scored Society: Due Process for Automated
Predictions. Washington Law Review, 89(1), 1-34.
https://digitalcommons.law.uw.edu/wlr/vol89/iss1/2
Clifford, R., & Shoag, D. (2016). “No More Credit Score”: Employer Credit Check Bans and
Signal Substitution. FRB of Boston Working Paper No. 16-10
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2836374
Cohen, J. E. (2012). Power/Play: Discussion of Configuring the Networked Self. Jerusalem Review of Legal Studies, 6(1), 137-149. https://scholarship.law.georgetown.edu/facpub/804
Cohen, J. E. (2013). What Privacy is for. Harvard Law Review, 126(7), 1904-1933.
https://harvardlawreview.org/2013/05/what-privacy-is-for/
Cohen, J.E. (2016). The Surveillance-Innovation Complex: The Irony of the Participatory Turn.
In D. Barney et al. (Eds.), The Participatory Condition. (pp.207-228). University of
Minnesota Press. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2466708
Cohen, J. L. (1995). Critical Social Theory and Feminist Critiques: The Debate with Jürgen
Habermas. In Meehan, J. (ed.) Feminists Read Habermas (pp. 57-90). Routledge.
Cook, K. S., & Santana, J. J. (2020). Trust: Perspectives in Sociology. In J. Simon (Ed.), The
Routledge Handbook of Trust and Philosophy (pp. 189-204). Routledge.
Copland, S. R. (2017). ‘Financially Sorted Kiwis’: A Critical Enquiry into Consumer Credit
Scoring Systems and Subjectivities in New Zealand (Doctoral dissertation). Retrieved
from Victoria University of Wellington Library: http://hdl.handle.net/10063/6488
Costa, S. (2005). Easy Loving: Romanticism and Consumerism in Late Modernity. Novos Estudos-CEBRAP, 1(SE), 111-124. http://socialsciences.scielo.org/scielo.php?script=sci_arttext&pid=S0101-33002005000100005
Couldry, N., & Mejias, U. A. (2019). Data Colonialism: Rethinking Big Data’s Relation to the
Contemporary Subject. Television & New Media, 20(4), 336-349.
https://doi.org/10.1177%2F1527476418796632
Couldry, N. (2017, September 23) The Price of Connection: Surveillance Capitalism.
The Conversation. https://theconversation.com/the-price-of-connection-surveillance-
capitalism-64124
Courtois, C. & Timmermans, E. (2018). Cracking the Tinder Code: An Experience Sampling
Approach to the Dynamics and Impact of Platform Governing Algorithms. Journal of
Computer-Mediated Communication, 23(1), 1–16.
https://doi.org/10.1093/jcmc/zmx001.
Crawford, K. (2021). The Atlas of AI. Yale University Press.
“Credit”. Online Etymology Dictionary. Retrieved 5 November 2019, from https://www.etymonline.com/word/credit#etymonline_v_46419
“Credit”. Cambridge Dictionary. https://dictionary.cambridge.org/dictionary/english/credit
Creemers, R. (2018) China’s Social Credit System: An Evolving Practice of Control. Available at
SSRN: https://ssrn.com/abstract=3175792 or http://dx.doi.org/10.2139/ssrn.3175792
Crook, T. (2007). Power, Privacy and Pleasure: Liberalism and The Modern Cubicle. Cultural
studies, 21(4-5), 549-569. https://doi.org/10.1080/09502380701278954
Dai, X. (2018). Toward a Reputation State: The Social Credit System Project of China. Available
at SSRN: https://ssrn.com/abstract=3193577 or http://dx.doi.org/10.2139/ssrn.3193577
Dan, H. (2013, February 18) Trust Among Chinese ‘Drops to Record Low’.
China Daily https://www.chinadaily.com.cn/china/2013-02/18/content_16230755.htm
Danaher, J. (2016). The Threat of Algocracy: Reality, Resistance and Accommodation.
Philosophy & Technology, 29(3): 245-268. https://doi.org/10.1007/s13347-015-0211-1
David, G., & Cambre, C. (2016). Screened Intimacies: Tinder and the Swipe Logic. Social Media
+ Society, 2(2), 1-11. https://doi.org/10.1177/2056305116641976
Davidson, L. J., & Satta, M. (2021). Justified Social Distrust. In K. Vallier & M. Weber
(Eds.), Social Trust (pp. 122-148). Routledge.

De Laat, P. B. (2019). The Disciplinary Power of Predictive Algorithms: A Foucauldian
Perspective. Ethics and Information Technology, 21(4), 319-329.
https://doi.org/10.1007/s10676-019-09509-y
De Laat, P. B. (2018). Algorithmic Decision-making Based on Machine Learning from Big Data:
Can Transparency Restore Accountability? Philosophy & technology, 31(4), 525-541.
https://doi.org/10.1007/s13347-017-0293-z
Delanty, G., & Harris, N. (2021). Critical Theory and The Question of Technology: The Frankfurt
School Revisited. Thesis Eleven, 166(1), 88-108.
https://doi.org/10.1177%2F07255136211002055
Deleuze, G. (1995). Negotiations: 1972-1990 (M. Joughin, Trans.). Columbia University Press. (Original work published 1990)
Deleuze, G. (1992). Postscript On the Societies of Control. October 59, 3-7.
http://www.jstor.org/stable/778828
Deleuze, G., & Guattari, F. (1988). A Thousand Plateaus: Capitalism and Schizophrenia (B.
Massumi, Trans.). University of Minnesota Press.
(Original work published 1980)
Deleuze, G. (1988). Foucault. University of Minnesota Press.
Dews, P. (Ed.) (1999). Habermas: A Critical Reader. Blackwell.
Dexe, J., Franke, U., Söderlund, K. et al. (2022). Explaining Automated Decision-making: A
Multinational Study of the GDPR Right to Meaningful Information. Geneva Pap Risk
Insur Issues Pract 47, 669–697. https://doi.org/10.1057/s41288-022-00271-9
Dholakia, N., & Zwick, D. (2001). Privacy and Consumer Agency in the Information Age: Between Prying Profilers and Preening Webcams. Journal of Research for Consumers, 1. Available at http://www.jrconsumers.com/academic_articles/issue_1/DholakiaZwick.pdf (accessed 01 April 2016).
Diakopoulos, N. (2015). Algorithmic Accountability: Journalistic investigation of computational
power structures. Digital Journalism, 3(3), 398–415.
https://doi.org/10.1080/21670811.2014.976411

Diakopoulos, N. (2018) The Algorithms Beat, http://www.nickdiakopoulos.com/wp-
content/uploads/2018/04/Diakopoulos-The-Algorithms-Beat-DDJ-Handbook-
Preprint.pdf
Diakopoulos, N. (2020). Transparency. In M. Dubber, F. Pasquale & S. Das (Eds),
Oxford Handbook of Ethics and AI (pp.197-214). Oxford University Press.
Diakopoulos, N., & Koliska, M. (2016). Algorithmic Transparency in the News Media. Digital
Journalism, 5(7), 809-828. https://doi.org/10.1080/21670811.2016.1208053
Ding, X., & Zhong, D. Y. (2021). Rethinking China’s Social Credit System: A Long Road to
Establishing Trust in Chinese Society. Journal of Contemporary China, 30(130), 630-
644. https://doi.org/10.1080/10670564.2020.1852738
Discover & Match Media Group (2017) Online Daters Say a Good Credit Score is More Attractive
Than a Fancy Car
https://www.businesswire.com/news/home/20170821005049/en/Online-Daters-Say-a-
Good-Credit-Score-is-More-Attractive-Than-a-Fancy-Car
Dobbie, W., Goldsmith-Pinkham, P., Mahoney, N., & Song, J. (2016). Bad Credit, No problem?
Credit and Labor Market Consequences of Bad Credit Reports. The Journal of Finance,
75(5), 2377-2419. https://doi.org/10.1111/jofi.12954
Dokko, J., Li, G., & Hayes, J. (2015). Credit Scores and Committed Relationships. Available at
SSRN 2667158. https://www.brookings.edu/research/credit-scores-and-committed-
relationships/
Donzelot, J. (2008). Michel Foucault and Liberal Intelligence. Economy & Society, 37(1), 115-
134.
https://doi.org/10.1080/03085140701760908
Donnelly, D. (2021) An Introduction to the China Social Credit System. Retrieved from
https://nhglobalpartners.com/china-social-credit-system-explained/
Dorsey, D. (2010). Three Arguments for Perfectionism. Noûs, 44(1), 59-79.
https://doi.org/10.1111/j.1468-0068.2009.00731.x
Douglas, C.H. (1924) Social Credit. C. Palmer.
Drinhausen, K., & Brussee, V. (2021). China’s Social Credit System in 2021: From Fragmentation
Towards Integration, MERICS China Monitor, 12. https://merics.org/en/report/chinas-
social-credit-system-2021-fragmentation-towards-integration

Drucker, S. J., & Gumpert, G. (2007). Through the Looking Glass: Illusions of Transparency and
The Cult of Information. Journal of Management Development, 26, 493-498.
Dubbeld, L. (2005). The Role of Technology in Shaping CCTV Surveillance Practices.
Information, Communication & Society, 8(1), 84–100.
https://doi.org/10.1080/13691180500067142
DuFault, B. L., & Schouten, J. W. (2020). Self-quantification and The Datapreneurial Consumer
Identity. Consumption Markets & Culture, 23(3), 290-316.
https://doi.org/10.1080/10253866.2018.1519489
Ellerbrok, A. (2011). Playful Biometrics: Controversial Technology through the Lens of Play. The
Sociological Quarterly, 52(4), 528-547.
https://doi.org/10.1111/j.1533-8525.2011.01218.x
Ellul, J. (with Merton, R.K.). (1964). The Technological Society (J. Wilkinson, Trans.). Vintage.
(Original work published 1954)
Elmer, G. (2003). A Diagram of Panoptic Surveillance. New Media & Society, 5(2), 231–247.
https://doi.org/10.1177%2F1461444803005002005
Elmer, G. (2012). Panopticon—Discipline—Control. In K. Ball, K. D. Haggerty & D. Lyon (Eds.), Routledge Handbook of Surveillance Studies (pp. 21-29). Routledge.
Epstein, A. (2015, August 5). Facebook’s New Patent Lets Lenders Reject a Loan Based on Your
Friends’ Credit Scores
Quartz. https://qz.com/472751/facebooks-new-patent-lets-lenders-reject-a-loan-based-
on-your-friends-credit-scores-but-dont-freak-out/
Erasmus, A., Brunet, T.D.P. & Fisher, E. (2021) What Is Interpretability? Philos.
Technol. 34, 833–862. https://doi.org/10.1007/s13347-020-00435-2
Estop, J. D. (2014). WikiLeaks: From Abbé Barruel to Jeremy Bentham and beyond: A Short
Introduction to the New Theories of Conspiracy and Transparency. Critical Methodologies,
14(1), 40-49.
Etzioni, A. (2010). Is Transparency the Best Disinfectant? Journal of Political Philosophy, 18,
389-404.
Eubanks, V. (2018) Automating Inequality: How High-tech Tools Profile, Police, and Punish the
Poor. St. Martin’s Press.

Evans-Pritchard, E. E. (1933) Zande Blood Brotherhood. Africa, 6(4): 369–401.
https://doi.org/10.2307/1155555
Farnden, J., Martini, B., & Choo, K. K. R. (2015). Privacy Risks in Mobile Dating Apps. arXiv
preprint arXiv:1505.02906.
Feenberg, A. (1995). Alternative Modernity. University of California Press.
Feenberg, A. (1996). Marcuse or Habermas: Two Critiques of Technology. Inquiry, 39(1), 45-70. https://doi.org/10.1080/00201749608602407
Feenberg, A. (1999). Questioning Technology. Routledge.
Feenberg, A. (2001). Democratizing Technology: Interests, Codes, Rights. The Journal of Ethics,
5(2), 177-195. https://doi.org/10.1023/A:1011908323811
Feenberg, A. (2002). Transforming Technology. Oxford University Press.
Ferguson, E., Farrell, K., & Lawrence, C. (2008). Blood Donation Is an Act of Benevolence Rather Than Altruism. Health Psychology, 27(3), 327-336. https://doi.apa.org/doi/10.1037/0278-6133.27.3.327
Ferreira, A., Oliveira, F. P., & von Schönfeld, K. C. (2022). Planning Cities Beyond Digital
Colonization? Insights from the Periphery. Land Use Policy, 114(2022), Article 105988.
https://doi.org/10.1016/j.landusepol.2022.105988
FICO Handbook. Retrieved 4 May 2020. https://www.myfico.com/credit-education-
static/doc/education/myFICO_UYFS_Booklet.pdf
Finkel, E. J., Eastwick, P., Karney, B. P., et al. (2012). Online Dating: A Critical Analysis from
the Perspective of Psychological Science. Psychological Science in the Public interest,
13(1), 3–66. https://doi.org/10.1177/1529100612436522
Flyverbom, M., Christensen, L. T., & Hansen, H. K. (2015). The Transparency–power Nexus:
Observational and Regularizing Control. Management Communication Quarterly, 29(3),
385-410.
Fottrell, Q. (2019, February 16). Your Partner’s Credit Score Could Reveal Red Flags That Have
Nothing to do with Money. Marketwatch: https://www.marketwatch.com/story/nearly-
40-of-americans-want-to-know-your-credit-score-before-dating-2016-05-03
Forst, R. (2012). The Right to Justification. Columbia University Press.
Forst, R. (2017). Normativity and Power: Analyzing Social Orders of Justification (C. Cronin, Trans.). Oxford University Press.

Foucault, M. (1977). Discipline and Punish: The Birth of the Prison. (A. Sheridan, Trans.).
Vintage Books. (Original work published 1975)
Foucault, M. (1978), The History of Sexuality, Vol. 1 (A. Sheridan, Trans.). Allen Lane.
(Original work published 1976)
Foucault, M., & Gordon, C. (Ed.). (1980). Power/Knowledge: Selected Interviews and Other
Writings 1972/1977 (C. Gordon, Trans.). Harvester Press.
Foucault, M. (1994). Two Lectures. In M. Kelly (Ed.), Critique and Power: Recasting the Foucault/Habermas Debate (pp. 17-46). MIT Press.
Franke, U. (2022). First-and Second-level Bias in Automated Decision-making. Philosophy &
Technology, 35(2), 1-20. https://doi.org/10.1007/s13347-022-00500-y
Frankel, R. (2017). Ready for Romance? A Low Credit Score Could Mean Dating Difficulties,
https://www.bankrate.com/personal-finance/credit/money-pulse-0517/
Franssen, M., Vermaas, P. E., Kroes, P., & Meijers, A. W. (Eds.). (2016). Philosophy of
Technology After the Empirical Turn. Springer International Publishing.
Fraser, N. (1985). What’s Critical about Critical Theory? The Case of Habermas and
Gender. New German Critique, 35(35), 97-131. https://doi.org/10.2307/488202
Fremstad, S., & Traub, A. (2011). Discrediting America: The Urgent Need to Reform the Nation’s
Credit Reporting Industry. Demos.
Frischmann, B., & Selinger, E. (2018). Re-engineering Humanity. Cambridge University Press.
Fuchs, C. (2017). Social Media: A Critical Introduction. Sage.
Fuchs, C. (2020). Communication and Capitalism. University of Westminster Press.
Fukuyama, F. (1996). Trust: The Social Virtues and the Creation of Prosperity. Simon and
Schuster.
Fromm, E. (2006) Man for Himself: An Inquiry into the Psychology of Ethics. Routledge. (Original
work published 1947).
Fromm, E. (1995). The Art of Loving. Thorsons.
Galič, M., Timan, T., & Koops, B.J. (2017). Bentham, Deleuze and Beyond: An Overview of
Surveillance Theories from the Panopticon to Participation. Philosophy & Technology,
30(1), 9-37. https://doi.org/10.1007/s13347-016-0219-1
Galloway, A. R. (2004). Protocol: How Control Exists After Decentralization. MIT Press.

Gandy Jr, O. H. (1993). The Panoptic Sort: A Political Economy of Personal Information (Critical
Studies in Communication and in the Cultural Industries). Westview Press, Inc.
Gane, N. (2012). The Governmentalities of Neoliberalism: Panopticism, Post-panopticism and
Beyond. The Sociological Review, 60(4), 611-634. https://doi.org/10.1111%2Fj.1467-
954X.2012.02126.x
Gellner, E. (1988) Plough, Sword, and Book: The Structure of Human History. University of
Chicago Press.
Gerber, D. A. (1982). Cutting out Shylock: Elite anti-Semitism and the Quest for Moral
Order in the Mid-nineteenth-century American Market Place. The Journal of
American history, 69(3), 615-637.
Ghosh, S. (April 14, 2017). Dating App Happn is Getting Paid Subscriptions and Use AI to
Recommend Matches. Businessinsider. https://www.businessinsider.com.au/happn-is-
getting-paid-subscriptions-and-will-use-ai-to-recommend-matches-2017-4?r=US&IR=T
Gilbert, A. S. (2018). Algorithmic Culture and the Colonization of life-worlds. Thesis
Eleven, 146(1), 87-96. https://doi.org/10.1177%2F0725513618776699
Graeber, D. (2011). Debt: The First 5,000 Years. Melville House.
Green, M. (2021). Risking Disclosure: Unruly Rhetorics and Queer (ing) HIV Risk
Communication on Grindr. Technical Communication Quarterly, 30(3), 271-284.
Habermas, J. (1970). Technology and Science as ‘Ideology’. In Toward a Rational Society
(J. Shapiro, Trans.) (pp. 81-127). Beacon Press.
(Original work published 1967)
Habermas, J. (1987). The Theory of Communicative Action, Vol 2: Lifeworld and Systems, A
Critique of Functionalist Reason (T. McCarthy, Trans.). Beacon Press
(Original work published 1981)
Habermas, J. (1990). Moral Consciousness and Communicative Action. MIT press.
Habermas, J. (1991). A Reply (J. Gaines & D. L. Jones, Trans.). In A. Honneth & H. Joas (Eds.), Communicative Action (pp. 214-264). Polity.
Habermas, J. (1991). The Structural Transformation of the Public Sphere: An Inquiry into a
Category of Bourgeois Society. MIT press.

Habermas, J. (1994). Some Questions Concerning the Theory of Power: Foucault Again. In M.
Kelly (Ed.) Critique and Power: Recasting the Foucault/Habermas Debate (pp.79–108).
MIT Press.
Habermas, J. (1996). Between Facts and Norms: Contributions to Discourse Theory of Law and
Democracy. (trans. by Rehg, W.). The MIT Press.
Habermas, J. (2001). Der interkulturelle Diskurs über Menschenrechte—vermeintliche und tatsächliche Probleme [The Intercultural Discourse on Human Rights—Supposed and Actual Problems] (C. Wei-dong, Trans.). Chinese version: 论人权的文化间性—假想的问题和现实的问题. Speech at the Chinese Academy of Social Sciences.
Habermas, J. (2014). The Future of Human Nature. John Wiley & Sons.
Haggerty, K. D., & Ericson, R. V. (2000). The Surveillant Assemblage. The British journal of
sociology, 51(4), 605-622. https://doi.org/10.1080/00071310020015280
Handley, E. and Xiao, B. (2019, January 24). China Tests Opening Up Social Credit Scores to
Social Media Platform WeChat with Debt Map.
ABC News. https://www.abc.net.au/news/2019-01-24/new-wechat-app-maps-deadbeat-
debtors-in-china/10739016
Hansen, H. K., & Flyverbom, M. (2014). The Politics of Transparency and the Calibration of
Knowledge in the Digital Age. Organization, 22(6), 872–889.
https://doi.org/10.1177%2F1350508414522315
Hangzhou Government (2018). 杭州市首届诚信建设万里行主题宣传活动 (Hangzhou’s First
Integrity Construction Miles Theme Promotion Activity),
http://drc.hangzhou.gov.cn/art/2018/12/26/art_1568795_28466265.html
Harcourt, B. E. (2015). Exposed: Desire and Disobedience in the Digital Age. Harvard University
Press.
Hardin, R. (2002). Trust and Trustworthiness. Russell Sage Foundation.
Hayward, C. R. (2018). On Structural Power. Journal of Political Power, 11(1), 56-67.
https://doi.org/10.1080/2158379X.2018.1433756
Hayward, J. (2019, March 27). China’s Social Credit System Humiliates ‘Deadbeats’ with
Embarrassing Ringtone. Breitbart. https://www.breitbart.com/national-security/2019/03/27/chinas-
social-credit-system-humiliates-deadbeats-embarrassing-ringtone/
Haywood, C. (2018). Mobile Romance: Tinder and the Navigation of Masculinity. In C. Haywood
(Ed.), Men, Masculinity and Contemporary Dating (pp. 131–166). Palgrave Macmillan.
https://doi.org/10.1057/978-1-137-50683-2_5.
Heald, D. (2006). Transparency as an Instrumental Value. In C. Hood & D. Heald (Eds.),
Transparency: The Key to Better Governance? (pp. 59-73). Oxford University Press.
Heyman, R., & Pierson, J. (2015). Social Media, Delinguistification and Colonization of lifeworld:
Changing Faces of Facebook. Social Media+ Society, 1(2), 1-11.
https://doi.org/10.1177%2F2056305115621933
Hobbes, T. (1996). Leviathan. Oxford University Press.
Holton, R. (1994). Deciding to Trust, Coming to Believe. Australasian Journal of
Philosophy, 72(1), 63-76.
Homonoff, T., O’Brien, R., & Sussman, A. B. (2019). Does Knowing Your FICO Score Change
Financial Behavior? Evidence from a Field Experiment with Student Loan Borrowers (No.
w26048). National Bureau of Economic Research.
Honneth, A. (2001). Reconstructive Social Critique with a Genealogical Reservation: On the Idea
of Critique in the Frankfurt School. Graduate Faculty Philosophy Journal, 22(2), 3-12.
https://doi.org/10.5840/gfpj200122223
Honneth, A. (2007). Rejoinder. In D. Owen & B. van den Brink (Eds.), Recognition and Power: Axel
Honneth and the Tradition of Critical Social Theory (pp. 204-226). Cambridge University
Press.
Honneth, A. (2008). Reification: A New Look at an Old Idea. Oxford University Press.
Honneth, A. (2014). Freedom’s Right (J. Ganahl, Trans.). Polity.
Hurley, M., & Adebayo, J. (2016). Credit Scoring in the Era of Big Data. Yale JL & Tech., 18(148),
148-216.
Hvistendahl, M. (2017, December 14). Inside China’s Vast New Experiment in Social Ranking.
Wired. https://www.wired.com/story/age-of-social-credit/
Hu, M. (2020). Cambridge Analytica’s Black Box. Big Data & Society, 7(2), 1-6.
https://doi.org/10.1177/2053951720938091
Hunt, A., & Wickham, G. (1994). Foucault and Law. Pluto.
Hurka, T. (1993). Perfectionism. Oxford University Press.
Hyman, L. (2011). Debtor Nation: The History of America in Red Ink. Princeton University Press.
Illouz, E. (1997). Consuming the Romantic Utopia. University of California Press.
Illouz, E. (2007). Cold Intimacies: The Making of Emotional Capitalism. Polity.
Ingram, D. (2005). Foucault and Habermas. In G. Gutting (Ed.), The Cambridge Companion to
Foucault (2nd rev. ed., pp. 240-283). Cambridge University Press.
Isenberg, B. (1991). Habermas on Foucault: Critical Remarks. Acta Sociologica, 34(4), 299-308.
Iser, M. (2017). Colonization. In H. Brunkhorst, R. Kreide, & C. Lafont (Eds.), The Habermas
Handbook (Vol. 40) (pp. 494-498). Columbia University Press.
Jaeggi, R. (2009). Rethinking Ideology. In New Waves in Political Philosophy (pp. 63-86).
Palgrave Macmillan
Jauernig, J., Uhl, M. & Walkowitz, G. (2022) People Prefer Moral Discretion to Algorithms:
Algorithm Aversion Beyond Intransparency. Philos. Technol. 35, Article 2.
https://doi.org/10.1007/s13347-021-00495-y
Jay, M. (1986). In the Empire of the Gaze: Foucault and the Denigration of Vision in Twentieth-
Century French Thought. In D.C. Hoy (Ed.) Foucault: A Critical Reader (pp. 175-204).
Blackwell.
Joe (2016, December 3). Love is an Algorithm: How Human Emotion is Translated into Maths.
Medium. https://medium.com/@joemmackenzie/love-is-an-algorithm-cf764b1eeae4
Jones, K., (2019, September 18). The Game of Life: Visualizing China’s Social Credit System.
Visual Capitalist: https://www.visualcapitalist.com/the-game-of-life-visualizing-chinas-
social-credit-system/
Juan, D. (2019, August 02). Dog Walking Banned in Beijing City Parks
https://www.chinadailyhk.com/epaper/pubs//chinadaily/2019/08/02/05.pdf
Jütten, T. (2011). The Colonization Thesis: Habermas on Reification. International Journal of
Philosophical Studies, 19(5), 701-727. https://doi.org/10.1080/09672559.2011.629672
Kaminski, M. (2019). The Right to Explanation, Explained. Berkeley Technology Law Journal,
34(1), 189-218. http://dx.doi.org/10.15779/Z38TD9N83H
Kane, C. L. (2014). Chromatic Algorithms: Synthetic Color, Computer Art, and Aesthetics After
Code. University of Chicago Press.
Kelly, M. G. E. (2015). Discipline is Control: Foucault Contra Deleuze. New Formations: A
Journal of Culture/Theory/Politics, 84, 148-162.
https://doi.org/10.3898/NewF:84/85.07.2015
Kemper, J., & Kolkman, D. (2019). Transparent to Whom? No Algorithmic Accountability
Without a Critical Audience. Information, Communication & Society, 22(14), 2081-2096.
https://doi.org/10.1080/1369118X.2018.1477967
Kenzer, R. C. (1995). Credit Ratings of Georgia Black Businessmen, 1865-1880. The Georgia
Historical Quarterly, 79(2), 425-440.
Charles, K. K., Hurst, E., & Killewald, A. (2013). Marital Sorting and Parental Wealth. Demography,
50(1), 51–70. https://doi.org/10.1007/s13524-012-0144-6
Kim, K., & Moon, S. I. (2021). When Algorithmic Transparency Failed: Controversies Over
Algorithm-Driven Content Curation in the South Korean Digital Environment. American
Behavioral Scientist, 65(6), 847-862. https://doi.org/10.1177%2F0002764221989783
Kim, P. T. (2020). Manipulating Opportunity. Virginia Law Review, 106(4), 867-935.
https://www.virginialawreview.org/articles/manipulating-opportunity/
Kitchin, R. (2014). Big Data, New Epistemologies and Paradigm Shifts. Big Data & Society, 1(1),
1-12. https://doi.org/10.1177%2F2053951714528481
Kobie, N., (2019, June 7). The Complicated Truth about China’s Social Credit System.
Wired. https://www.wired.co.uk/article/china-social-credit-system-explained
Koehn, D. (2001). Confucian Trustworthiness and the Practice of Business in China. Business
Ethics Quarterly, 11(3), 415-429. https://doi.org/10.2307/3857847
Koskela, H. (2004). Webcams, TV Shows and Mobile Phones: Empowering Exhibitionism.
Surveillance and Society, 2(2/3), 199–215. https://doi.org/10.24908/ss.v2i2/3.3374
Kossow, N., Windwehr, S., & Jenkins, M. (2021). Algorithmic Transparency and Accountability.
Transparency International Anti-Corruption Helpdesk Answer
https://knowledgehub.transparency.org/assets/uploads/kproducts/Algorithmic-
Transparency_2021.pdf
Kostka, G. (2019). China’s Social Credit Systems and Public Opinion: Explaining High Levels of
Approval. New Media & Society, 21(7), 1565–1593.
https://doi.org/10.1177%2F1461444819826402
Kotliar, D. M. (2020). Data Orientalism: On the Algorithmic Construction of the Non-Western
Other. Theory and Society, 49(5), 919-939.
https://doi.org/10.1007/s11186-020-09404-2
Kottman, P. A. (2017). Love as Human Freedom. Stanford University Press.
Kozubek, J. (2014, September 24). Love Is Not Algorithmic. The Atlantic.
https://www.theatlantic.com/technology/archive/2014/09/love-is-not-
algorithmic/380688/
Krebs, L. M., Alvarado Rodriguez, O. L., Dewitte, P., Ausloos, J., Geerts, D., Naudts, L., &
Verbert, K. (2019, May). Tell Me What You Know: GDPR Implications on Designing
Transparency and Accountability for News Recommender Systems. In Extended Abstracts
of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-6
Kroes, P., & Meijers, A. (2001). The Empirical Turn in the Philosophy of Technology. Emerald
Publishing Limited.
Kroes, P., Franssen, M., Poel, I. V. D., & Ottens, M. (2006). Treating Socio‐technical Systems as
Engineering Systems: Some Conceptual Problems. Systems Research and Behavioral
Science, 23(6), 803-814. https://doi.org/10.1002/sres.703
Kroes, P. (2012). Technical Artifacts: Creations of Mind and Matter: A Philosophy of Engineering
Design (Vol. 6). Springer Science & Business Media.
Krippner, G. R. (2017). Democracy of Credit: Ownership and the Politics of Credit Access in Late
Twentieth-century America. American Journal of Sociology, 123(1): 1-47.
https://doi.org/10.1086/692274
Kulk, S., & Borgesius, F. Z. (2014). Google Spain v. González: Did the Court Forget about
Freedom of Expression? Case C-131/12 Google Spain SL and Google Inc. v. Agencia
Española de Protección de Datos and Mario Costeja González. European Journal of Risk
Regulation, 5(3), 389-398.
Kymlicka, W. (2002). Contemporary Political Philosophy. Oxford University Press.
Lahno, B. (2020). Trust and Emotion. In J. Simon (Ed.), The Routledge Handbook of Trust and
Philosophy (pp.147-159). Routledge.
Langenbucher, K. (2020). Responsible AI‐based Credit Scoring – A Legal Framework. European
Business Law Review, 31(4), 527–572.
Langley, P. (2014). Equipping Entrepreneurs: Consuming Credit and Credit Scores. Consumption
Markets & Culture, 17(5), 448-467. https://doi.org/10.1080/10253866.2013.849592
Lauer, J. (2017). Creditworthy: A History of Consumer Surveillance and Financial Identity in
America. Columbia University Press.
Lanzing, M. (2019). The Transparent Self: A Normative Investigation of Changing Selves and
Relationships in the Age of the Quantified Self. (PhD thesis)
Larsson, B., Letell, M., and Thörn, H. (Eds.). (2012). Transformations of the Swedish Welfare
State: From Social Engineering to Governance? Palgrave Macmillan.
Lazzarato, M. (2012). The Making of the Indebted Man: An Essay on the Neoliberal Condition
(J. D. Jordan, Trans.). Semiotext(e).
Lewis, D. (2019, October 11). All Carrots and No Sticks: A Case Study on Social Credit Scores in
Xiamen and Fuzhou. Digital Asia Hub. https://www.digitalasiahub.org/2019/10/11/all-
carrots-and-no-sticks-a-case-study-on-social-credit-scores-in-xiamen-and-fuzhou/
Lee, J. Y. (2022). Dialogical Answerability and Autonomy Ascription. Hypatia, 37(1), 97-110.
Li, D. D., Chen, S., & Dharmawan, K. (2019). China’s Most Advanced Big Brother Experiment Is a
Bureaucratic Mess. Bloomberg News. Retrieved from
https://www.bloomberg.com/news/features/2019-06-18/china-social-credit-rating-flaws-
seen-in-suzhou-osmanthus-program
Liang, F., Das, V., & Kostyuk, N. (2018). Constructing a Data-driven Society: China’s Social
Credit System as a State Surveillance Infrastructure. Policy & Internet, 10(4), 415–453.
https://doi.org/10.1002/poi3.183
Li, H. (2019, June 15). The Online Dating Industry Loves Artificial Intelligence. Syncedreview.
https://syncedreview.com/2019/06/15/the-online-dating-industry-loves-artificial-
intelligence/
Lilja, M., Baaz, M., & Vinthagen, S. (2013). Exploring ‘Irrational Resistance’. Journal of
Political Power, 6(2), 201–217. https://doi.org/10.1080/2158379X.2013.809212
Liu, C. (2019). Multiple Social Credit Systems in China. Economic Sociology: The European
Electronic Newsletter, 21(1), 22–32. https://dx.doi.org/10.2139/ssrn.3423057
Loick, D. (2014). Juridification and Politics: From the Dilemma of Juridification to the Paradoxes
of Rights. Philosophy & Social Criticism, 40(8), 757-778.
Luhmann, N. (1979). Trust and Power. John Wiley & Sons
Lupton, D. (2015). Digital Sociology. Routledge.
Lutz, C., & Ranzini, G. (2017). Where Dating Meets Data: Investigating Social and Institutional
Privacy Concerns on Tinder. Social Media+ Society, 3(1), 1-12.
Lyon, D. (Ed.). (2003). Surveillance as Social Sorting: Privacy, Risk, and Automated
Discrimination. Routledge.
Lyon, D. (Ed.). (2006). Theorising Surveillance: The Panopticon and Beyond. Willan Publishing.
Lyon, D. (2007). National ID Cards: Crime-control, Citizenship and Social Sorting. Policing: A
Journal of Policy and Practice, 1(1), 111-118.
Lyon, D., Ball, K., & Haggerty, K. D. (2012). Routledge Handbook of Surveillance Studies.
Routledge.
Lyon, D. (2017). Surveillance Culture: Engagement, Exposure, and Ethics in Digital
Modernity. International Journal of Communication, 11(2017), 824-842.
https://ijoc.org/index.php/ijoc/article/view/5527
Lyon, D. (2018). The Culture of Surveillance: Watching as a Way of Life. John Wiley & Sons.
Mac Síthigh, D., & Siems, M. (2019). The Chinese Social Credit System: A Model for Other
Countries? The Modern Law Review, 82(6), 1034-1071. https://doi.org/10.1111/1468-
2230.12462
Mackenzie, C. & N. Stoljar (eds.) (2000). Relational Autonomy: Feminist Perspectives on
Autonomy, Agency, and the Social Self. Oxford University Press.
MacLeod, C. & McArthur, V. (2019). The Construction of Gender in Dating Apps: An Interface
Analysis of Tinder and Bumble. Feminist Media Studies, 19(6), 822–840.
https://doi.org/10.1080/14680777.2018.1494618.
Madsen, R. (2003). Public Sphere, Civil Society and Ethic Community. In P.C.C. Huang (Ed.),
The Discussions on the Paradigms of China Study. Social Sciences Academic Press
Magalhães, J., & Couldry, N. (2021). Giving by Taking Away: Big tech, Data Colonialism and the
Reconfiguration of Social Good. International Journal of Communication, 15(2021),
343-362. https://ijoc.org/index.php/ijoc/article/view/15995
Manokha, I. (2018). Surveillance, Panopticism, and Self-discipline in the Digital
Age. Surveillance & Society, 16(2), 219-237. https://doi.org/10.24908/ss.v16i2.8346
Mann, S., Nolan, J., & Wellman, B. (2002). Sousveillance: Inventing and Using Wearable
Computing Devices to Challenge Surveillance. Surveillance & Society, 1(3), 331-355.
https://doi.org/10.24908/ss.v1i3.3344
Marcuse, H. (1964). One-Dimensional Man. Beacon Press.
Marcuse, H. (1974). Eros and Civilization: A Philosophical Inquiry into Freud. Beacon Press.
Marcuse, H. (2004). Some Social Implications of Modern Technology. In Technology, War and
Fascism (pp. 59-86). Routledge.
Marron, D. (2007). ‘Lending by Numbers’: Credit Scoring and the Constitution of Risk Within
American Consumer Credit. Economy & Society, 36(1), 103-133.
https://doi.org/10.1080/03085140601089846
Marwick, A. (2012). The Public Domain: Surveillance in Everyday Life. Surveillance & Society,
9(4), 378–393. https://doi.org/10.24908/ss.v9i4.4342
Marx, K. (with Colletti, L.). (1975). Excerpts from James Mill’s Elements of Political Economy
(R. Livingstone & G. Benton, Trans.). In Karl Marx: Early Writings (pp. 259-278). Vintage.
(Original work published 1844)
Mathiesen, T. (1997). The Viewer Society: Michel Foucault’s ‘Panopticon’ Revisited. Theoretical
Criminology, 1(2), 215-234. https://doi.org/10.1177%2F1362480697001002003
Matsakis, L. (2019, July 29). How the West Got China's Social Credit System Wrong. Wired.
https://www.wired.com/story/china-social-credit-score-system/
Matzner, T. (2016). Beyond Data as Representation: The Performativity of Big Data in
Surveillance. Surveillance & Society, 14(2), 197-210.
Matzner, T. (2017). Opening Black Boxes Is Not Enough: Data-based Surveillance in Discipline
and Punish and Today. Foucault Studies, 23(2017): 27-45.
https://doi.org/10.22439/fs.v0i0.5340
Matzner, T. (2019). The Human is Dead–long Live the Algorithm! Human-algorithmic
Ensembles and Liberal Subjectivity. Theory, Culture & Society, 36(2), 123-144.
Matzner, T. (2022). Algorithms as Complementary Abstractions. New Media & Society, 1-17.
Mau, S. (2019). The Metric Society: On the Quantification of the Social. John Wiley & Sons.
Mayer-Schönberger, V., & Cukier, K. (2013). Big Data: A Revolution That Will Transform How
We Live, Work, and Think. Houghton Mifflin Harcourt.
McCarthy, T. (1978). The Critical Theory of Jürgen Habermas. MIT Press.
McCarthy, T. (1991). Ideals and Illusions. MIT Press.
McCarthy, T. (1998). Legitimacy and Diversity: Dialectical Reflections on Analytical Distinctions.
In M. Rosenfeld & A. Arato (Eds.), Habermas on Law and Democracy: Critical
Exchanges (pp. 115-153). University of California Press.
McGrath, J. E. (2004). Loving Big Brother: Performance, Privacy and Surveillance Space.
Routledge.
McKenzie, J. (2002). Perform or Else: From Discipline to Performance. Routledge.
McKay, C. (2020). Predicting Risk in Criminal Procedure: Actuarial Tools, Algorithms, AI and
Judicial Decision-making. Current Issues in Criminal Justice, 32(1), 22-39.
https://doi.org/10.1080/10345329.2019.1658694
McNay, L. (2009). Self as Enterprise: Dilemmas of Control and Resistance in Foucault’s The Birth
of Biopolitics. Theory, Culture & Society, 26(6), 55-77.
https://doi.org/10.1177%2F0263276409347697
Mejias, U. A., & Couldry, N. (2019). Datafication. Internet Policy Review, 8(4).
https://doi.org/10.14763/2019.4.1428
Meijer, A. (2013). Understanding the Complex Dynamics of Transparency. Public Administration
Review, 73(2013), 429-439. https://doi.org/10.1111/puar.12032
Meyer, R., (2015, September 25). Could a Bank Deny Your Loan Based on Your Facebook Friends?
The Atlantic. https://www.theatlantic.com/technology/archive/2015/09/facebooks-new-
patent-and-digital-redlining/407287/
Melendez, S. (2019, April 6). Now Wanted by Big Credit Bureaus like Equifax: Your Alternative
Data. Fastcompany. https://www.fastcompany.com/90318224/now-wanted-by-equifax-
and-other-credit-bureaus-your-alternative-data
Mellema, G. (1999). Adam B. Seligman, “The Problem of Trust” (Book Review). The Journal of
Value Inquiry, 33(2), 273-275.
Mende, M., Scott, M. L., Garvey, A. M., & Bolton, L. E. (2019). The Marketing of Love: How
Attachment Styles Affect Romantic Consumption Journeys. Journal of the Academy of
Marketing Science, 47(2), 255-273. https://doi.org/10.1007/s11747-018-0610-9
Mitcham, C., & Mackey, R. (1971). Jacques Ellul and the Technological Society. Philosophy
Today, 15(2), 102-121. https://doi.org/10.5840/philtoday197115218
Mitcham, C. (1994). Thinking Through Technology: The Path between Engineering and
Philosophy. University of Chicago Press.
MIT Technology Review (2017, October) First Evidence That Online Dating Is Changing the
Nature of Society. https://www.technologyreview.com/2017/10/10/148701/first-evidence-
that-online-dating-is-changing-the-nature-of-society/
Mittelstadt, B., Russell, C., & Wachter, S. (2018). Explaining Explanations in AI. arXiv preprint
arXiv:1811.01439. https://doi.org/10.48550/arXiv.1811.01439
Modern Suzhou. (2016). If You Want to Know Whether a Suzhou Resident Has High “Moral
Character,” Swipe “Osmanthus Points” (《现代苏州》: 想知道苏州人“人品”高低，刷刷“桂花分”).
Retrieved from https://www.sohu.com/a/117443710_349646
Momin, K. (2021, July 9). Romantic Manipulation: 15 Things Disguised as Love. Bonobology.
https://www.bonobology.com/romantic-manipulation/
Monahan, T. (2006). Surveillance and Security: Technological Politics and Power in Everyday
Life. Routledge.
Monahan, T. (2011). Surveillance as Cultural Practice. The Sociological Quarterly, 52(4), 495-
508. https://doi.org/10.1111/j.1533-8525.2011.01216.x
Morrison, E. (2016). Discipline and Desire: Surveillance Technologies in Performance.
University of Michigan Press.
Mosco, V. (2014). To the Cloud: Big Data in a Turbulent World. Paradigm Publishers.
Mouffe, C. (2013). Agonistics: Thinking the World Politically. Verso.
Muldoon, J., & Raekstad, P. (2022). Algorithmic Domination in the Gig Economy. European
Journal of Political Theory. Advance online publication.
https://doi.org/10.1177%2F14748851221082078
Muldrew, C. (1998). The Economy of Obligation: The Culture of Credit and Social Relations in
Early Modern England. Springer.
myFICO Handbook. Retrieved 4 May 2020. https://www.myfico.com/credit-education-
static/doc/education/myFICO_UYFS_Booklet.pdf
Nader, K. (2020). Dating through the Filters. Social Philosophy and Policy, 37(2), 237-248.
Newcomb, A. (2019, September 20). Tinder’s New ‘Swipe Night’ Turns Dating Into a Choose Your
Own Adventure Game. Fortune. https://fortune.com/2019/09/20/tinder-swipe-night-
dating-game/
Nietzsche, F. (with Smith, D.). (1996). On the Genealogy of Morals (D. Smith, Trans.). Oxford
University Press. (Original work published 1887)
Nissenbaum, H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life.
Stanford University Press.
Nys, T. R., & Engelen, B. (2017). Judging Nudging: Answering the Manipulation
Objection. Political Studies, 65(1), 199-214.
O’Connor, P. (2022). Coercive Visibility: Discipline in the Digital Public Arena. In P. O’Connor
& M.I. Benta (Eds.), The Technologisation of the Social: A Political Anthropology of the
Digital Machine (pp. 153-171). Routledge. https://doi.org/10.4324/9781003052678
Ohlberg, M., Ahmed, S., & Lang, B. (2017). Central Planning, Local Experiments: The Complex
Implementation of China’s Social Credit System. MERICS China Monitor, 43, 1–15.
https://merics.org/en/report/central-planning-local-experiments
O’Neil, C. (2016) Weapons of Math Destruction: How Big Data Increases Inequality and
Threatens Democracy. Broadway Books.
Origgi, G. (2020). Trust and Reputation. In J. Simon (Ed.), The Routledge Handbook of Trust and
Philosophy (pp. 88-96). Routledge.
Ortega-Esquembre, C. (2020). Social Pathologies and Ideologies in Light of Jürgen Habermas: A
New Interpretation of the Thesis of Colonisation. Humanities and Social Sciences
Communications, 7(1), 1-9.
https://doi.org/10.1057/s41599-020-00563-2
Ortega, J., & Hergovich, P. (2017). The Strength of Absent Ties: Social Integration Via Online
Dating. arXiv preprint arXiv:1709.10478. https://arxiv.org/pdf/1709.10478.pdf
Oshana, M. A. L. (1998). Personal Autonomy and Society. Journal of Social Philosophy, 29(1),
81-102. https://doi.org/10.1111/j.1467-9833.1998.tb00098.x
Packin, N. G., & Lev-Aretz, Y. (2016). On Social Credit and the Right to be Unnetworked.
Columbia Business Law Review, 2016(2), 339-425.
https://doi.org/10.7916/cblr.v2016i2.1739
Paine, R. (1970). Anthropological Approaches to Friendship. Humanitas 6(2): 139–60.
Pariser, E. (2011). The Filter Bubble: How the New Personalized Web is Changing What We Read
and How We Think. Penguin.
Parisi, L., & Comunello, F. (2020). Dating in the Time of “Relational Filter Bubbles”: Exploring
Imaginaries, Perceptions and Tactics of Italian Dating App Users. The Communication
Review, 23(1), 66-89. https://doi.org/10.1080/10714421.2019.1704111
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms that Control Money and
Information. Harvard University Press.
Patterson, O. (2018). Slavery and Social Death: A Comparative Study. Harvard University Press.
Pecora, V. P. (2002). The Culture of Surveillance. Qualitative Sociology, 25(3), 345-358.
https://doi.org/10.1023/A:1016081929646
Petersen, S. M. (2008). Loser Generated Content: From Participation to Exploitation. First
Monday, 13(3). https://doi.org/10.5210/fm.v13i3.2141
Petropoulos, G. (2022, February 2). The Dark Side of Artificial Intelligence: Manipulation of
Human Behaviour. Bruegel-Blogs. https://www.bruegel.org/blog-post/dark-side-artificial-
intelligence-manipulation-human-behaviour
Pooler, C. K. (Ed.). (1918). The Works of Shakespeare: Sonnets. Methuen
https://archive.org/details/sonnetseditedbyc00shakuoft/page/110/mode/2up
Poster, M. (1990). The Mode of Information: Poststructuralism and Social Context. University of
Chicago Press.
Powell, A. B. (2021). Explanations as Governance? Investigating Practices of Explanation in
Algorithmic System Design. European Journal of Communication, 36(4), 362-375.
https://doi.org/10.1177%2F02673231211028376
Powles, J. (2014, May 21). What Did the Media Miss With the 'Right to be Forgotten'
Coverage? The Guardian. https://www.theguardian.com/technology/2014/may/21/what-
did-the-media-miss-with-the-right-to-be-forgotten-coverage
Powles, J. (2016, May 2). Google and Microsoft Have Made a Pact to Protect Surveillance
Capitalism. The Guardian.
https://www.theguardian.com/technology/2016/may/02/google-microsoft-pact-
antitrust-surveillance-capitalism
Prat, A. (2005). The Wrong Kind of Transparency. American Economic Review, 95(3), 862-877.
Pronk, T. M., & Denissen, J. J. (2020). A Rejection Mind-set: Choice Overload in Online
Dating. Social Psychological and Personality Science, 11(3), 388-396.
Proposal for a Regulation of the European Parliament and of the Council on the protection of
individuals with regard to the processing of personal data and on the free movement of
such data (General Data Protection Regulation), European Commission, 2012,
Article 17 ‘Right to be forgotten and to erasure’.
Putnam, R.D. (1995). Bowling Alone: America’s Declining Social Capital. Journal of Democracy.
6(1): 65-78.
Reberkenny, J. (2022, May 10). From 2017 to 2020 Grindr Sold User Data to the Highest Paying
Advertising Companies. Metro Weekly. https://www.metroweekly.com/2022/05/even-
after-grindr-changed-its-data-policy-users-are-still-being-outed/
Regis, E. (2020, February 1). No One Can Explain Why Planes Stay in the Air: Do Recent
Explanations Solve the Mysteries of Aerodynamic Lift? Scientific American.
https://www.scientificamerican.com/article/no-one-can-explain-why-planes-stay-in-the-
air/
Reilly, J., Lyu, M.Y. & Robertson, M. (2021, March 30) China’s Social Credit System:
Speculation vs. Reality. The Diplomat. https://thediplomat.com/2021/03/chinas-social-
credit-system-speculation-vs-reality/
Rochel, J. (2021). Ethics in the GDPR: A Blueprint for Applied Legal Theory. International Data
Privacy Law. 11(2), 209-223.
Romele, A., Gallino, F., Emmenegger, C., & Gorgone, D. (2017). Panopticism is Not Enough:
Social Media as Technologies of Voluntary Servitude. Surveillance and Society. 15(2),
204-221. https://doi.org/10.24908/ss.v15i2.6021
Rosamond, E. (2016). All Data is Credit Data. Paragrana, 25(2), 112-124.
https://doi.org/10.1515/para-2016-0032
Rosenberg, S. (2006). Human Nature, Communication, and Culture: Rethinking Democratic
Deliberation in China and the West. In The Search for Deliberative Democracy in
China (pp. 77-111). Palgrave Macmillan.
Rosenfeld, M. J., Thomas, R. and Hausen, S. (2019). Disintermediating Your Friends: How Online
Dating in the United States Displaces Other Ways of Meeting. Proceedings of the
National Academy of Sciences, 116(36), 17753-17758.
Roessler, B. (2002). Problems with Autonomy. Hypatia, 17(4), 143-162.
https://doi.org/10.1111/j.1527-2001.2002.tb01077.x
Roessler, B. (2005). The Value of Privacy (V. Glasgow, Trans.). Polity.
(Original work published 2001)
Roessler, B. (2013). Kantian Autonomy and Its Social Preconditions on Axel Honneth’s Das
Recht Der Freiheit. Krisis, 2013(1), 14-17. http://www.krisis.eu/content/2013-1/krisis-
2013-1-04-roessler.pdf
Roessler, B. (2015). Should Personal Data be a Tradable Good? On the Moral Limits of Markets
in Privacy. In B. Roessler & D. Mokrosinska (Eds.), Social Dimensions of Privacy:
Interdisciplinary Perspectives (pp.141-161). Cambridge University Press.
Roessler, B., & Mokrosinska, D. (Eds.). (2015). Social Dimensions of Privacy: Interdisciplinary
Perspectives. Cambridge University Press.
Roessler, B. (2015). Autonomy, Self-knowledge, and Oppression. In M. Oshana (Ed.), Personal
Autonomy and Social Oppression: Philosophical Perspectives (pp. 68-84). Routledge.
Roessler, B. (2021). Autonomy: An Essay on the Life Well-lived. John Wiley & Sons.
Rule, J. B. (1974). Private Lives and Public Surveillance: Social Control in the Computer Age.
Schocken Books.
Rudder, C. (2016) Inside OkCupid: The Math of Online Dating. YouTube.
https://www.youtube.com/watch?v=m9PiPlRuy6E
Sax, M. (2021). Optimization of What? For-profit Health Apps as Manipulative Digital
Environments. Ethics and Information Technology, 23(2021), 345-361.
https://doi.org/10.1007/s10676-020-09576-6.
Sax, M., Helberger, N., & Bol, N. (2018). Health as a Means Towards Profitable Ends: mHealth
Apps, User Autonomy, and Unfair Commercial Practices. Journal of Consumer Policy,
41(2018), 103-34.
https://doi.org/10.1007/s10603-018-9374-3
Schaefer, K. (2019) The Apps of China’s Social Credit System. Retrieved from
http://ub.triviumchina.com/2019/10/long-read-the-apps-of-chinas-social-credit-system/
Schaefer, M. et al. (2013). Communicative Versus Strategic Rationality: Habermas Theory of
Communicative Action and the Social Brain. Plos one, 8(5), 1-7.
https://doi.org/10.1371/journal.pone.0065111
Scholz, T. (Ed.). (2012). Digital Labor: The Internet as Playground and Factory. Routledge.
Schutz, A., & Luckmann, T. (1973). The Structures of the Life-World, Vol. 1 (R. M. Zaner &
H. T. Engelhardt Jr., Trans.). Northwestern University Press.
Schwartz, P. and Velotta, N. (2018). Online Dating: Changing Intimacy One Swipe at a Time? In
J. Van Hook, S. M. McHale & V. King (Eds.), Families and Technology (pp. 57–88).
Springer International Publishing. https://doi.org/10.1007/978-3-319-95540-7_4.
Schwerzmann, K. (2021) Abolish! Against the Use of Risk Assessment Algorithms at Sentencing
in the U.S. Criminal Justice System. Philosophy & Technology 34(2021), 1883–1904.
https://doi.org/10.1007/s13347-021-00491-2
Seidel, J. (2015). The Game of Tinder: Gamification of Online Dating in the 21st Century. Medium.
https://medium.com/@jane_seidel/the-game-of-tinder-3c3ad575623f
Seligman, A. B. (1997). The Problem of Trust. Princeton University Press.
Silver-Greenberg, J. (2012, December 25). Perfect 10? Never Mind That. Ask Here for Her Credit
Score. The New York Times. https://www.nytimes.com/2012/12/26/business/even-cupid-
wants-to-know-your-credit-score.html
Simmel, G. (1978). The Philosophy of Money. Routledge & Kegan Paul.
Simon, B. (2002). The Return of Panopticism: Supervision, Subjection, and the New
Surveillance. Surveillance & Society, 3(1), 1-20. https://doi.org/10.24908/ss.v3i1.3317
Simon, J. (Ed.). (2020). The Routledge Handbook of Trust and Philosophy. Routledge.
Smith, A. (2018, August 30) Franken-algorithms: The Deadly Consequences of Unpredictable
Code. The Guardian. https://www.theguardian.com/technology/2018/aug/29/coding-
algorithms-frankenalgos-program-danger
Stacey, L., & Forbes, T. D. (2022). Feeling like a Fetish: Racialized Feelings, Fetishization, and
the Contours of Sexual Racism on Gay Dating apps. The Journal of Sex Research, 59(3),
372-384.
Stahl, B. C. (2007). Privacy and Security as Ideology. IEEE Technology and Society
Magazine, 26(1), 35-45. https://doi.org/10.1109/MTAS.2007.335570
Stahl, T. (2013). Habermas and the Project of Immanent Critique. Constellations, 20(4), 533-552.
https://doi.org/10.1111/1467-8675.12057
Stampler, L. (2014, February 6). Inside Tinder: Meet the Guys Who Turned Dating into an Addiction.
Time Magazine: http://time.com/4837/tinder-meet-the-guys-who-turned-dating-into-
an-addiction/
State Council (2014). Planning Outline for the Establishment of a Social Credit System (2014-
2020). English translation: https://www.chinalawtranslate.com/en/socialcreditsystem/.
Chinese version: 社会信用体系建设规划纲要（2014—2020年）,
http://www.gov.cn/zhengce/content/2014-06/27/content_8913.htm
Steele, W. R., et al. (2007). Role of Altruistic Behavior, Empathetic Concern, and Social
Responsibility Motivation in Blood Donation Behavior. Transfusion, 48, 43-54.
https://doi.org/10.1111/j.1537-2995.2007.01481.x
Sørum, H. & Presthus, W. (2021) Dude, Where’s My Data? The GDPR in Practice, From a
Consumer’s Point of View. Information Technology & People, 34(3): 912-
929. https://doi.org/10.1108/ITP-08-2019-0433
Stiglitz, J., & Weiss, A. (1981). Credit Rationing in Markets with Imperfect Information.
American Economic Review, 71(3), 393-410. https://www.jstor.org/stable/1802787
Strecker, D. (2009). The Theory of Society: The Theory of Communicative Action (1987): A
Classic of Social Theory. In H. Brunkhorst, R. Kreide & C. Lafont (Eds.), The Habermas
Handbook (pp. 360-382). Columbia University Press.
Storr, V. H., & Choi, G. S. (2019) Do Markets Corrupt Our Morals? Palgrave Macmillan.
Susser, D. (2019). Invisible Influence: Artificial Intelligence and the Ethics of Adaptive
Choice Architectures. In Proceedings of the 2019 AAAI/ACM Conference on AI,
Ethics, and Society (pp. 403-408). https://doi.org/10.1145/3306618.3314286
Susser, D., Roessler, B., & Nissenbaum, H. (2019). Online Manipulation: Hidden Influences in a
Digital World. Georgetown Law Technology Review, 4(1), 1-45.
https://dx.doi.org/10.2139/ssrn.3306006
Susskind, J. (2018). Future Politics: Living Together in a World Transformed by Tech. Oxford
University Press.
Tan, M. R. (2016) The Problem of Confucian Moral Cultivation and Its Solution: Using Ritual
Propriety to Support Rule by Law. Frontiers of Philosophy in China, 11(1), 88-103.
https://doi.org/10.3868/s030-005-016-0007-6
Tao, L. (2018). Jaywalkers Under Surveillance in Shenzhen Soon to be Punished via Text Messages.
South China Morning Post. Retrieved from
https://www.scmp.com/tech/china-tech/article/2138960/jaywalkers-under-surveillance-
shenzhen-soon-be-punished-text
Taylor, C. (2004). Modern Social Imaginaries. Duke University Press.
Taylor, C. (2007). A Secular Age. Harvard University Press.
Taylor, D. (2009). Normativity and Normalization. Foucault Studies, 7(2009), 45-63.
https://doi.org/10.22439/fs.v0i7.2636
Timan, T., & Oudshoorn, N. (2012). Mobile Cameras as New Technologies of Surveillance? How
Citizens Experience the Use of Mobile Cameras in Public Nightscapes. Surveillance &
Society, 10(2), 167–181. https://doi.org/10.24908/ss.v10i2.4440
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions about Health, Wealth, and
Happiness. Yale University Press.
Thaler, R. H., Sunstein, C. R., & Balz, J. P. (2013). Choice Architecture. In E. Shafir (Ed.),
The Behavioral Foundations of Public Policy. Princeton University Press.
Thatcher, J., O’Sullivan, D., & Mahmoudi, D. (2016). Data Colonialism through Accumulation by
Dispossession: New Metaphors for Daily Data. Environment and Planning D: Society
and Space, 34(6), 990-1006. https://doi.org/10.1177%2F0263775816633195
Toledo, R. (2019, December 19). Should Doing Good Deeds Be Part of the Social Credit System?
Retrieved from http://www.bjreview.com/Opinion/201912/t20191209_800187137.html
Trainor, S. (2015, July 22). The Long, Twisted History of Your Credit Score. Time,
https://time.com/3961676/history-credit-scores/
Trivium China (2019, September 23). Understanding China’s Social Credit System.
https://socialcredit.triviumchina.com/wp-content/uploads/2019/09/Understanding-
Chinas-Social-Credit-System-Trivium-China-20190923.pdf
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2022).
The Ethics of Algorithms: Key Problems and Solutions. AI & SOCIETY, 37(1), 215-230.
Tsosie, C. (2019, February 6). What Landlords Really Look for in a Credit Check. Nerdwallet.
https://www.nerdwallet.com/blog/finance/landlords-credit-check/
Tuffley, D. (2021, January 17). Love in the Time of Algorithms: Would You Let Artificial
Intelligence Choose Your Partner? The Conversation. https://theconversation.com/love-in-
the-time-of-algorithms-would-you-let-artificial-intelligence-choose-your-partner-152817
Tufekci, Z. (2014). Engineering The Public: Big Data, Surveillance and Computational
Politics. First Monday, 19(7). https://doi.org/10.5210/fm.v19i7.4901
Vallier, K., & Weber, M. E. (Eds.). (2021) Social Trust. Routledge.
Van Dijck, J. (2013). The Culture of Connectivity: A Critical History of Social Media. Oxford
University Press.
Van Dijck, J., Poell, T., & De Waal, M. (2018). The Platform Society: Public Values in a
Connective World. Oxford University Press.
Verbeek, P. P. (2011). Moralizing Technology: Understanding and Designing the Morality of
Things. University of Chicago Press.
Varian, H. R. (2014). Beyond Big Data. Business Economics, 49(1), 27–31.
https://econpapers.repec.org/RePEc:pal:buseco:v:49:y:2014:i:1:p:27-31
von Blomberg, M. (2018). The Social Credit System and China’s Rule of Law. In O. Everling
(Ed.), Social Credit Rating (pp. 111-137). Springer Gabler. Available
at SSRN: https://ssrn.com/abstract=3877623
von Eschenbach, W.J. (2021). Transparency and the Black Box Problem: Why We Do Not Trust
AI. Philosophy & Technology, 34(2021), 1607–1622. https://doi.org/10.1007/s13347-
021-00477-0
Wade, S. (2018). China’s Predictive Policing and ‘Digital Totalism’. Retrieved from
https://chinadigitaltimes.net/2018/03/xinjiangs-predictive-policing-chinas-digital-
totalitarianism/
Waitt, G. R. (2005). Doing Discourse Analysis. In I. Hay (Ed.), Qualitative Research Methods in
Human Geography (pp. 163-191). Oxford University Press.
Wang, H. (2022). Transparency as Manipulation? Uncovering the Disciplinary Power of
Algorithmic Transparency. Philosophy & Technology. 35(69), 1-25.
https://doi.org/10.1007/s13347-022-00564-w
Wang, Y., & Minzner, C. (2015). The Rise of the Chinese Security State. The China Quarterly,
222(2015), 339-359. https://doi.org/10.1017/S0305741015000430
Wasilewski, K. (2020) When Technology Meets Ideology. First Monday. 25(8)
https://doi.org/10.5210/fm.v25i8.10817
Watson, B. (2007). The Analects of Confucius. Columbia University Press
Watson, D., Klohnen, E. C., Casillas, A. et al. (2004). Match Makers and Deal Breakers: Analyses
of Assortative Mating in Newlywed Couples. Journal of Personality, 72(5), 1029–1068.
https://psycnet.apa.org/doi/10.1111/j.0022-3506.2004.00289.x
Weiss, Y., & Willis, R. (1997). Match Quality, New Information, and Marital Dissolution.
Journal of Labor Economics, 15(1), S293–S329. http://www.jstor.org/stable/2535409
Weller, A. (2017). Challenges for Transparency. In ICML Workshop on Human Interpretability
in Machine Learning (WHI). https://doi.org/10.48550/arXiv.1708.01870
Westphal, M. (1998). Commanded Love and Moral Autonomy. Kierkegaard Studies Yearbook,
1998, 1-22. https://doi.org/10.1515/9783110244007.1
Westlund, A. C. (2011). Autonomy, Authority, and Answerability. Jurisprudence 2 (1), 161–79.
Wilf, E. (2015, April 15). What World? Whose Algorithms? Public Books.
https://www.publicbooks.org/what-world-whose-algorithms/
Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121-136.
http://www.jstor.org/stable/20024652
Wu, S. (2021, June 22) Machine Learning Data: Do You Really Have Rights to Use It?
https://www.airoboticslaw.com/blog/machine-learning-data-do-you-really-have-rights-
to-use-it
Xavier, M. A., Ferreira, F. A., & Esperança, J. P. (2021). An Intuition-based Evaluation
Framework for Social Credit Applications. Annals of Operations Research, 296(1),
571-590. DOI: 10.1007/s10479-018-2995-8
Xinhua. (2020, November 27). China Moves to Improve Social Credit System.
https://www.chinadailyhk.com/article/150673
Xu, X.V. and Xiao, B. (2018) China’s Social Credit System Seeks to Assign Citizens Scores,
Engineer Social Behaviour. Retrieved from https://www.abc.net.au/news/2018-03-
31/chinas-social-credit-system-punishes-untrustworthy-citizens/9596204
Yang, F. (2022). Habermas, Foucault and the Political-legal Discussions in China: A Discourse
on Law and Democracy. Springer Nature.
Yar, M. (2003). Panoptic Power and the Pathologisation of Vision: Critical Reflections on the
Foucauldian Thesis. Surveillance and Society, 1(3), 254-271.
https://doi.org/10.24908/ss.v1i3.3340
Yeung, K. (2017). ‘Hypernudge’: Big Data as a Mode of Regulation by Design. Information,
Communication & Society, 20(1), 118-136.
https://doi.org/10.1080/1369118X.2016.1186713
Yu, Y., Xu, J., & Li, M. (2019). Prevalence of HIV Infection Among Chinese Voluntary Blood
Donors During 2010‐2017: An Updated Systematic Review and Meta‐
analysis. Transfusion, 59(11), 3431-3441.
Zarsky, T. Z. (2016). The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine
Efficiency and Fairness in Automated and Opaque Decision Making. Science, Technology,
& Human Values, 41(1), 118-132. https://doi.org/10.1177%2F0162243915605575
Završnik, A. (2021). Algorithmic Justice: Algorithms and Big Data in Criminal Justice
Settings. European Journal of Criminology, 18(5), 623-642.
https://doi.org/10.1177%2F1477370819876762
Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Transparency in Algorithmic and
Human Decision-Making: Is There a Double Standard? Philosophy &
Technology, 32(2019), 661–683. https://doi.org/10.1007/s13347-018-0330-6
Zhang, C. (2020). Governing (through) Trustworthiness: Technologies of Power and
Subjectification in China’s Social Credit System. Critical Asian Studies, 52(4), 565-588.
https://doi.org/10.1080/14672715.2020.1822194
Zhao, Y. R. (2017). The Courts’ Active Role in the Striving for Judicial Independence in China.
Frontiers of Law in China, 12(2), 278.
Zheng, Y.N. (2019, Dec. 17) How to Build a Society of Trust in China? Retrieved from
https://www.thinkchina.sg/how-build-society-trust-china
Zhou, C. and Xiao, B. (2020) China’s Social Credit System is Pegged to be Fully Operational by
2020 — but What will it Look Like? Retrieved from https://www.abc.net.au/news/2020-
01-02/china-social-credit-system-operational-by-2020/11764740
Zhuangzi (2013) The Complete Works of Zhuangzi (B. Watson, Trans.). Columbia University
Press.
Zou, S. (2021). Disenchanting Trust: Instrumental Reason, Algorithmic Governance, and
China’s Emerging Social Credit System. Media and Communication, 9(2), 140-149.
https://doi.org/10.17645/mac.v9i2.3806
Zuboff, S. (2015). Big Other: Surveillance Capitalism and the Prospects of an Information
Civilization. Journal of Information Technology, 30(1), 75–89.
https://doi.org/10.1057%2Fjit.2015.5
Zuboff, S. (2016, March 5). Google as a Fortune Teller: The Secrets of Surveillance Capitalism.
Frankfurter Allgemeine Zeitung. https://www.faz.net/aktuell/feuilleton/debatten/the-
digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for the Future at the New
Frontier of Power. Profile Books.
Zuboff, S. and Laidler, J. (2019). High Tech is Watching You. Retrieved from
https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-
capitalism-is-undermining-democracy/
Zureik, E., Stalker, L. H., Lyon, D., & Smith, E. (2010). Surveillance, Privacy and Globalization
of Personal Information. McGill–Queen’s University Press.
Summary

ALGORITHMIC COLONIZATION:
Automating Love and Trust in the Age of Big Data

Algorithms play increasingly crucial roles in mediating our daily lives. We often defer to
algorithms to decide what news to read, which movies to watch, and what food to eat; sometimes
we even ask algorithms to determine whom to trust, befriend, and date. These algorithms make
our choices in an increasingly efficient and automatic manner: based on Big Data analytics and
AI, they learn our habits and automatically recommend services deemed suitable for us. On the
one hand, such automatic algorithms can liberate us from some once ‘difficult’ parts of our lives,
turning them into matters that demand less time and thought. On the other hand, this tendency to
‘automate’ our everyday life and our social interactions risks eroding our active agency and our
meaningful interactions with others, reducing us to technical ‘cogs’ in algorithmic systems. This
dilemma is the central topic of my thesis.
To analyze this dilemma in depth, the thesis develops a critical theory approach
based on Habermas’s concepts of the system and the lifeworld, together with his colonization
thesis, in order to understand how algorithms constantly intrude into our lifeworld and how this
intrusion raises profound ethical challenges for our social relations and for the development of
subjectivity, a ‘self’. The main point is that love and trust, which involve a participatory stance
toward the other person, are essentially interactions shared through reasons as well as passion and
emotion. I argue, however, that algorithmic systems have the potential to crowd out our free
interactions and explorations with others in relations of love and trust, which may foster a culture
of objectification in the algorithmic society. As a result, people are prompted to treat each other
as objects, numbers, and products to be manipulated for profit or control, rather than as equal
individuals to be engaged openly and respectfully. Such objectification may contribute to the
insidious effect of ignoring the suffering of real human beings and may be used to justify
excessive punishments (e.g., the public shaming and humiliation in China’s SCS project) in the
algorithmic society.
The thesis is structured into two parts, with four chapters in total. Part I – the first two
chapters – establishes a theoretical framework of algorithmic colonization by combining a
reformulated version of Habermas’s colonization thesis with a critical theory of technology and
with surveillance studies of digital societies. Part II comprises the third and fourth chapters, which
apply the framework of algorithmic colonization to critically evaluate two real-world cases of
algorithmic systems: AI-powered matchmaking systems and China’s Social Credit System. In so
doing, I explore how algorithms are shaping our most intimate experiences of love as well as
the crucial role of trust in democratic societies.
Chapter 1 starts with an introduction to Habermas’s colonization thesis, describing how
the system, mediated by money and power, differs from the lifeworld – the regime driven
by mutual interaction. Crucially, the system can develop a momentum of its own and intrude
on the lifeworld, in a manner similar to a domino effect: if the media of money and power can
effectively drive the development of the system, then these systemic mediation processes will create
a strong tendency to steer all activities, including the communicative actions of the lifeworld.
This constant intrusion of systemic logic into the lifeworld can lead to what Habermas calls the
system’s colonization of the lifeworld. Notably, I do not follow Habermas’s historical notion of
the lifeworld, which has been widely criticized for romanticizing the historical past. Instead, I
take the lifeworld in a normative sense, as a potential for communicative action and the open
exploration of social values. In this light, the colonization of the lifeworld is wrong not because
some historical prior is infringed upon, but because such open-minded and respectful exploration
in the lifeworld is threatened by the systemic logic of unnegotiated standardization and
objectification.
Based on this reformulation of Habermas’s colonization thesis, I recover a Habermasian
critique of technology. Against the common criticism that Habermas’s notion of technology is non-
social and essentialist, I argue that the colonization thesis can be read as a critique of technological
rationality that undermines the potential for open-minded interactions and explorations of social
values. I also point out a limitation of the colonization thesis as a critique of technology: it
describes how technological rationality erodes mutual communication, leading to a series of
social pathologies, yet Habermas says little about how specific technologies shape humans’
communicative processes. To fully develop the critical potential of the colonization thesis, I
propose to incorporate surveillance studies in order to see how particular technologies mediate
the communicative actions of people’s open interactions and explorations. To illustrate this point,
I examine a classic example of surveillance – Foucault’s disciplinary surveillance – to show how
surveillance studies can be incorporated into the colonization thesis.
Chapter 2 shows that Foucault's disciplinary theory has itself been challenged in the digital
society. To apply the colonization thesis to the digital society, and especially to a Big Data society,
we therefore need to reformulate Foucault's disciplinary theory to accommodate the new conditions
characteristic of that society. I consider three typical arguments against such a Foucauldian disciplinary mode of surveillance: the
argument from fluid institutions, the argument from the data double, and the argument from risk-based
control. I then provide three responses to show that these three arguments are not wholly
convincing. I maintain that Foucault's disciplinary theory is still significant for understanding digital
surveillance, even if its form of 'panopticism' must be exchanged for a more flexible notion of
what I call 'algorithmic discipline' in the algorithmic society. This chapter suggests that algorithmic
discipline can be used to hinder and distort open-minded and respectful human interactions, which
can result in the significant problem of 'algorithmic colonization'.
In Chapter 3, I present the case of 'smart' matchmakers to show how algorithmic discipline
can colonize humans' most intimate relations. I argue that the algorithmic colonization of love can
restrain people's capacity for the free and open exploration of intimate relations in the cultural, social,
and personal domains. In the cultural realm, AI dating algorithms closely surveil users' everyday lives
in order to learn about and automatically decide on a so-called perfect match, which requires no mutual
interaction, dialogue, or negotiation. This automated love has the potential to promote a culture of
objectification, in which the mutually interactive quality of human love is relegated to automated
sorting, selection, and 'unilateral execution'. In the social realm, the possible exploitation and manipulation of users'
intimate data can distort the interactive infrastructure on which intimate relations with
others are built, eroding people's intimate interactions. As for personal identity, I show how dating
algorithms tend to match people based on similarities, which may trap self-expression and
interaction with others in a filter bubble. Unconsciously, users are gradually confined to a
conditioned space where they passively follow their pre-existing preferences and are prevented
from actively and fully expressing their identities.
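
To make this mechanism concrete, the following is a minimal, purely illustrative Python sketch of similarity-based matching, written for this summary. It is not taken from the thesis or from any real dating app, whose ranking systems are proprietary; the feature vectors, names, and functions are all hypothetical. The structural point it illustrates is that a matcher ranking candidates by similarity to a user's inferred preferences will, by construction, keep surfacing near-clones of the user and filter dissimilar others out of view.

# Hypothetical sketch of similarity-based matching (not any real app's code).
# Profiles are feature vectors; the matcher ranks candidates by cosine
# similarity to the user, so the "perfect match" is always whoever already
# resembles the user the most.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_matches(user, candidates):
    # Sort candidates from most to least similar; dissimilar profiles sink
    # to the bottom and are never seen or negotiated with.
    return sorted(candidates,
                  key=lambda c: cosine_similarity(user, c["features"]),
                  reverse=True)

user = [0.9, 0.1, 0.8]  # tastes inferred from surveilled behavior (invented values)
candidates = [
    {"name": "A", "features": [0.88, 0.12, 0.79]},  # near-clone of the user
    {"name": "B", "features": [0.10, 0.90, 0.20]},  # different, filtered away
]
print([c["name"] for c in rank_matches(user, candidates)])  # ['A', 'B']

On this logic, the filter bubble is not a malfunction but the optimization target itself: the user's pre-existing preferences are fed back as the sole criterion of a 'good' match.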
In Chapter 4, I examine a second case, China's Social Credit System (hereafter SCS), in a
broader context to show how algorithms colonize humans' trust relations – a crucial element of
the lifeworld. I argue that China's SCS project is better seen as a way of engineering trust
through data and algorithms across the entire society than as a mere instrument of
totalitarian surveillance, as media coverage commonly suggests. Moreover, I show that such
an engineering of trust represents a new phenomenon of 'automated trust' in the Big Data era,
in which trust relations are maintained not through mutual human interactions but through the automatic
tracking, prediction, and penalties ensured by algorithmic control. This algorithmic trust amounts to
an algorithmic colonization of trust, in which algorithms undermine the interactive infrastructure of
trust relations and restrain people's ability to interact mutually with others while developing those
relations.
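
As a schematic illustration of what 'automated trust' means in practice, consider the following deliberately simplified Python sketch. It does not reproduce the SCS's actual scoring rules, which are not public and vary across local pilots; the baseline, weights, and threshold are invented for illustration only. What it shows is the logic under critique: trust reduced to a running score, updated by automatically tracked events and enforced by automatic penalties, with no mutual human interaction anywhere in the loop.

# Hypothetical sketch of 'automated trust' (NOT the actual SCS algorithm;
# all numbers and event names are invented). Trust becomes a score that
# tracked events adjust and a threshold enforces.
BASELINE = 1000
WEIGHTS = {
    "paid_bill_on_time": +5,
    "traffic_violation": -20,
    "missed_debt_payment": -50,
}

def update_score(score, tracked_events):
    # Each surveilled event adjusts the score; the person is never consulted.
    for event in tracked_events:
        score += WEIGHTS.get(event, 0)
    return score

def enforce(score):
    # Penalties fire automatically once a threshold is crossed.
    if score < 950:
        return "blacklisted: travel and credit restrictions applied"
    return "trustworthy: no restrictions"

score = update_score(BASELINE, ["traffic_violation", "missed_debt_payment"])
print(score, "->", enforce(score))  # 930 -> blacklisted: ...

On this logic, the 'trusted' and the 'blacklisted' are products of a threshold comparison, not of anyone's lived relationship with the person being scored.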
Using the case of China's SCS as an example, I examine how its credit-scoring algorithm
tends to detach trust from real individuals, which can contribute to a culture of objectification and
excessive punishment. I also explain why the SCS does not establish a moral sense of 'trust', as it
claims to, but only a form of social compliance ensured by algorithmic control. The Chinese
authorities, however, use nationwide trust propaganda to deliberately re-moralize such algorithmic trust,
which can conceal the actual power operation of the algorithms. Algorithmic trust thereby carries an
ideological effect, negatively shaping people's algorithmic imaginary and how they identify
themselves within the SCS. To reinvent more interactive and open-minded trust relations, I propose an
alternative narrative of distrust to challenge the official, dogmatic propaganda of trust as an
absolute moral good.
In the Conclusion, I suggest that my theoretical framework of algorithmic colonization can
be understood as a general design principle for engineering practices. The lifeworld is necessary
and pivotal because it provides a flexible space in which humans can keep up open and
respectful interactions with others and explore the possibilities of social justice and social
values together. Algorithmic systems should therefore be designed so that such a flexible space is
properly preserved, encouraging more mutual understanding, diversity, and meaningful interaction
in our algorithmic society. The force driven by the technical imperatives of automation, efficiency,
and profitability may "disregard social norms and nullify the elemental rights" required for people
to live a good life (Zuboff 2019, 18). Open-minded and respectful mutual interaction, however,
can help us rediscover the value of humanity, where people come into contact with real individuals and
share sympathy and mutual understanding for the acts and sufferings of others.

Samenvatting (Summary in Dutch)

ALGORITHMIC COLONIZATION:
Automating Love and Trust in the Age of Big Data

Algorithms play an increasingly crucial role in shaping and steering our daily lives. More and
more often, we let an algorithm decide for us which news we read, which films we watch, and what
we eat; sometimes we even ask algorithms to determine for us whom we can trust, with whom we
should make friends, and whom we should date. Through the intervention of such algorithms, we
make our choices ever more efficiently and in an almost automated fashion: on the basis of Big Data
analysis and AI, our habits can be fathomed and the services deemed suitable for us can be
recommended automatically. On the one hand, these automatically operating algorithms can take
over tasks that we experience as tedious, leaving us time and energy for other things. On the other
hand, such a tendency to 'automate' our daily lives and social interactions carries the risk that we
lose our active agency and our meaningful interactions with others, reducing ourselves to technical
'cogs' in algorithmic systems. This dilemma is at the heart of my dissertation.
To arrive at an in-depth analysis of this dilemma, this dissertation develops a critical
theory approach, based on Habermas's concepts of the system and the lifeworld and on the
colonization thesis, in order to examine critically how algorithms intrude into our lifeworld
and how this intrusion poses serious ethical challenges to our social relations and to the
development of subjectivity, a 'self'. The most important thing to note here is that relations of love
and trust, which involve a participatory stance toward another person, are in essence interactions
in which rationality as well as passions and emotions play a role. I argue, however, that algorithmic
systems tend to crowd out our free interactions and explorations with others in relations of love and
trust, which can lead to a culture of objectification in the algorithmic society. As a result, people
are prompted to treat one another as objects, numbers, and products that can be manipulated for
profit or control, rather than as equal individuals who should be dealt with in an open and respectful
manner. Such objectification can contribute to the insidious effect of ignoring the suffering of real
people of flesh and blood, and can moreover justify excessive punishments (for example, the public
shaming and humiliation of people in China's social credit project) in the algorithmic society.
The dissertation is divided into two parts and comprises four chapters in total. Part I –
consisting of the first two chapters – sets out a theoretical framework of algorithmic colonization
that builds on a version of Habermas's colonization thesis reformulated for the digital society,
on a critical theory perspective on technology, and on perspectives from surveillance studies.
Part II – consisting of the third and fourth chapters – applies the theoretical framework of
algorithmic colonization to critically evaluate two real-world applications of algorithmic systems:
AI-powered matchmaking systems and China's Social Credit System. In doing so, I investigate how
algorithms shape our most intimate experiences of love, as well as the crucial role that trust plays
in democratic societies.
Chapter 1 begins with an introduction to Habermas's colonization thesis, describing how
the system world, structured by money and power, differs from the lifeworld – the regime driven
by mutual interactions. An important observation is that the system world can build up enough
momentum – through a kind of domino effect – to intrude into the lifeworld: if the media of money
and power can effectively steer the development of the system world, such systemic mediation
processes will have a strong tendency to steer all activities, including communicative action in
the lifeworld. This constant intrusion of systemic logic into the lifeworld can lead to what
Habermas calls the colonization of the lifeworld by the system. It should be noted, however, that
I do not adopt Habermas's historical conception of the lifeworld, since it is widely criticized as
an exaggerated romanticization of the past. Instead, I consider the lifeworld in a normative sense,
as a potential for communicative action and an open exploration of social values. Seen in this
light, the colonization of the lifeworld is wrong not because a historical ideal is undermined, but
because open-minded and respectful explorations in the lifeworld are threatened by the systemic
logic of non-negotiable standardization and objectification.
This reformulation of Habermas's colonization thesis enables me to rediscover a
Habermasian critique of technology. Contrary to the frequently heard criticism that Habermas's
notion of technology is non-social and essentialist, I argue that the colonization thesis can be
seen as a critique of a technological rationality that undermines the potential for open-minded
interactions and explorations of social values. I also point out that the colonization thesis, as a
basis for a critique of technology, has an important limitation. The colonization thesis describes
how technological rationality hollows out mutual communication, leading to a series of social
pathologies. Yet Habermas seems to say little about the way in which specific technologies
influence interpersonal communicative processes. To exploit the critical potential of the
colonization thesis to the full, I propose to bring surveillance studies into the analysis, so as to
see how particular technologies can shape people's possibilities for communicative action. To
illustrate this point, I examine a classic example of surveillance – Foucault's disciplinary
surveillance – to show how surveillance studies can be incorporated into the colonization thesis.
Chapter 2 shows that Foucault's theory of discipline has itself been put to the test in the
digital society. In order to apply the colonization thesis to the digital society – especially a Big
Data society – we therefore need to reformulate Foucault's theory of discipline to take account of
the new, crucial conditions peculiar to the digital society. I consider three typical arguments
against a Foucauldian perspective on disciplinary surveillance: the argument from fluid
institutions, the argument from the data double, and the argument from risk-based control. I then
discuss three responses to these common objections to Foucault, in order to show that they are
not insurmountable. I stress that Foucault's theory of discipline remains relevant for understanding
digital surveillance, provided we are willing to exchange Foucault's 'panopticism' for a more
flexible notion of what I call 'algorithmic discipline' in the algorithmic society. This chapter
suggests that algorithmic discipline can be used to hinder and distort open-minded and respectful
human interactions, which can lead to the major problem of 'algorithmic colonization'.
In Chapter 3, I discuss the case of 'smart' matchmakers to show how algorithmic
discipline can colonize people's most intimate relations. I argue that the algorithmic colonization
of love can undermine people's capacity for a free and open exploration of intimate relations in
the cultural, social, and personal domains. In the cultural domain, AI dating algorithms closely
monitor users' everyday lives in order to learn about and automatically decide on the so-called
perfect match, for which no mutual interaction, dialogue, or negotiation between people is needed.
Such automated love has the potential to foster a culture of objectification, in which the interactive
quality of love between people is degraded to automated sorting, selection, and 'unilateral
execution'. The possible exploitation and manipulation of users' intimate data can disrupt the
interactive infrastructure used in building intimate relations with others, thereby eroding people's
intimate interactions. As regards personal identity, I show how dating algorithms tend to match
people on the basis of similarities, as a result of which self-expression and interactions with others
can become trapped in a filter bubble. Users are thus unconsciously and gradually confined to a
conditioned space in which they passively follow their pre-existing preferences and are prevented
from actively and fully giving shape to their own identities.
In Chapter 4, I examine another case, namely China's Social Credit System (hereafter
SCS), to show in a broader context how algorithms colonize the trust relations between people –
a crucial component of the lifeworld. I argue that the Chinese SCS project is better seen as a way
of artificially creating trust across the entire society by means of algorithms than as merely an
instrument of totalitarian surveillance, as the SCS project is often criticized for being in the media.
Moreover, I show that such an artificial creation of trust is a new phenomenon that we may call
'automated trust' in the Big Data era, an era in which trust relations are maintained not through
mutual human interactions but through algorithmic control consisting of the automatic tracking,
prediction, and punishment of people. This algorithmic trust is a kind of algorithmic colonization
of trust, in which algorithms undermine the interactive infrastructure of trust relations and curtail
the possibility of developing trust relations through joint action.
Building on the case of the Chinese SCS, I examine how credit-scoring algorithms can
detach trust from real people of flesh and blood, as a result of which a culture of objectification
and excessive punishment can take shape. I also explain why the SCS does not create 'trust' in
the moral sense of the word, as it claims to do, but only a kind of social compliance guaranteed
by algorithmic control. The Chinese authorities, however, use nationwide trust propaganda
campaigns in an attempt to 're-moralize' algorithmic trust. The consequence is that the actual
workings of algorithmic power can be hidden further from view. Algorithmic trust thereby carries
an ideological effect within it, because it can negatively influence people's imaginaries of
algorithms – as well as the extent to which they identify with the SCS. To reinvent the possibility
of more interactive and open-minded trust relations, I propose an alternative narrative of distrust
with which to question the official, dogmatic propaganda of trust as an absolute moral good.
In the conclusion, I propose that my theoretical framework of algorithmic colonization
can also be understood as a more general design principle for practices of technology
development. The lifeworld is necessary and crucial because it offers a flexible public space in
which people are enabled to engage with others in an open and respectful way, so as to explore
together the possibilities of social justice and social values. Algorithmic systems should therefore
be designed in such a way that this flexible public space is preserved, in order to promote more
mutual understanding, diversity, and meaningful interaction in our algorithmic society. The
requirements of automation, efficiency, and profitability imposed by technology generate a force
that "[may] disregard social norms and nullify the elemental rights" that people need in order to
lead a good life (Zuboff 2019, 18). Open and respectful interactions, however, can help us
rediscover the value of our shared humanity, so that people come into contact with real
individuals and share sympathy and mutual understanding for the acts and sufferings of others.

Author Contributions

Introduction
This chapter is based on three articles of mine:
Wang, H. Algorithmic Colonization: A New Critical Approach in Understanding the Algorithmic
Society (To be submitted)
Wang, H. (2022). Transparency as Manipulation? Uncovering the Disciplinary Power of
Algorithmic Transparency. Philosophy & Technology. 35(69): 1-25.
https://doi.org/10.1007/s13347-022-00564-w
Wang, H. (2022). Algorithmic Colonization of Love: The Ethical Challenges of Dating App
Algorithms in the Age of AI. Techné: Research in Philosophy and Technology. [Accepted]

Chapter 1 Colonization of the Lifeworld: A Critical Theory Approach


This chapter is adapted from two articles of mine:
Wang, H. Technological Colonization: Rediscovering Habermas’s Critical Theory of Technology
(In preparation)
Wang, H. (2022). Transparency as Manipulation? Uncovering the Disciplinary Power of
Algorithmic Transparency. Philosophy & Technology. 35(69): 1-25.
https://doi.org/10.1007/s13347-022-00564-w

Chapter 2 Disciplinary Power in the Algorithmic Society


This chapter is based on two articles of mine:
Wang, H. (2022). Transparency as Manipulation? Uncovering the Disciplinary Power of
Algorithmic Transparency. Philosophy & Technology. 35(69): 1-25.
https://doi.org/10.1007/s13347-022-00564-w
Wang, H. Algorithmic Discipline: How Disciplinary Power Works in the Age of Big Data (To be
submitted)

Chapter 3 The Algorithmic Colonization of Love Life


This chapter is based on three articles of mine:

Wang, H. (2022). Algorithmic Colonization of Love: The Ethical Challenges of Dating App
Algorithms. Techné: Research in Philosophy and Technology. (Accepted subject to minor
revisions)
Wang, H. AI Manipulation of Love? Governing AI-driven Coaches in Surveillance Capitalism (In
preparation)
Wang, H. (2022). Transparency as Manipulation? Uncovering the Disciplinary Power of
Algorithmic Transparency. Philosophy & Technology. 35(69): 1-25.
https://doi.org/10.1007/s13347-022-00564-w

Chapter 4 The Algorithmic Colonization of Trust


This chapter is based on three articles of mine:
Wang, H. Trust as an Ideology? Algorithmic Society and the Manipulative Nature of China’s
Social Credit System (To be submitted)
Wang, H. Automating Trust: How Algorithms Shape Trust Relations in the Big Data Society (To
be submitted)
Wang, H. (2022). Transparency as Manipulation? Uncovering the Disciplinary Power of
Algorithmic Transparency. Philosophy & Technology. 35(69): 1-25.
https://doi.org/10.1007/s13347-022-00564-w

Conclusion: Towards an Open and Flexible Algorithmic Society


This chapter is partly based on my article:
Wang, H. (2022). Transparency as Manipulation? Uncovering the Disciplinary Power of
Algorithmic Transparency. Philosophy & Technology. 35(69): 1-25.
https://doi.org/10.1007/s13347-022-00564-w

List of Publications and Grant(s)

Peer-reviewed journal articles

Wang, H. (2022). Transparency as Manipulation? Uncovering the Disciplinary Power of
Algorithmic Transparency. Philosophy & Technology. 35(69): 1-25.
https://doi.org/10.1007/s13347-022-00564-w
Wang, H. (2022). Algorithmic Colonization of Love: The Ethical Challenges of Dating App
Algorithms in the Age of AI. Techné: Research in Philosophy and Technology. [Accepted]
Liu, Y. & Wang, H. (2018). Zhuangzi's Ecological Politics: An Integration of Humanity,
Nature, and Power. Environmental Ethics, 40(1), 21-39.
Lim, D., & Wang, H. (2014). Can Mary’s Qualia Be Epiphenomenal? Res Philosophica, 91(3),
503-512.

Research Grant(s)

2023 – 2024:
I won a seed grant from Human(e) AI and the Civic AI Lab, together with Marijn Sax from the Law
School at the University of Amsterdam (UvA). The grant is supported by the UvA under the theme of
'responsible digital transformations', one of the four central societal themes formulated in the UvA's
strategic plan for 2021-2026. Our project is titled "Transparent AI for Profit? Gaming AI Systems as a
Commercial Service".

Acknowledgments

Six years ago, I thought doing a PhD would be a journey, an adventure, but when I lived and
breathed it, I found it was also a war, a battle. It was a fight against time, the system, the
knowledge… and above all else, against myself. But thankfully, I survived and stayed sane:

I came… to write my acknowledgments now, the last words of my PhD life. After this, a new
chapter of my life is about to begin. I will soon call myself a Doctor (not that kind of…), with a
big smile and shiny teeth. Yes, I will have a (hopefully) fancier name: Dr Wang.

I saw… in the mirror that some streaks of my hair have survived, like exhausted soldiers
who never surrender. While I was battling my thesis, my hair was fighting against the
fleeting time. Luckily, my thesis ran a little faster than my falling hair.

I conquered… my thesis, a two-hundred-page book. I sometimes feel sorry for the woods and
grass that sacrificed themselves to become my book, because I am not sure whether my work has
contributed anything to society or simply thrown one more piece of rubbish into the ocean of academia.
If you have one of my books, my dear friend, it is not meant to spread knowledge. Winter is coming;
if the energy price is high, you could burn that book to keep yourself a little bit warm…

The PhD may be the hardest battle I have ever fought. Yet I was so fortunate, because I never had
to fight this war by myself. I THANK you all, my supervisors, colleagues, friends, and
family! Without you, I might have become cannon fodder and fallen in the ashes, alone.

My deepest gratitude goes to Beate Roessler, my supervisor as well as my friend. Thank you for your
unending support in making my research possible. Without your kind invitation, I might never have
come to Amsterdam in the first place, and I would have missed so many beautiful stories that happened
in this beautiful city. I also appreciate your honesty and quick responses. Whenever you found
problems in my PhD, you did not hesitate to point them out and provide me with constructive
suggestions. I can imagine how much you suffered through countless poor drafts of my
thesis. Many thanks for not giving up on me, and for always choosing to believe in my vision and
trust my character, giving me a second chance. I was deeply moved when you felt unwell but
insisted on reading whole chapters of my manuscript and giving me detailed comments.
Because of your trust and kindness, I will not let you down.

In my eyes, you are a real warrior, fearless and always inspiring others. You often encouraged
me to attend conferences, teach, accept interviews, get my ideas heard by others, and organize meetings
and reading groups – all things that challenged and improved me. I am glad that my hands and legs were not
shaking while doing those challenging things. Socrates said, "Know yourself." But you
gave me more, helping me not only to know myself, but also to transcend myself. Finishing my PhD is
just a new starting point, and I still have a long way to go, unpredictable and full of countless
obstacles. But I will carry the spirit of a fearless warrior into all the challenges ahead.
Thank you!

I am also profoundly grateful to my co-supervisors, Robin Celikates and Daniel Loick. Thank
you, Robin. You are such a kind and easygoing person. Whenever I met up with you, I could always
be as relaxed as if we were grabbing a beer in Café de Jaren. You are a quick thinker, and I always
wondered how you could think so fast and so clearly, and always pose good questions. I thank
you for guiding me through Habermas's difficult books in the first two years of my PhD, and for
encouraging me to join the Critical Theory Summer School at the Humboldt University in Berlin.
I never expected that a seed of critical theory would be planted in my heart, or that an
idea called "algorithmic colonization" would grow in my mind over the coming years. I
also thank you, Daniel. You only guided my research in the last two years of my PhD, but that
short period of guidance lifted my research to a higher level. Your detailed
comments and suggestions enabled substantial revisions and improvements to my thesis. Thank
you so much!

I want to thank the members of my doctoral committee for agreeing to read and evaluate my work:
Yolande Jansen, Huub Dijstelbloem, Tobias Matzner, Thomas Nys, and Eva Groen-Reijman.
I can imagine how hard it must be to read through my thesis and assess its quality. Thank you for all
your time and energy. To Huub and Thomas especially: thank you both for supporting my first
grant application. I finally got the grant, and that's fantastic!

I'd like to give a big shout-out to my two paranymphs: Gerrit and Marijn! Gerrit, it's so lovely
to have you in my PhD life! Whenever I talked with you, time was never enough. It is
always fascinating and relaxing to talk with you: you are not only smart and knowledgeable,
but also so good at storytelling, and I always pick up amazing thoughts and funny anecdotes
from you. I really wish we had had more time to sit in Espressobar Puccini and just
keep talking and sharing for a whole afternoon! Yes, we should have met up more often. More
than a friend, I have to admit, you are almost my "fourth supervisor". My two journal articles
would not have been published so easily had you not kindly helped me edit the language. It is
beyond words how good and reliable you are, and how much you have helped me.
I was extremely fortunate to have a friend like you in Amsterdam. Thanks also to your friends,
Barry and Daniel (Green), who kindly helped me improve my English writing and speaking!

Marijn, thanks for agreeing to support my defense and for translating the summary of my
thesis into an excellent Dutch version (Samenvatting). You were the first colleague I got to know well at
the University of Amsterdam. The memory of our first meeting, when we were both PhD candidates,
is still fresh in my mind. In Café Katoen, you showed me your fancy Apple Watch and
passionately explained your research on health apps. That was six years ago; now you have
almost finished your postdoc and will become Professor Sax soon. I am glad that your passion has not
faded with time: you are still working on the fantastic topic of health apps, and the fancy
(or a fancier?) Apple Watch is still on your wrist. You are an intelligent, honest, devoted,
and independent-thinking researcher, and I am sure it would be a big loss for the law faculty
if they did not give you tenure (I hope your dean reads these words…). We were interested
in similar topics; we read the same books and articles in our Privacy and Surveillance reading
group; we went to the same conference in Copenhagen; and we applied for a grant together and
will work on that grant project together. I am sure we will have more cooperation and
exchanges in the future.

To both of you, Gerrit and Marijn: may you become big-name professors
in your fields!

I am also very grateful to those who attended my mock defense and asked many very helpful
questions and offered comments. Gerrit and Marijn organized all of this, and some 'professors' joined in.
Thank you for all your support: Laurin, Jasmijn, Peyman, Eva (van der Graaf), and Giovanni.

A huge thanks to Eloe! Whenever I had a problem with PhD matters, you were often the first
one I turned to, and you were always there trying to help me out. ASCA is a big system, but
because of you (and many others like you), it feels like a warm, accessible, and tight-
knit community. Thanks also to Bashar, who was very kind and patient in helping me extend my
visa for a year, two years, and then three years… I hope I won't have to ask you for a fourth-year extension.

I feel fortunate to be a member of the Philosophy and Public Affairs (PPA) capacity group. PPA
is like my second home in the Netherlands, professionally and personally. This robust group has
helped me grow steadily over the last few years. Everyone in the group seems to have their own
particular magic, which often left me amazed and full of admiration. My dear friends and colleagues, most of
you have moved away or taken up jobs elsewhere, but thank you all for being part of my story: Alex,
Marjolein, Jasmijn, Natasha, Bernardo, Peyman, Zhuoqun, Gijs, Xia (Gu), Jana, Henri,
James, Karen, Pieter, Anna, Matthé, Noortje, Eva (Rodriguez), Daniel (de Zeeuw), Jelle,
Tom (Kayzel), Eva (Meijer), Tom (Schoonen), Jan, Nadia, Daphne, Özgür, Mac-Antoine,
Elizabeth, Yonathan, Yorgos.

I have been lucky to be part of a small reading group since 2017, where I met a number of smart people
from different fields and universities who shared a similar interest in reflecting on
emerging technologies. I enjoyed every article and book we read together, and every question and
answer we raised and explored together in harmony. Thank you all for making such an excellent and
friendly platform possible, my dear friends: Beate, Marjolein, Marijn, Gerrit, Rosa, Laurin,
Naomi, Bjorn, Lotte, Nina, Gemma, Wiebke, Claudia, Eva, Sarah, Jiahong, Anne (Holmond).
I hope you all have a bright future!

I also sincerely extend my huge thanks to those who helped me all the way to my doctoral
degree. I thank Peter (Kroes) and Maarten (Franssen), who were my two supervisors
when I was a visiting master's student at the Delft University of Technology. Thank you, Peter.
You are one of the kindest people I have ever met. I hope everything is going well for you and that you are
enjoying your retirement in your French vineyard house. I have not contacted you for a long time,
but only because I do not want to disturb your tranquil life. I hope we can meet up
somewhere in Delft, Amsterdam, France, or Beijing. Take care! Thank you, Maarten! You are
not only my supervisor, but also a very good friend. It is really amazing to grab a beer with you for
a whole afternoon beside a canal in Amsterdam and listen to you telling the fascinating histories
of the buildings and streets around us. You are an Amsterdammer, and you know this beautiful
city as well as you know your old friends. Sometimes I wonder whether you have come straight
from the Golden Age! I heard that your ambitious project of collecting all versions of The
Arabian Nights will be completed soon. Cheers! Meanwhile, my profound thanks go to two
professors at Renmin University of China: Jingyang Liu and Yongmou Liu. Jingyang was my
supervisor during my master's study, and Yongmou's innovative idea of
restricted technocracy greatly inspired my research. Thank you both for your kindness and continual
support. I hope to visit you soon in Beijing!

Thank you, Chris, my dear English language partner. I wish I had met you earlier; I
might now speak excellent English with a Los Angeles accent. You are a lawyer and have mastered
many languages, yet you remain humble enough to keep learning Chinese. I have learnt so much from you
about what the real America looks like, a complicated US that I had never imagined before. I hope
I can visit you in Irvine someday in the near future. I wish you a wonderful life with Anthony
and Peanut!

To all my soccer team friends and comrades in Utrecht, a big THANK YOU! Without you, I would
have graduated a few months earlier… Every Friday night, I so enjoy playing football with you
guys; it may be the second most exciting thing for me in Holland (the first, surely and
definitely, is being with my girlfriend ☺).

Thank you, Yi, the leader and the spirit of our team. You are so professional and passionate about
football, and you have deeply influenced how I play – with not only passion
but also brains. But I still need more practice and confidence. Chao (Liu), thanks so much for
practicing football with me every Friday night. Before my baby's birth, we were often the earliest
birds on the field (because our football skills suck…), and it is super nice to play together and talk
with you about anything… but football. It is a pity that you have become a very skillful defender;
well, I still have a long way to go. Thank you, Gary, for driving me home from Rotterdam after the
league match. I would never have imagined that I am much younger than you, but that is only because I
look much older than my age. We often have a lot of common topics to share on the field; so, let's
keep playing and talking! Chao (Yang), thanks for introducing me to this fantastic team. You play
football with full passion, carrying the ball like a flying bird and running on the field like a
'perpetual motion machine'. So cool! John (Qiu), thanks for your masterful performances every
Friday night! What impressed me most is not just your football talent, but your personality:
you are so good at this game, yet so humble and modest, willing to teach and share
your skills with noobs (like me). I am glad to be your neighbor in the same building.
And I wish you find your ideal PhD position soon!

Thank you all for the happy football we have played together over the past four years, my dear teammates,
friends, and comrades: Ryan, Wenbin, Taozi, Jun (Lv), Max, Yang (Li), Gert, Xichen, Di
(Jing), Rui (Zhang), Vincent, Qingyi, Zhicheng, Jie (Du), Zhuzhen, Rui (Zhu), Weilun,
Zhaoyi, Zhe (Sun), Zhiming, Xianghui, Yifang, Jiacheng, Yuhao, Lao Bai, Fish. In my eyes,
you are all football kings! Enjoy the World Cup in Qatar, and see you all this Friday night!

I also want to thank those who have appeared in my life and spent some precious time with me. Some of
you are my best friends: Yuehui, Lao Wu, Lao Hu, Dong Gua, Can Ye, Shan Ma, Qing Chen,
Xiaoling (Chang), Xin Dai, Carl Mitcham, Daniel Lim, Langdon Winner, Axel Honneth,
Xiaowei Wang, Fei Teng, Yan Teng, Zhanxiong, Enrong, Xue Yu, Ping Yan, Kunru, Ching-
Hung, Dachun Liu, Bolu Wang, Jianbo Ma, Chirag.

Finally, my profound gratitude goes to my family. Thank you, Mom and Dad! Thank you for your
quiet devotion and your unconditional understanding and support. You always think of me and always
ask whether I am eating well, yet when you fell ill you would not let me know, for fear that I would
worry. If my sister had not told on you, I would never have known how much you were hiding from me.
Yesterday was Dad's sixtieth birthday, and I could not be there; a quiet sadness crept over my heart.
In the photos Mom sent me, there was a table of simple dishes, together with my sister's family of
four: simple as it was, it was full of warmth and joy. Dad sat there grinning from ear to ear, his
birthday hat slightly crooked and a little adorable. Studying abroad these past years, I have missed
too much; seeing that so much more of your hair has turned white, I suddenly feel that time falls like
snow. I have not been home to see you for so long: one pandemic, and three years slipped away. Dad is
already sixty, and Mom will be next year; how many more stretches of three years can we afford to
miss? This year I turned thirty-two; at this age the ancients had long since made their names and
careers, while I am still drifting in a foreign land. I often wave grand banners at you, lecturing
about the fate of the nation and the state of the world; you listen in bewilderment, and after all my
talk you agree that I should stay abroad for now and see how things go. I know that in your hearts you
want me to come back to China to work, but you do not want to put pressure on me. Growing up, I made
many right decisions, and you believe that a son who has read so many books must see things more fully
and will surely be fine. I thank you for always choosing to trust and understand me, but this time I
truly cannot see the way clearly myself. I have now decided to stay abroad for the time being; but Mom
and Dad, you are in China, and I cannot bear to watch you grow old like this, little by little, with
your son not at your side… Sometimes, thinking of this, I doubt the decision I have made, and my heart
keeps wavering. I hope you take good care of yourselves; once I have made something of myself abroad,
I will come back and keep you proper company. But will that day come in time?

Many thanks to my sister. Thank you for always taking care of Mom and Dad, arranging a home for them
close to you, so that I could study and strive abroad with peace of mind. Every time we call, you tell
me how they are doing, how mischievous they are, an old married couple still showing off their
affection, and you tell me not to worry. They look well; you must have given them plenty of money and
treated them to plenty of good food. At home we all call you "the boss", and indeed you are good at
making money: you have started several companies in Shenzhen, you have a house, a car, and two
children. You may not be enormously rich, but you are without doubt a winner in life, and you are the
pride of our family. Even with money, you have never looked down on penniless scholars like us. In
your eyes I am always your little brother, and you will always be the big sister who, ever since we
were small, gave the best things to me and always protected me. When the pandemic first broke out, you
worried that I would have no medical care abroad and wanted to spend more than two hundred thousand
yuan on chartered-flight tickets to bring Liu and me back to China. Although we decided to stay, we
are deeply grateful for your kindness. You started working right after high school, while I have spent
half my life studying; whatever work you did, you insisted on paying for my education, flatly refusing
to let Mom and Dad spend their money. Thanks to your money, my student days were lived like a young
lord's, well fed and well clothed in the ivory tower; but how could I not know how hard you have
worked all along the way? All your kindness to me I have seen with my own eyes and keep in my heart.
Whether you are rich or poor in the future, you will always be my big sister, the one and only big
sister in this vast world and boundless universe! Ziyi and Jingbo are like my own children too. Thank
you for being there, my dear sister!

Lexi, thanks for being my little angel! By the time you can read and understand the words I am writing,
your dad may have become a professor, or he may have left academia forever after struggling for years…
I hope the first is what happens. But either way, to me you are an angel dropped from the
sky. You must have grown restless about my thesis: I wrote too slowly, and you could not bear
to look at me and wait for me any longer. So you simply jumped down from heaven and came to earth,
crying into my face and spurring me to move faster. Indeed, since you came into this world, my
dissertation has moved as fast as a flying rocket. What a miracle! Since you came into my life,
everything has changed, completely and immediately. I became a so-called dad, and you are a small
carbon copy of me. Every day, I tiptoe around like a thief in the dark, stealthily and silently, only for
fear of waking you up. Your snoring has become the world's most beautiful music, because it means I can
finally take a break. I am fond of greeting you every day by rubbing our noses together: a big nose
touching a tiny nose, just like the past connecting to the future. I was told that this greeting is
called an Eskimo kiss – what a cute name! I am always amazed by your smile, your giggles, your
laughter, and your eyes gazing at 'mysterious' things. I feel sorry that you are living with
us in a small room, but you always seem happy and excited about everything around you.
In your tiny little eyes, our small room is your big world, a place for a big adventure.

I thought I could train you well, just like engineers training their AI algorithms. But I have realized
that the training goes the other way: it is you who are training me – how to hug a tiny treasure, to sing
a cradle song, to change diapers, to cherish what I have now, to smile like a fool, to become a
real father. Thank you, Lexi!

Liu, my girl, my baby's mom, I do not know how to thank you properly. I could express a million thanks
to you, but when I start to write and speak the words, they immediately become weak and void,
falling into dust. We have been in this relationship for eight years, and it is perhaps the only
single thing that has taken longer than my PhD. We met on the train, where you sat right in
front of me. It is still an open and complex debate who spoke to whom first. Since then,
the train has become a symbol of our little family; even Lexi's nickname is "little train"
(poor baby…). We spent our most romantic years in Beijing, and then moved to Holland to grow
old together. This summer, our naughty girl, Lexi, ran into our lives. Lexi has your family name,
because you insisted that your last name is more beautiful in Chinese (it sounds like a river). Yes,
that is true, because what you say is always true ☺. So, from then on, Lexi has had your family
name, and so will our second or third babies... I worried about how to explain this to my mom and dad,
but when I looked at your happy face, I knew that I had made the right decision.

We are so different, however. You are a biologist studying neurons, while I am a philosopher
talking about nous (that is, 'intelligence', in case you do not know the word). You are super-efficient
at doing almost everything, yet in your eyes, it took me so many years to finish a nonsense book.
When you start working on a task, you are so focused that even an exploding bomb might not
disturb you. When you begin to think about something, I know I should keep my mouth tightly shut;
otherwise, there would be a 'war'… When you attend a conference, you start working on the
presentation a few months ahead and get everything well prepared. For you, if something should
be done, it should be done right now, immediately. That is why you can publish so many good
articles in high-ranking journals. For me, math is horrible, but you always say that data, numbers,
and formulas are the most beautiful things in the world. Sometimes I wonder what your
brain is made of. We are so different, but your cute face and beautiful smile will always make
me forget all our differences.

The most precious thing we share is that we both know how to communicate with and mutually
understand each other. That may explain why we have been in a loving relationship for so many
years. Yet until now, we have not gotten engaged or married. Our friends call us avant-
garde. Well, the fact is that we are just too lazy… As you often say, why should we spend so much
time acquiring a piece of paper as proof? But I know that you want a fantastic marriage, as you told me
that you once dreamt of a big and romantic wedding. I promise that I will do it for you, and will not
disappoint you. You are a brave girl, as you dare to date a philosopher. We have been together
for eight years, and I can't wait to enjoy eighty more years with you in the future. So, Liu, will
you marry me?

