Algorithms (And The) Everyday: Information, Communication & Society
Michele Willson
To cite this article: Michele Willson (2017) Algorithms (and the) everyday, Information,
Communication & Society, 20:1, 137-150, DOI: 10.1080/1369118X.2016.1200645
Our everyday practices are increasingly mediated through online technologies, entailing the navigation, and often the simultaneous creation, of large quantities of information and communication data. The scale and types of activities being undertaken, the data that are being created and engaged with, and the possibilities for analysis, archiving and distribution are now so extensive that technical constructs are required to manage, interpret and distribute them. These constructs include the platforms, the software, the codes and the algorithms. This paper explores the place of the algorithm in shaping and engaging with the contemporary everyday.
Algorithms are ubiquitous and pervasive, employed in many ways. For example, the
work of algorithms can be seen in the generation of Twitter Trends or in Twitter follow
recommendations; in Google personalised search results or Facebook newsfeeds; or in
suggested Google map directions, to note just a few. These seemingly banal and mundane, though multiple and often intersecting, interactions online are becoming increasingly prevalent and extensive as more and more of our everyday activities are conducted in online spaces and through online processes.
This article considers why algorithms are matters of concern when considering ques-
tions of the everyday. It does this via an exploration of particular instances of algorithmic
sorting and presentation as well as exploring ways these contribute to shaping our every-
day practices and understandings. In doing so, it raises questions about understandings of
agency and power, shifting world views and our complex relationship with technologies.
The everyday
Everyday practices constitute the habitus (Bourdieu, 1997) or background within which
people operate. They are the seemingly mundane or banal, recurrent and multiple activi-
ties and routines that we all engage with and that shape the form and flow of our individual
and social lives in space and time. These activities and routines are replicated in countless
ways by many people on a daily or regular basis. Through this process, practices become
normalised or naturalised, usually enacted with minimal thought and often rendered
invisible or in the background (or at the very least as largely unquestioned). Studies of
the everyday are, therefore, partly concerned with rendering the seemingly invisible visible
and thereby open to critique and the examination of power relations and practices that are
in play. These studies are also concerned with understanding how everyday practices are
also performative – not only being situated in, but also giving shape to the form/s of time
and space.
The shaping of the everyday, at least for de Certeau (1988), is partly the result of the
intersection of social, cultural, political and economic strategies enacted by powerful
systems and actors with the tactics of those who are the consumers or users of these sys-
tems and the ways of operating that are employed as a result. For example,
the street geometrically defined by urban planning is transformed into a space by walkers. In
the same way, an act of reading is the space produced by the practice of a particular place: a
written text, i.e. a place constituted by a system of signs. (de Certeau, 1988, p. 117)
If we extend these ideas to understanding the online as being spatialised and navigable,
those who design and construct, navigate and use define the online (as well as its intersec-
tion or melding with the offline).
The Internet is a series of systems within which many people navigate and, therefore,
must devise ‘ways of operating or doing’ the everyday (de Certeau, 1988, p. xi). Indeed, in
many societies, Internet connectivity and associated digital literacies are increasingly
necessary for the enactment of activities and functions that could be readily classed as
everyday practices. By extension, then, we should consider the ways the online systems
and practices – of searching, communicating, purchasing or other activities – enabled
by code, software and algorithms work to constitute and enact the everyday: an everyday
that straddles the on- and offline (the awkwardness of simply trying to demarcate these as
‘different’ spaces is evident in this phrasing, pointing to the intertwining and interrelation-
ship of both spaces already perceived as commonplace). This translation of everyday
activities (through delegation) into actions performed in online spaces is enabled by a
range of strategies that include the use of algorithms to sort, manipulate, analyse and
predict.
One of the things that render algorithms online interesting in any discussion of the
everyday is the ways they can operate semi-autonomously, without the need for inter-
action with, or knowledge of, human users or operators. Latour (1988) would refer to
this as delegation. An algorithm is delegated a task or process and the way it is instantiated
and engaged with in turn impacts upon those things, people and processes that it interacts
with – with varying consequences.
Latour uses the example of a door groom (or door closer) to illustrate the ways the del-
egation of an activity or process to a technology (or nonhuman) has broad and differing
outcomes and responses according to design and interactions. The automatic door groom
does not require a salary or allotted working hours; however, the way in which it is designed influences particular human interactions with the door and the passing-through-the-door process. For example, Latour (1988, p. 302)
notes that the design of the particular automatic or technical version of his door groom
means that ‘neither my little nephews nor my grand-mother could get in unaided because
our groom needed the force of an able-bodied person to accumulate enough energy to
close the door’. The assumptions built into the door groom design along with its
implementation presume a certain type and capacity of person who will be using the
door. The replacement of a human door groom with an automatic closer also impacts
upon labour relations and opportunities through changing the types of work available,
operating hours and so forth.
Similarly, the delegation of an everyday practice such as the searching for and evalu-
ation of information by a human agent/actor to an algorithmic equation/process also
assumes certain parameters or values. In developed nations with high-end computing capacities, the delegation of a range of everyday practices to technologies, and to the algorithms that enact them, is already extensive.
By extension, the delegation and management of everyday practices within online places
by powerful actors (such as Google) demonstrate the application of complex strategies of
control and manipulation of users and data. However, it is also important to note that de
Certeau did not see those upon whom such strategies were directed as passive or lacking in
power. He refers to the use of tactics by such targets – users and consumers, for example –
to describe the ways that the seemingly less powerful appropriate (poach) and subvert
through their ways of doing or making things happen. It is the intersection of strategies
and tactics that lends shape to everyday life.
Algorithms
Many of the algorithms we encounter daily are proprietary – and thus opaque and inaccessible to outside critique, their parameters, intent and assumptions indiscernible. And yet the working of algorithms has wide-ranging consequences for the shape
and direction of our everyday. As researchers are increasingly able to demonstrate, the
ways algorithms are designed and implemented (and their resultant outcomes) help to
influence the ways we conduct our friendships (Bucher, 2013), shape our identities
(Cheney-Lippold, 2011) and navigate our lives more generally (Beer, 2009).
Tarleton Gillespie notes, ‘in the broadest sense, they [algorithms] are encoded pro-
cedures for transforming input data into a desired output, based on specified calculations’
(2014, p. 167). Algorithms make things happen – they are designed to be executed and to
bring about particular outcomes according to certain desires, needs and possibilities. In
online spaces, algorithms are central to the ways communication and information (includ-
ing the relational) are located, retrieved, filtered, presented and/or prevented.
To describe algorithmic processes, the analogy of a recipe is often employed. A recipe
has a particular identified end point – a meal, or a cake, for example. What the recipe pro-
vides is a list of ingredients (contributing items or variables), but more importantly, it also
contains a step-by-step description of a process that outlines what needs to happen and
when, what needs to be combined or separated and when, following a very specific,
detailed order. It also needs to be written in a way that the user will understand and be
able to follow. Similarly, an algorithm takes particular variables or items (its input data) and sets out, step by step and in a precise order, what is to be done with them to produce a defined output.
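To make the recipe analogy concrete, a minimal illustrative sketch follows (the data, weights and scoring rule are invented for the purpose and describe no particular system): input data are transformed into a desired output through specified calculations performed in a fixed order, in the sense of Gillespie's (2014) definition.

```python
# A minimal, recipe-like algorithm: encoded steps that transform
# input data into a desired output via specified calculations.
# The items and scoring weights are illustrative assumptions only.

def rank_items(items, weights):
    """Step-by-step 'recipe': score each item, then order by score."""
    scored = []
    for item in items:                      # Step 1: take each ingredient
        score = sum(weights[k] * item[k]    # Step 2: combine as specified
                    for k in weights)
        scored.append((score, item["name"]))
    scored.sort(reverse=True)               # Step 3: order the result
    return [name for _, name in scored]     # Step 4: serve the output

items = [
    {"name": "post A", "recency": 0.9, "popularity": 0.2},
    {"name": "post B", "recency": 0.4, "popularity": 0.8},
]
print(rank_items(items, weights={"recency": 0.5, "popularity": 0.5}))
# -> ['post B', 'post A'] under these toy data
```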
As a number of commentators have argued, algorithms cannot be adequately studied as stand-alone processes if we are to start understanding the roles they now play. But they are also more
than technical infrastructures – algorithms also need to be recognised more broadly as
both situated artefacts and generative processes that engage in complex ways with their
surrounding ecosystem: one that involves the technical – software, code, platforms and infrastructure – and human designs, intents, audiences and uses more broadly.
In many arenas (amongst public, corporate and increasingly academic entities), there is
a tendency to see online-generated data and algorithmic output as being a simple resource
able to guide consumer behaviour, encourage particular choices and change the ways we
live and see our lives. Yet, as critics of this understanding note, data are never simply and cleanly collected and analysed: there is no such thing as 'raw data' (Gitelman, 2011). Nor is there any such thing as a raw algorithm, in the sense of an uncomplicated and objective instruction. Algorithms are embedded in complex amalgams of political, technical, cultural and social interactions. As Roberge and Melançon (2015, p. 3) note,
‘They [algorithms] construct meanings as much as they are shaped by meanings and thus
exhibit a form of double agency’. As a result, algorithms help to bring about particular
ways of seeing the world, reproduce stereotypes, reify practices (Postigo, 2014) and
world views, restrict choices or open possibilities previously unidentified.
When we talk about algorithms and the delegation of tasks and processes to them, we
therefore need to take into account the ways their designs and their actions interact with
their human counterparts, their relations, systems and structures (social, technical, cul-
tural and political). We also need to consider who designs and implements them and
what intended and unintended outcomes result. The question of who designs and
implements them is often intimately bound up with questions of power and control, par-
ticularly (though not exclusively) in the case of large corporate online entities such as
Google.
Through extensive engagements with online systems and with human and technical users, Google has access to ever-increasing quantities of data drawn from across its platforms, alongside the technical ability to aggregate, combine, manipulate and process these in ways that are not open to those outside of its corporate system: granting it considerable power to shape lives and outcomes as a consequence.
In its efforts to maximise its profits and thereby satisfy its shareholders, Google needs to
meet a number of objectives: ensuring viable and attractive consumer products (e.g.
Search, Maps, Gmail, YouTube, Android and Google Play), facilitating and providing
compelling industry products (data analytics, targeted advertising and platform for app
purchase and distribution) and in the process generating growth and income. This
means that the plethora of algorithms being designed and enacted is likewise required to meet differing needs and functions, sometimes simultaneously. These requirements necessitate an iterative process of feedback and change to accommodate the shifting environments within which the algorithms operate.
Google’s chief economist Hal Varian has publicly noted that Google runs continual
experiments on its users to try to identify causality and consequence: what might drive
changes in user behaviour, changes in the way in which information is found and displayed – the list is endless. The online environment is a perfect test site for trying new
things, monitoring what happens and playing with variations. In a paper given at the
National Association for Business Economics annual meeting in 2013, Varian wrote,
Experiments. … This is pretty easy to do on the web. You can assign treatment and control
groups based on traffic, cookies, usernames, geographic areas, and so on.
Google runs about 10,000 experiments a year in search and ads. There are about 1,000 run-
ning at any one time, and when you access Google you are in dozens of experiments. (Varian,
2013, p. 27)
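How such assignment might work can be sketched in outline. The following is a hedged illustration of deterministic, hash-based bucketing of the general kind Varian describes; the experiment names, bucket counts and the 50/50 split are assumptions for illustration, not Google's actual mechanism.

```python
import hashlib

# Hedged sketch: deterministic assignment of a user (e.g. by cookie ID)
# to treatment or control groups across many concurrent experiments.
# Experiment names and the 50/50 split are illustrative assumptions.

def bucket(user_id: str, experiment: str, buckets: int = 1000) -> int:
    """Hash user and experiment together so assignments are stable
    within an experiment but independent across experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def assign(user_id: str, experiment: str) -> str:
    # Buckets 0-499 -> control, 500-999 -> treatment (a 50/50 split).
    return "treatment" if bucket(user_id, experiment) >= 500 else "control"

# One user can sit in dozens of experiments at once, as Varian notes.
for exp in ["ranking_tweak_17", "ad_layout_b", "snippet_length"]:
    print(exp, assign("cookie-12345", exp))
```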
Google’s Panda, Penguin and Hummingbird algorithms and recurrent updates are other
examples where changes are made in order to encourage some outcomes, shift priorities:
technical and social. These possibilities to shape, direct and reflect outcomes and behav-
iour on the basis of algorithmic sorting of large data sets gleaned from everyday activities
alongside the ability to test or experiment with these and to be able to track and identify
resultant changes place enormous power in the hands of organisations such as Google,
whose products are fundamentally entwined in the everyday lives of its users.
However, these possibilities also raise questions as to how we can conceptualise the del-
egated agency of the algorithms themselves and as to how we can analyse and position
their place in the everyday. While the strategies of corporations such as Google work to
shape the environment and practices of its users, for example, the tactics users employ
when engaging in these practices and in these spaces intersect and iteratively shape the
ways in which the everyday is manifest and experienced. Relationships and understand-
ings of technology are inferred: these relationships are complex, with many descriptions connoting an interpersonal or anthropomorphic framing of everyday algorithmic output.
This is evident when exploring the language used by technology researchers. For example,
in the following quotation, despite the scare quotes signalling that human language and actions are being appropriated for technological processes, the choice to use such language at all is telling: it points to the complex relationship between human and algorithm, and to the lack of an appropriate vocabulary for describing these complex processes.
Search engines ‘learn’ about our preferences and desires as they endlessly concatenate infor-
mation about the potential quests of searchers. As algorithms come to ‘know’ more about our
search activities, search and targeted advertisements become more effective, which leads to
better understanding of searchers’ supposedly ‘inner’ selves, and so on in a recursive circle
of adaptation and modulation driven by the algorithms as much as searchers and their
desires. (Hillis, Petit, & Jarrett, 2013, p. 16)
Increasing everyday dependence on, and engagement with and through, the online – extending out to our engagement with other objects (the Internet of things, driverless cars or robots, for example) – renders these relationships and the 'algorithmisation' of everyday practices commonplace and unremarkable, and yet, relatedly, worthy of closer critical attention.
The extract above points to the increasing reliance and almost interpersonal engage-
ment that we are beginning to have with our devices and our desire to delegate many
of our everyday needs, decisions and actions into a synergistic melding of human and
machine. Individual user responses to a technology anticipating their movements might
invoke mixed feelings: for some users such as Hal Varian, this is uncritically accepted as a
helpful and obvious relationship, whereas others, as Varian notes, may feel uncomfortable
with the obvious surveillance and data monitoring of everyday routines that this inter-
action infers. Yet, this continual expansion in delegation and prediction is likely to become
increasingly normalised and even expected. The increase in the number of personal assist-
ants such as Apple’s Siri or Microsoft’s Cortana, and the continual extension of ‘their’ per-
sonal assistant capabilities, encapsulates this desire within its very expression and intent.
The attempts to render the interface between the user, the device and the software and
algorithms, and the activity itself seamless and ubiquitous, also work to obscure the
range of powerful strategies and interests involved.
These potentials accentuate the drive to design ever more sophisticated and extensive
algorithmic processes that can analyse, manipulate and predict on the basis of broader and
more complex data sets enabled by the relocation of everyday practices online. Yet these algorithms are hidden away behind the screen, beyond the everyday user's ability to unpack and understand their specific practices and programming choices (rendered even more difficult by machine learning and by interactions between multiple algorithms); what they do and do not do is neither transparent nor open to critique. Unintended and unanticipated consequences are an obvious, and will be an increasingly common, outcome. It is, therefore,
the outcomes that signal issues or problems that need to be addressed and that we need to
be alert to. These issues may be a result of internal programming or interactions, but they
can also be a result of the algorithmic outcomes and interactions with other social systems
and practices – as a result of their engagement with people.
Recommender systems, for example, employ algorithms to identify and present rec-
ommendations on the basis of a user’s preferences and online practices. The Amazon web-
site will appear to 'recommend' particular purchases according to an assumption of shared associative interests: if you are interested in a book on algorithmic processes, then there are likely to be other relevant books around this topic that interest you too. This assumption translates into the system presenting to you, the algorithm-interested user, the search or purchase results of other people who also searched for or purchased that book, along with other books they have purchased and books with similar search terms in their metadata, on the premise that a shared interest will result in an increased likelihood of purchase. This results in associations that are not only of varying usefulness, but also at times strange or unanticipated.
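A minimal sketch of this kind of item-to-item association follows; the co-purchase counting approach and the toy data are illustrative assumptions rather than Amazon's actual system, which is far more elaborate.

```python
from collections import Counter

# Hedged sketch of item-to-item association: recommend items that
# co-occur in other users' purchase baskets. The data are invented,
# and Amazon's actual recommender is far more elaborate.

baskets = [
    {"algorithms_book", "data_society_book"},
    {"algorithms_book", "python_book", "data_society_book"},
    {"python_book", "cookbook"},
]

def recommend(item, baskets, top_n=3):
    """Count how often other items appear alongside `item`."""
    co_counts = Counter()
    for basket in baskets:
        if item in basket:
            co_counts.update(basket - {item})  # everything bought with it
    return [name for name, _ in co_counts.most_common(top_n)]

print(recommend("algorithms_book", baskets))
# -> ['data_society_book', 'python_book'] on these toy data
```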
McKelvey (2014, p. 589) points to the ways a recommender system’s inference of a
relationship or association between users and users’ interests can be translated into trou-
bling or unexpected connotations more broadly. He recounts an article by Mike Ananny
that noted an associative relationship inferred on the basis of recommendation and place-
ment on Google Play Store between Grindr (self-described as a gay guy finder) and a sex
offender search app. The title and byline of Ananny’s (2011) article makes the connections
and consequences clear: ‘The Curious Connection between Apps for Gay Men and Sex
Offenders: Reckless associations can do very real harm when they appear in supposedly
neutral environments like online stores'. By the very coupling of these two apps on an online store website – linked by the tag 'related and relevant applications' – an association is made. This raises questions about the algorithms and programming used to generate such results, but it also highlights the possibilities for human associative interpretations of these recommendations that are problematic socially, ethically and politically. The algorithm/s involved would not explicitly distinguish the political or social values of the data they have been instructed to analyse and manipulate. Yet the outcomes point to the nuance and contextual understanding that
such processes often do not accommodate, and the impact they may have when being
addressed to complex, socially embedded human users and systems. They are also not
open to outside scrutiny that might enable underpinning assumptions to be interrogated:
it is only their products or outputs that can be addressed. In this instance, a direct infer-
ence could be drawn between sex offenders and gay men.
Bias
This combination of delegated everyday practices and algorithmic functions within social,
cultural and political systems inescapably results in biases being enacted. In 1996, Friedman and Nissenbaum categorised the range of biases evident in computer systems at that time. These biases derived from a range of inputs – social, technical and emergent – as time, technology and social influences impacted on outcomes. Biases could be beneficial or detrimental, or both; similarly, they could be intentional or unintentional. Friedman and Nissenbaum use the example of airline operator listings in travel reservation systems as demonstrating bias. In their example, they noted that the way in which airline operators were listed on a screen, whether ordered alphabetically or by preferred supplier, had an impact on which airlines were used or referred to by travel agents. As agents were more likely to refer to those operators listed on the first screen, decisions about ordering criteria benefitted some airlines and disadvantaged others. A similar practice and outcome has been noted with Google search results, where users rarely refer to listings after the first or second page of results. More recently, in
relation to algorithms and social media, Bozdag (2013) noted how Facebook prioritises
popularity of posts or particular types of interactions in its ordering and inclusion of
items in a user’s newsfeed, and the ways Twitter prioritises currency as an important rank-
ing value: these choices and actions demonstrate particular biases and in the process, par-
ticular practices of power.
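The force of such ordering effects can be illustrated with a small simulation – a hedged sketch in which the probability of an item being selected decays with its position in a list; the decay model and the figures are assumptions for illustration only.

```python
import random

# Hedged sketch of position bias: identical items, but those listed
# first are chosen far more often. The 1/(rank+1) attention model and
# the trial count are illustrative assumptions, not empirical values.

def choose(listing):
    """Pick an item with probability proportional to 1/(position+1)."""
    weights = [1 / (i + 1) for i in range(len(listing))]
    return random.choices(listing, weights=weights, k=1)[0]

airlines = ["Airline A", "Airline B", "Airline C", "Airline D"]
tallies = {a: 0 for a in airlines}
random.seed(1)
for _ in range(10_000):
    tallies[choose(airlines)] += 1

# Items earlier in the ordering accumulate most selections, even
# though nothing else distinguishes them.
print(tallies)
```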
Friedman and Nissenbaums’ (1996) categories of bias as well as their origins are listed
below.
This delineation of different types of bias reveals the multitude of factors that come into
play when technical, social, cultural and political elements are engaged. Bias is generated
and enacted through the ways the technology is designed, the ways that the data are
encoded into various relationships and actions, and the ways people and broader society
engage with one another and with the various systems that they have put in place in order
to navigate and control their lives. Twitter Trends analysis, for example, highlights the
technical and emergent biases embedded in algorithms that value particular types of
activity and push these to the forefront. Gillespie (2011) notes the process by which a Twit-
ter topic is prioritised up the list according to the value given to certain criteria, and also
the possible human/social bias displayed in the categorising of content in Amazon filtering
processes. As Mackenzie (2006, p. 44) notes, ‘[an algorithm] naturalizes certain orders and
animates certain movements. An algorithm naturalizes who does what to whom by sub-
suming existing patterns and orderings of cognition, communication and movement’.
The challenge for many researchers trying to understand the role of algorithms, and the ways they intersect with and influence the everyday, lies in tracing the origin of such bias when the origins, outcomes, instructions and implementation of many algorithms are not open to scrutiny and are multidirectional. This is partly due to the proprietary nature of many algorithms, their multiplicity and complexity, their embeddedness in many online processes and the mundane nature of much of what they do, but it is also due to the technical knowledge or literacy that many people lack when dealing with often complex mathematical and technical systems.
One example is a route-recommendation algorithm developed by Quercia, Schifanella, and Aiello (2014), which offers users various routes to travel depending upon what the algorithm determines is the quietest, the most peaceful or the most beautiful route (without sacrificing too much time).
The goal of the travel algorithm was to ‘suggest routes that are not only short but also
emotionally pleasant’ (Quercia, Schifanella, & Aiello, 2014, p. 1). According to the
researchers,
To date, there has not been any work that considers people’s emotional perceptions of urban
spaces when recommending routes to them. We thus set out to do such a work by collecting
reliable perceptions of urban scenes, incorporating them into algorithmic solutions, and
quantitatively and qualitatively evaluating those solutions.
When the development process is investigated further, the interplay of human emotions,
decisions and inputs alongside the sorting, analytical and manipulative capacity of the
technologies becomes more complex and reveals an iterative, multilayered exchange
that takes place between human process and technical process. For the travel algorithm, input data were drawn from a crowd-sourcing platform that showed users pairs of London street scenes and recorded their votes on which scene looked more beautiful, quiet or happy. This information was then analysed to form a model of the characteristics of beautiful, quiet and happy travel.
Then, to test for generalisability, the developers investigated whether their algorithm
model would also work to predict beauty scores on Flickr by first analysing Flickr user
votes, and then holding interviews directly with users about the choices that were
suggested to validate their findings. As a result of this analysis, the developers of the algor-
ithm claim that their algorithm can suggest, with reasonably accurate results, a number of
possible travel routes able to meet the traveller’s requirements in order to make a journey
more pleasurable, while still accommodating the need for time considerations.
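How such a trade-off between travel time and pleasantness might be expressed can be sketched as follows; the street graph, the pleasantness scores and the weighting scheme are illustrative assumptions and not the method of Quercia et al. (2014).

```python
import heapq

# Hedged sketch: shortest-path search over a street graph whose edge
# costs blend travel time with an 'unpleasantness' penalty. The toy
# graph, pleasantness scores and lambda weighting are assumptions only.

graph = {  # node -> list of (neighbour, minutes, pleasantness in [0, 1])
    "home":    [("park", 6, 0.9), ("highway", 3, 0.1)],
    "park":    [("work", 5, 0.8)],
    "highway": [("work", 4, 0.2)],
}

def best_route(start, goal, lam=5.0):
    """Return (cost, path) minimising time + lam * (1 - pleasantness)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes, pleasant in graph.get(node, []):
            edge_cost = minutes + lam * (1.0 - pleasant)
            heapq.heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []

# With lam = 0 the fastest route wins; larger lam favours pleasant streets.
print(best_route("home", "work", lam=0.0))  # via the highway
print(best_route("home", "work", lam=5.0))  # via the park
```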
While it is not clear whether this algorithmic model was applied or developed further, it
is clear that it could be monetised and offered within services such as Yahoo maps (it was
developed in conjunction with Yahoo) or other GPS-based services. Zuboff (2015, 2016)
makes this link quite explicit in her description of the rise of surveillance capitalism and
the capture and commodification of users’ behavioural data for persuasive and predictive
application.
Such delegation also encourages a particular way of apprehending people, their environment and their relations (when all is reducible to malleable, discrete but combinable units).
An illustration of this reduction and increasing fragmentation of every thing including
the everyday can be seen in the growth of the quantified self and self-tracking movement
and the ways various aspects of human activity and individual biological and health data
are identified, tracked and captured by wearable technologies, and then analysed and relayed back to the individual, to the providers of the various monitoring services, and to other interested parties (such as health services). These reductive manoeuvres are necessary to
be able to capture and manipulate biological items and actions into data and processes;
yet the very act of translation is also transformative: the process from biology (heart)
and practice (walking) to data becomes unquestioned, normalised and invisible.
This reduction to singular data or units is, however, only one aspect of the process. The
use of algorithms requires this reduction but then introduces, defines and creates relations
between these varying units through the processes they are instructed or designed to
implement. Therefore, algorithms are also incredibly relational – it is the relation that defines, describes and shapes how those data are then (re)presented. These relations are
defined and designed by the architects of the algorithm according to a design brief, a par-
ticular desire or identified output, and shaped by technical specificity, commercial incen-
tive and social predispositions, bias and cultural understandings.
The delegation of everyday practices – such as information search and analysis functions, or communication between friends – to algorithmic processes and software coding (and the commodification that often results) forms part of the fabric of the everyday (or
habitus). Here, the strategies of actors (such as Facebook) to enframe, shape and capture
user practices and desires intersect and engage with the tactics of users in making oper-
ational their everyday.
Conclusion
Algorithms invoke questions about how to conceptualise issues such as agency and power
within a technologised everyday. Algorithms are dynamic processes designed and
implemented by humans in conjunction with technical affordances and within broader
political, social and cultural environments that are shaped by the continual interactions
of strategies, structures and tactics. As a result, they are in a constant flux – think about
the continual updates that we see with Google search algorithms, for example – with
changes made by humans and by machines. We cannot ignore their actions, or fail to
deal with their consequences. Science and Technology Studies, software studies and
actor network theory all provide some fruitful insights and methods particularly in
relation to the specificity of particular algorithms, yet largely fail to address many of the
broader issues and questions of the everyday that are raised.
Some of the questions raised in this article relate to the relationship we have with our
machines and the broader inability to critique and guide many of the algorithmic pro-
cesses enacted by large corporations with which we increasingly interact. Another relates
to something that Zuboff refers to with her term surveillance capitalism, though there is
more involved in this process than increased commodification alone. As de Certeau
made clear, consumers and users are not passive – they work within the systems in
which they find themselves and appropriate, and subvert through various tactics of
making do (something that Zuboff fails to acknowledge). Yet this still does not completely
encapsulate what I am pointing to.
I suggest that by delegating everyday practices to technological processes, with the
resultant need to break down and reduce complex actions into a series of steps and
data decision points, algorithms epitomise and encapsulate a growing tendency towards
atomisation and fragmentation that resonates more broadly with an increasing emphasis
on singularity, quantification and classification evident in the everyday. Algorithms, and
the delegation of human behaviour to algorithmic processes, also become part of the (tech-
nologised) everyday as a result. What this might mean for broader social and ethical
relations with one another, our technologies and our understanding of the everyday itself
is a question worthy of further consideration.
Disclosure statement
No potential conflict of interest was reported by the author.
Notes on contributor
Michele Willson is Associate Professor in Internet Studies, Curtin University, Australia. Her
research interests converge around understanding our various relationships with and through tech-
nology. Her publications include Social, Casual and Mobile Games (Bloomsbury Academic), A New
Theory of Information and the Internet (Peter Lang), and Technically Together (Peter Lang). [email:
[email protected]].
ORCiD
Michele Willson http://orcid.org/0000-0002-3703-3020
References
Ananny, M. (2011, April). The curious connection between apps for gay men and sex offenders. The Atlantic. Retrieved from http://www.theatlantic.com/technology/archive/2011/04/the-curious-connection-between-apps-for-gay-men-and-sex-offenders/237340/
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological
unconscious. New Media & Society, 11(6), 985–1002.
Bourdieu, P. (1997). Outline of a theory of practice (R. Nice, Trans.). Cambridge, NY: Cambridge
University Press.
Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information
Technology, 15, 209–227.
Bucher, T. (2013). The friendship assemblage: Investigating programmed sociality on Facebook.
Television & New Media, 14(6), 479–493.
Cheney-Lippold, J. (2011). A new algorithmic identity: Soft biopolitics and the modulation of con-
trol. Theory, Culture & Society, 28(6), 164–181.
Constantiou, I. D., & Kallinikos, J. (2015). New games, new rules: Big data and the changing context
of strategy. Journal of Information Technology, 30, 44–57.
de Certeau, M. (1988). The practice of everyday life (S. Rendall, Trans.). Berkeley: University of
California Press.
Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on
Information Systems, 14(3), 330–347.
Gillespie, T. (2011, October). Can an algorithm be wrong? Twitter trends, the spectre of censorship,
and our faith in the algorithms around us. Culture Digitally. Retrieved from http://
culturedigitally.org/2011/10/can-an-algorithm-be-wrong/
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot
(Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–193).
Cambridge, MA: The MIT Press.
Gitelman, L. (Ed.). (2011). Raw data is an oxymoron. Cambridge, MA: The MIT Press.
Hillis, K., Petit, M., & Jarrett, K. (2013). Google and the culture of search. New York: Routledge.
Latour, B. [J. Johnson] (1988). Mixing humans and nonhumans together: The sociology of a door-
closer. Social Problems, 35(3), 298–310.
Mackenzie, A. (2006). Cutting code: Software and sociality. Digital Formations Series, Vol. 30.
New York: Peter Lang.
McKelvey, F. (2014). Algorithmic media needs democratic methods: Why publics matter. Canadian
Journal of Communication, 39, 597–613.
Postigo, H. (2014, April). Capture, fixation and conversation: How the matrix has you and will sell
you, part 3/3. Culture Digitally, [blog post]. Retrieved from http://culturedigitally.org/2014/04/
capture-fixation-and-conversation-how-the-matrix-has-you-and-will-sell-you-part-33/#sthash.
6sGBTcmy.mXWelSyy.dpuf
Quercia, D., Schifanella, R., & Aiello, L. M. (2014, September 1–4). The shortest path to happiness:
Recommending beautiful, quiet, and happy routes in the city. HT’14, Santiago, Chile. Retrieved
from http://researchswinger.org/publications/quercia14_shortest.pdf
Roberge, J., & Melançon, L. (2015). Being the King Kong of algorithmic culture is a tough
job after all: Google’s regimes of justification and the meanings of Glass. Convergence: The
International Journal of Research into New Media Technologies. Published online before print.
doi:10.1177/1354856515592506
Varian, H. R. (2013, September 10). Beyond big data. Presented at the NABE annual meeting, San Francisco, CA. Retrieved from http://people.ischool.berkeley.edu/~hal/Papers/2013/
BeyondBigDataPaperFINAL.pdf
Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization.
Journal of Information Technology, 30, 75–89.
Zuboff, S. (2016, March). Google as fortune teller: The secrets of surveillance capitalism.
Frankfurter Allgemeine, Feuilleton. Retrieved from http://www.faz.net/aktuell/feuilleton/
debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html