Veiling Identities

https://www.neme.org/projects/veiling-identities

Resisting recognition

The pseudosciences of anthropometric unique identifiers (AUI), commonly known as biometric identifiers, are deeply rooted in the 18th- and 19th-century eugenics movement, physiognomy, and later phrenology, which were developed to racialise humanity, maintain slavery, and subjugate people in the colonies or in prisons. Physiognomy falsely assumed that a person’s character or personality could be derived from their outer appearance, especially the face. Phrenology held that the shape of the skull was a reflection of the brain and could be studied to predict behaviours such as an inclination towards marriage, poetry, and kindness, or towards theft and crime. The eugenics movement was a set of beliefs and practices that aimed to “improve” the genetic quality of a human population. All advocated selective breeding to “enhance” the human race. These beliefs were eventually adopted by the Nazis and resulted in the Holocaust, but also in many of the racist issues that plague us today. Techniques currently used for identification include iris recognition, facial recognition, fingerprinting, and DNA analysis. It is worth noting that none of the anthropometric unique identifier techniques has been scientifically proven to actually produce unique results.

The case of facial recognition is discussed in the documentary Coded Bias (7th Empire Media, 2020), in which MIT Media Lab researcher Joy Buolamwini exposed to a wider audience how facial recognition systems failed to recognise her face. At the time of Coded Bias, facial recognition, being an AI technology, was already known to be corrupted by biases and badly trained datasets.

Soon after the release of the documentary, two major but questionable trends hit the digital sphere: the NFT craze and the proliferation of on- and offline Artificial Intelligence (AI) applications. Both technologies were, of course, pre-existing. The first artist’s NFT was created back in 2014 by Kevin McCoy, a New York based artist, whereas research on AI dates as far back as the 1930s, and artists have experimented with it since the 1960s. The years that followed saw increases in processing power, the development of neural networks, and the adoption of the technology in all aspects of our lives, including passing through customs, unlocking our mobile phones, and even paying for our food. Regardless of the unproven science, anthropometric unique identifiers are now prevalent, not only in the digital domain but also in our daily lives.

In a number of articles, Zuboff analysed the societal implications of a mutation of capitalism that, she argued, relies on “surveillance assets,” “surveillance capital,” and “surveillance capitalism,” all of which depend on a global architecture of computer mediation that she calls “Big Other” [1]: a distributed and largely uncontested new expression of power that constitutes hidden mechanisms of extraction, commodification, and control, and that threatens core values such as freedom, democracy, and privacy.

AUI have attracted significant attention in recent years due to their rapid adoption in various sectors, including law enforcement, retail, and personal devices. On the one hand, proponents argue that they improve security and serve practical purposes; on the other, critics expose serious ethical issues and technical biases that raise concerns about the implications of their ubiquitous use. This project will address some of the issues pertaining to AUI, including biases in algorithms, privacy concerns, and potential misuse, and present artists who have developed ways to bypass these technologies.

Many academic studies have demonstrated that AUI systems often exhibit significant disparities in accuracy across demographic factors such as race, gender, and age. [2] A 2018 study found that commercial AUI systems were 34.7% more likely to misidentify black women than white men. [3] These disparities have serious consequences, particularly in law enforcement, where AUI technologies are used to identify suspects. AUI not only perpetuates existing societal biases but also leads to wrongful arrests and the further marginalisation of already disadvantaged communities. [4]

The technical limitations of AUI contribute to these biases: many such systems rely on training data predominantly composed of images of white people, producing algorithms that perform poorly when asked to identify individuals from underrepresented groups. [5] Dataset biases, compounded by a lack of diversity among their programmers, raise concerns about the overall reliability of AUI, and particularly of facial recognition technologies: without extensive and diverse datasets, the algorithms cannot accurately generalise across varied populations, resulting in increased error rates and inequitable treatment. [6]

Privacy is another major concern related to the ubiquitous use of AUI technologies, as the potential for mass surveillance and the erosion of civil liberties have become actual threats in our societies. The introduction of ‘smart’ cameras and microphones (by the authorities, but also in consumer goods such as mobile phones) is making AUI more invasive, not only in public but also in our private spaces, and according to Zuboff, individuals will no longer feel free to express themselves without fear of being tracked and monitored. [7] According to Regan, the collection of biometric data raises fundamental questions about consent, as well as about the extent to which people understand how their data is being used. [8] More often than not, individuals are not adequately informed when their images are captured and analysed, a lack of transparency that contradicts fundamental principles of privacy rights (2015). As such, Crawford, one of many critically minded researchers, has convincingly argued that without clear regulations governing data collection and usage, AUI represents a significant threat to our autonomy and privacy. [9]

The world has yet to agree common regulations on the use of AUI technologies, and this exacerbates the inherent risks associated with their use. As various companies and law enforcement agencies deploy AUI systems independently, the lack of a cohesive framework for governance is already leading to misuse and abuse of the technologies. The variability in regulations across countries and jurisdictions increases the potential for unethical deployment, as well as disparities in public trust and accountability. [10] In the EU, AUI use is in conflict with the General Data Protection Regulation (GDPR). While some US cities (e.g. San Francisco and Boston) have implemented bans on the use of AUI by police, such measures are not universally adopted and often face considerable pushback, [11] creating a fragmented regulatory landscape whose uncertainty inhibits progress toward more ethical practices in AUI utilisation.

The potential for misuse or abuse of AUI technologies, especially in combination with the introduction and proliferation of deep fakes, presents significant challenges for society. State, para-state, and criminal actors worldwide may exploit AUI for repression or extortion. According to Lum & Isaac, AUI enables the surveillance of dissenting voices and the targeting of individuals based on their political beliefs (2016), and in authoritarian states, according to Gitelman, the technology can become a tool of oppression rather than one of enhancing public safety or security, as widespread surveillance of free speech and protest activities poses a fundamental threat to democratic values and human rights. [12]

Veiling Identities: Resisting Recognition is a project focusing on the critical techniques used by artists fighting against the digitally powered panopticon built by state and/or corporate actors. It aims to investigate how artists ethically weaponise their knowledge and imaginations to fight against potentially repressive consumer technologies.

Yiannis Colakides

Seminar

In these presentations, we will investigate the intricate relationship between imaging, identity, and scientific representation from an artistic and critical perspective. Sophie Kahn will analyse her own work involving 3D laser scanning, tracing her manipulations to show how they highlight the fragility, loss, and memorial aspects of imaging as a representation of truth, offering instead something more fractured and partial. Aisling Phelan’s presentation will provide an overview of her artistic practice, specifically addressing her use of AI and digital imaging through the lens of her Veiling Identities contribution. Her use of DeepFakes allows her to question today’s surveillance culture, touching on aspects of agency, embodiment, and trust in technological systems, and combining science fiction with scientific critique. Paul Vanouse’s presentation will interrogate the scientific and historical construction of concepts of difference and sameness via DNA as a medium, challenging scientific determinism by emphasising instead the universal humanity that lies beyond these constructs.

Sophie Kahn: Incomplete Vision

Sophie Kahn will discuss her work with 3D laser scanners in relation to other imaging-based surveillance technologies such as LiDAR and volumetric facial recognition. She originally trained as a photographer, and her practice draws on a long history of photographic theory concerned with loss, absence, and the fundamental impossibility of freezing time or capturing truth in an image, an impossibility that persists whether the image is a family photograph or a LiDAR scan used in drone warfare. Her critique is oblique and poetic rather than overt, and stems from her deliberate misuse of precisely engineered imaging tools that promise truth and all-seeing accuracy. By bringing opposing elements into contact, she creates friction that causes these systems to behave in unpredictable ways. The resulting imagery is always fragmented and partial. Rather than smoothing or correcting scan errors, she allows them to remain, reconstructing damaged data to create imagery that feels memorial or even funereal.

Aisling Phelan: Speculative Perspectives on Identity, Embodiment and Technological Trust

Aisling Phelan will provide an overview of her artistic practice, tracing the research, processes, and technological explorations that shape her work in relation to identity, corporeality, and digital representation.

Focusing on her contribution to the Veiling Identities exhibition, Your Face Belongs To Us, Phelan will explore how AI, specifically DeepFake technologies, can be used to critically examine contemporary surveillance. In doing so, the presentation will address the speculative fiction and cryopreservation narratives embedded in Your Face Belongs To Us, connecting these to broader questions about agency, embodiment, and our trust in technology.


Paul Vanouse: Difference and Sameness

“To differentiate” has come to mean more than just separating two things; it has become a virtue synonymous with understanding itself. This valorisation was central to the racial and colonial projects that fuelled European modernisation, helping to create a system of oppositions: insides and outsides, selves and others, rights and rules. The techno-sciences, from anthropometry to fingerprinting to contemporary genetic testing, have been so employed: to expose and nominate minute human differences that might be hidden by “false” resemblances.

For 25 years, Paul Vanouse has utilised DNA as a plastic medium of representation to undermine biologically determinist metaphors of DNA, plentifully incorporated in the sciences, the law, and the broader culture. This work deconstructs the notion of DNA as one’s identity, essence or destiny as well as the underlying assumptions of its discriminatory power, alternatively positing humankind’s “radical sameness.” This talk surveys several of these projects and the techno-scientific histories in which they intervene.

Artists

Francis Hunger, Sophie Kahn, Aisling Phelan, Paul Vanouse

Artists’ bios

Francis Hunger’s practice combines artistic research and media theory with the capabilities of narration through installations, radio plays, performances, and internet-based art. He currently co-teaches the Emergent Digital Media class at the Academy of Visual Arts Munich and is co-editor of carrier-bag.net. He is a postdoc on the Dataunion ERC project at VUB, Brussels, researching the European Union’s database interoperability initiative. From 2021 to 2023 he was a researcher for the project Training The Archive at Hartware MedienKunstVerein Dortmund, critically examining the use of AI, statistics, and pattern recognition for art and curating. He authored, among others, Unhype Artificial ‘Intelligence’! A proposal to replace the deceiving terminology of AI (2023) and How to Hack Artificial Intelligence (2019). His Ph.D. at Bauhaus University Weimar developed a media-archaeological genealogy of database technology and practices; a book is in preparation. In 2022/23 Hunger was guest professor in the Intermedia program of the Hungarian Academy for Visual Arts, Budapest. In 2022 he co-curated, with Inke Arns and Marie Lechner, the exhibition House of Mirrors – Artificial Intelligence as Phantasm at HMKV, Dortmund.

Sophie Kahn is a digital artist and sculptor, whose work addresses technology’s failure to capture the unstable human body.
She grew up in Melbourne, Australia, and now lives and works in Brooklyn, New York.

She earned a BA (Hons) in Fine Art/History of Art at Goldsmiths College, University of London; a Graduate Certificate in Spatial Information Architecture from RMIT University, Melbourne; and an MFA in Art and Technology Studies at the School of the Art Institute of Chicago.
Past residencies include the Marble House Project, Vermont; VCCA, Virginia; Museum of Arts and Design, New York; Pioneer Works, Brooklyn; and the Elizabeth Foundation for the Arts, New York.

Sophie has taught in the Department of Digital Arts at Pratt Institute as a Visiting Associate Professor; at NYU’s Martin Scorsese Virtual Production Center; and at Columbia College, Chicago, as a visiting instructor, and worked at Eyebeam Art + Technology Center.
Sophie has exhibited her artwork in New York, Los Angeles, London, Paris, Sydney, Tokyo, Osaka and Seoul. Her video work has been screened in festivals including Transmediale, Zero1 San Jose Biennial, Trampoline, Frequency, Currents New Media Festival and the Japan Media Arts Festival.

Recent exhibitions include Transfigured at C24 Gallery in New York, Out of Body: Sculpture Post-Photography at bitforms gallery, New York, and Machines for Suffering (solo) at Bannister Gallery, Rhode Island College, RI. Her work has been supported by the Australia Council for the Arts, the Experimental Media and Performing Arts Center at Rensselaer Polytechnic, and other private funding bodies. Her work is held in public and private collections in the United States and internationally.

Her public artworks have been commissioned by the Art Students League/NYC Parks, Google, and Lincoln Center. She is a New York Foundation for the Arts Digital and Electronic Arts Fellow.

Aisling Phelan is an Irish multidisciplinary artist exploring digital doubles, speculative futures and human machine interactions and entanglements. Through 3D animation, AI, video, sculpture, and live interactive technologies, her work explores what it means to be human in an era of rapid technological advancement and pervasive algorithmic influence.

Drawing from a transhumanist and speculative fiction perspective, Phelan explores how far we are willing to go in the pursuit of self-optimisation and the potential costs of such advances. Fusing the intimate with the artificial, her practice confronts the seductive promise of transcendence and enhancement, creating space for reflection on the role of current digital infrastructures in shaping how we understand ourselves and others.

Paul Vanouse is an artist, SUNY Distinguished Professor and founding director of the Coalesce Center for Biological Art at the University at Buffalo. Interdisciplinarity and impassioned amateurism guide his practices. His artwork employs molecular biology techniques to challenge entrenched notions of individual, racial, and national identity, and the cultural authority of DNA. His work has been supported by Creative Capital Foundation, Rockefeller Foundation, National Endowment for the Arts and New York Foundation for the Arts and has been exhibited in over 30 countries and widely across the US. His multi-sensory artwork, Labor, was awarded a Golden Nica at Prix Ars Electronica, 2019. His recent book, Difference, Sameness and DNA: Investigations in Critical Art and Science chronicles over two decades of his artwork and scholarship.

Venue

NeMe Arts Centre, Corner of Ellados and Enoseos streets, 3041 Limassol, Cyprus

Programme

  • Opening: Friday, 9 January 2026, 19:00-21:00.
  • Seminar: Saturday, 10 January 2026, 19:00.
  • Exhibition Duration: 9 January – 6 February 2026.
  • Exhibition opening days/times: Tuesday-Friday, 17:30-20:30.

Notes

  1. Shoshana Zuboff. "Big other: surveillance capitalism and the prospects of an information civilization." Journal of Information Technology, 2015.
  2. I. D. Raji & J. Buolamwini. Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 2019.
  3. J. Buolamwini & T. Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency (PMLR 81), 2018.
  4. C. Garvie, M. Hamm, & A. Henry. Equitable Facial Recognition: A Study of the Policing and Surveillance Implications of Facial Recognition Technology. Center on Privacy & Technology, Georgetown Law, 2016.
  5. J. Dastin. "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women." Reuters, 2018.
  6. A. Howard. "The Limits of Facial Recognition: Assessing the State of the Technology." AI & Society, 2020.
  7. S. Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019.
  8. P. M. Regan. Legislating Privacy: Technology, Social Media, and the Law. Springer, 2015.
  9. K. Crawford. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021.
  10. R. Kitchin. The Data Revolution: A Critical Analysis of Big Data. SAGE Publications, 2020.
    K. Lum, & W. Isaac. To Predict and Serve? Significance, 13(5), 14-19, 2016.
  11. Kate Conger, Richard Fausset, Serge F. Kovaleski. "San Francisco Bans Facial Recognition Technology." NY Times, 14 May 2019. nytimes.com/2019/05/… o.html
  12. L. Gitelman. Raw Data Is an Oxymoron. MIT Press, 2013.

Funding

This project has been partially funded by the Cyprus Deputy Ministry of Culture. This communication reflects the views only of the author, and the Cyprus Deputy Ministry cannot be held responsible for any use which may be made of the information contained therein.