On the 25th of January, 2018, scholars and individuals from inside and outside Royal Holloway’s Department of Media Arts came together for the first Curating Machines workshop and lecture with artist Erica Scourti. In this post, I have tried to capture, in my own words, something of the day for those who missed it.
As an opener for her workshop, Erica asks participants to make a note of their last three Google searches and place their bit of paper in the middle of the floor: “What’s a mooncup?,” “Donald Trump,” “open dialogue,” “Novogenix fetal tissue,” “suicide prevention” followed here by “for a friend,” and “Should I ask him out on a date after having sex?” read a few of them, disclosing that the year is, indeed, 2018. Erica’s workshop – like all others in the Curating Machines series – will last three hours, with the artist talking, in this case, a little about examples of her work and getting us to use some of the tools she employs in her practice, while thinking, at the same time, about a number of questions central to it.
One of the first things that she brings up, as she asks us to perform a Google image query using a picture from our camera rolls, is Harun Farocki’s concept of “operational images.” This concept is important as it attempts, among other things, to describe the way in which, in algorithmic image retrieval, images become, for machines, something that can be read and analysed for information. Erica here quotes Benjamin Bratton’s “Machine Vision”: “Machine vision is arguably the ascendant ‘ocular user subject’, not the human,” he argues. Machines are now everywhere beginning to see; colours, shapes, patterns and textures are all readily recognised by them. Having uploaded a photograph of my plant taking a bath – close-up and all misty – onto Google, I’m surprised to find that my search comes back with nothing but images of plants of the same species as the one in my picture. I’m not used to experiments like this one, and the engine proves to be far smarter than I imagined. And yet the ‘tag’ Google offers to describe my photograph is “leaf,” which is a lot more generic than the ‘related images’ the same tool has identified. Is its vision better, smarter even, than its use of words? What is the connection between the two? These are all questions Erica wants us, I think, to consider. “Who decides on the text?” she asks.
In her own work, Erica has used Google’s reverse image search in a number of different ways. As part of So Like You – an experiment from 2014 she shares with us – the artist uploaded a set of old, personal images for analysis, and proceeded to contact a few of the strangers whose images had a “similar visual footprint” to her own. She then asked the individuals who responded to provide her with a third image, one they thought was similar to the initial one from her own set. That third image, filling in the gap formed by the distance between the two pictures which had originally connected the artist to that other person, now served as a conscious, truer link between them. In So Like You, then, a group of people, first “connected by visual algorithms” alone, become linked in actuality via a chain of emails and a creative process. “For me,” Erica says in a post about the work published by The Photographers’ Gallery, “this tension between similarity and difference, singularity and multiplicity, authenticity and faking…is one of the hallmarks of socially-networked existence.”
As she asks us to perform a different exercise, one beginning with text this time, she talks about another of her works, Dark Archives, from 2016. The term ‘dark archive’, she tells us, refers to archives which, in contrast to ‘light’ ones, cannot be accessed, despite the fact that they exist. These restricted-access, “shadow” archives of metadata function, she further explains, as emergency ones, to be accessed in the event that their publicly available double is lost. To the extent, however, that dark archives are not accessible, they are, she remarks, invisible.
This “invisibility” is one of the things that Erica’s Dark Archives attempts to investigate. The work began, she describes, with her uploading a large, personal media archive to Google Photos – a restricted-access sharing and storage service – thus creating a dark archive of her own. She then shared her archive with five writers she had never met before and – taking advantage of the service’s automatic organisation and labelling feature – asked them to search through it using keywords of their preference so that they, as a next step, could produce a video using the images that their search returned. The work gets, here, even more complicated: having produced a video, the writers were now asked to imagine the missing links or media from their set of images, and to come up with captions to describe the missing data. Closing the feedback loop, Erica finally used these captions to search through her archive again, and to create another series of videos, combining the captions with the images they now returned. The question of invisibility, initially asked in relation to the problem of access, now took on a different hue. Problematising the way in which Google Photos had organised the uploaded archive of media by coming up with alternative groupings, Dark Archives suddenly raised a more complex question: “What escapes intelligibility?” This is something we must think about, “especially as,” in Erica’s words, “these machines become smarter and smarter.”
During the workshop and in relation to all this, she quotes art historian David Joselit in “Against Representation”: “Since right now almost anything can be monetised or rendered as information,” he argues, “we are all harvested and profiled as information-capital. Occlusions and opacities might be a means of protecting oneself from such economic forms of alienability or alienation.” That something can escape intelligibility is, then, not always a bad thing. “If you are fully transparent,” Erica says, “you can always be read by power.”
Issues of privacy, visibility, and the self are ones that Erica reflects upon again and again – “Could you make fictions out of absences?” she speculates with her ghost memoir The Outage – and to which she returns also during her Curating Machines lecture.
In the one hour she speaks for, she manages to say and do so much that it is impossible for me to communicate it all here. One of the many points she makes, and one which has stuck with me still, concerns emotion. In a discussion of her own Screen Tears, she begins by referring to a long tradition in art of representing emotion, with examples here including Bas Jan Ader’s well-known I’m Too Sad to Tell You and works by Andy Warhol. The latter, she argues, is particularly good at illustrating how “anything that’s reproduced again and again loses its individuality.” I find this very interesting. Earlier in the day, while talking about memes, Erica had replaced the image, in her presentation, of one featuring Donald Trump with the caption “Donald Trump meme;” she refused to show the actual meme on the grounds that “every time you replicate an image, you replicate its power.” Emotions, it seems, escape this logic – always caught, as she says, when represented, in a tension between performance and authenticity. “Intimacy,” she maintains at a different point, “is lost with the sharing of the private.” How can we know what the limits of sharing are or should be? Of course the self is, as Erica points out, itself always doubled in representation. In selfies the author becomes simultaneously “both image and witness,” and all of his or her products acquire, when shared online, a logic of their own and “a meaning outside of the author.”
Also incredibly interesting is the point Erica makes about the logic behind cloud-based assistants like Alexa and Cortana. Calling attention to the fact that such assistants most often come with a female name and voice, and are thus gendered as such, she describes how they, indeed, perform “roles of assistance that used to be fulfilled by mothers, wives, and secretaries.” Traditionally, she explains, women have been thought to be ideal for this kind of labour, for – being perceived as less intellectual than men, a kind of “inert matter” – they appear to possess a greater capacity for uninterrupted “mediation.” Despite all our alleged progress, women are, it seems, still spoken through. The more Erica speaks and shows, the more I learn, the more I fall in love with her practice…
In the final part of her lecture, she touches upon the apparent distinction between humans and machines, only to blur the boundaries once more. In what she calls a context of “socially-networked experience,” machines and algorithms now influence and participate in their users’ self-construction. “I am interested,” she says in an interview with Annet Dekker, “in how the technologies that we are entangled with are recording and archiving our lives, in particular in the traces that we make all the time without being necessarily conscious of it.” Not only this, but all these invisible entities and structures have, in fact, a material existence – something which is at odds with the way we’re used to thinking about them. “All this data,” she says, “is stored somewhere” and, not unlike humans, “consumes energy,” albeit in the form of electricity. Quoting Mel Y. Chen’s discussion of Jane Bennett’s Vibrant Matter, Erica affirms that “affect” must be extended “to nonhuman bodies, organic or inorganic” and understood always as “part and parcel, not an additive component, of bodies’ materiality.”
– Lilly Markaki