James Parker: So thanks so much for joining us, Santiago. Could you maybe begin by introducing yourself just briefly, however feels right to you?
Santiago Rentiera: Well, yeah, I'm from Mexico, Mexico City, and I'm living in one of the most isolated capital cities in the world. And I also think it's a great place to be in Australia, despite the time differences and all the logistics implied in flying in and out. But I got a scholarship at the University of Western Australia. This is an ARC scholarship on a project that is researching the cultures of automation. My research is mainly concerned with how automation is impacting the way that we manipulate and standardize representations of non-human sound.
Santiago Rentiera: So mainly I am using Australian magpie sounds as a case study, because I think it's a super cool bird that can do so many things that I thought birds couldn't do before. They are very social. They are also complex vocalizers. They can mimic. And so this inspiration from biology is a guide in my intellectual journey. Besides that, I have also been researching the intellectual history of listening and how this can be mapped to the scientific methods in bioacoustics.
James Parker: So, I mean, that all sounds amazing. I'm tempted to just say, let's talk about magpies for a bit, but we'll get to that. We'll get to that. Can you tell us a little bit about how you ended up in that place, you know, working on this? What's your background, intellectually, institutionally? I mean, are you, you know, an artist first and foremost, or, you know, a computer scientist?
James Parker: What's the training or the institutional formation that lands you here rather than somewhere else?
Santiago Rentiera: Yeah, I think it's a little bit funny, because I'm this kind of multiple-hat type of person, or maybe a no-hat one. Well, my starting point was music, because my bachelor's degree is in music, and also due to the influence of my family, because I come from a family of formally trained musicians. So I had this musical enrichment in my childhood, and that influenced my artistic perspective.
Santiago Rentiera: But curiously, as my parents wanted me to be this kind of concert musician, I ended up in the sciences, and now doing something that is more in between. So I did music and production engineering and then moved to the computer science field, to a master's in computational science, where two mentors instilled in me the practice of listening to birds. One of them is from UCLA, and the other one now rests in peace. But I am very grateful for his drive and impulse in the study of birds.
Santiago Rentiera: So I think that's one of the main reasons I ended up doing birds, because before my master's, I was thinking about doing something more like music interface design, or this creative computing field. And with the master's I discovered this field that uses algorithmic techniques and sonic methods to study the communication of birds.
Santiago Rentiera: And yeah, that's where I came up with this model, the artificial neural network model, which is a few-shot model capable of dealing with very small data sets. That's one of the common problems with some of these species: they don't have labels, or there are very few recordings that are actually labeled. So yeah, that's how I ended up doing cross-disciplinary bioacoustics and computer science.
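The few-shot setup Santiago describes can be sketched, in spirit, as a prototypical-network-style classifier: embed the handful of labeled clips, average them into one prototype per call type, and label new clips by the nearest prototype. Everything here (the embedding size, the toy data, the function names) is illustrative, not drawn from his actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

def prototypes(support_emb: np.ndarray, support_labels: np.ndarray,
               n_classes: int) -> np.ndarray:
    """Mean embedding per class from the few labeled 'support' clips."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb: np.ndarray, protos: np.ndarray) -> np.ndarray:
    """Assign each query embedding to its nearest prototype (Euclidean)."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None], axis=2)
    return d.argmin(axis=1)

# Toy data: 2 call types, 3 labeled clips each (the "few shots"),
# represented as 8-d embeddings clustered around different centers.
support = np.concatenate([rng.normal(0.0, 0.3, (3, 8)),
                          rng.normal(2.0, 0.3, (3, 8))])
labels = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support, labels, 2)

queries = np.concatenate([rng.normal(0.0, 0.3, (4, 8)),
                          rng.normal(2.0, 0.3, (4, 8))])
pred = classify(queries, protos)
```

The appeal for sparsely labeled archives is that adding a new call type needs only a few labeled examples to form a prototype, rather than retraining a large classifier.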
James Parker: Could I just rewind back a little bit to your supervisors, to that context? So was it that you arrived in a computer science department with a musical background, and you just so happened to find that in this computer science department there were already people working, effectively, on machine listening, whether or not they were calling it that? That there was something sitting there ready and waiting, and you were kind of inducted into it? Is that what I've understood? Is that right?
Santiago Rentiera: Well, I think I simplified it a bit, because actually getting into that master's was a bit of a journey of talking to people, because I was coming from music. When I entered into computer science, I was auditing classes on different topics of computer science during my bachelor's. So then I kind of had to gain the trust of the people, you know: I know what I'm doing. That's how I met the computer science researchers at Tecnologico de Monterrey. This is the university where I did my master's in Mexico.
Santiago Rentiera: And one of them invited me to apply to this master's program, funded by the Mexican equivalent of the ARC. And there, two of my supervisors had this long ongoing project connected to research at UCLA. The UCLA researcher is Charles Taylor. He started this group, his lab, for the study of birdsong, using sonic methods to develop techniques to map and understand sequences.
Santiago Rentiera: And then Edgar Vallejo, who was my supervisor in the master's, was, I think, the only one at that time in the faculty who had a project crossing biology, acoustics and machine learning.
Santiago Rentiera: So, yeah, it was pretty unique. I wasn't even expecting that, because you apply to the program but you don't choose your supervisor until you submit your proposal, like six months after. So, yeah, I didn't know Edgar before.
James Parker: So what sort of year was that?
Santiago Rentiera: I think I finished the master's around 2019.
James Parker: Okay, so this is all post deep learning, right? You know, this is quite an important period in the history of machine listening, where suddenly everybody's transitioning across to machine learning techniques en masse. That's precisely the moment you're doing it, is that right?
Santiago Rentiera: Yeah, well, I wasn't very aware of the field itself, because everything was machine vision, and even people in the department treated classification of bird sounds as a problem of machine vision, because they pretty much turned the sound into spectrograms and that was like, oh well, you just have to read it as an image. They actually didn't have listening skills, so it wasn't listening at all. For me it was just doing this data analytics.
Santiago Rentiera: I think the moment when I began studying that historical progression, when it became listening or when it stopped being listening, came when I wrote my PhD proposal, where I had to actually make the argument that there's a gap in knowledge, and that there's this transition from sonic methods and notations to spectrograms, which no longer require, you know, certain listening skills.
James Parker: I do want to move on, but can I ask just one more question about that period, if that's okay? Because I'm just really interested in, I guess, what it felt like, or what the institution's understanding of that kind of work was. Was it that you guys understood yourselves as doing something extremely marginal and esoteric at that time, and now it's less so? Or was this an introduction into an enormous field of bioacoustics? You know, you mentioned the changing techniques.
James Parker: But I'm just interested to know when a field feels new, or part of a longer history, or how it felt to be in the field at that particular time. Because the context for that question is that we've been trying to track how machine listening fits in relation to the broader field of bioacoustics.
James Parker: Basically, one thread is that it seems like people in the very early days of machine listening were interested in, for example, whale song in particular, and it's just never been quite clear to me: have birds been there all along too? Or is this a new thing that really only emerges in seriousness in the last five or six years? How did you understand the work that you were doing at that time, as a collective, working in the university together?
James Parker: Does that make sense?
Santiago Rentiera: Yeah, yeah, well. The university had three areas of computational science. One was kind of operations research, management, data science, which was more about using the methods for business. The second one was adaptive systems and more bio-inspired techniques. And I think the last one was just general machine learning, which was mostly dedicated to tasks of vision and anomaly detection.
Santiago Rentiera: So the one my group fit in was the bio-inspired one, and there were a bunch of researchers there working or collaborating with the medical institute on problems like medical machine learning, the detection of cancer, or other kinds of diagnosis techniques that relied on big data. And there was also a very interesting group doing bioinformatics, which was one of the most solid groups the division had.
Santiago Rentiera: They were pretty much doing sequence analysis, alignment, a bunch of different techniques that I think are generalizable to the study of other types of sequences, like linguistic sequences or sequences of sounds. So yeah, I was the only one, with Edgar and his other students, taking this approach to biology using computational methods. But I think machine listening was the core component: they pretty much used listening as a medium for extracting information that wasn't available to the other senses, to the cameras, due to the occlusion of some of the birds, or because some of the birds are too small, or, you know, all those challenges.
James Parker: Should we turn to magpies? Could you tell us a little bit about the magpies project? There's a lot to cover. I mean, I was really interested when you said at the beginning that magpies are particularly interesting, or I think you might have even said Australian magpies. And I'm aware that yours are Australian magpies, but I don't really know much about the broader magpie world, so I don't know where you think the right place to begin is in terms of describing the project.
James Parker: Maybe flesh out that point about magpies in relation to other birds, or, you know, wherever you'd like to begin. I just want to talk about that project, really.
Santiago Rentiera: Yeah, well, my first encounter with magpies was a friend advising me not to feed them when I arrived in Australia, followed by this anecdote of a friend of a cousin or whatever that got swooped and ended up in hospital. So it was like, yeah, these birds are evil. And that was my first image. But I haven't been swooped yet, so I don't fear the magpies. And I think that actually made a natural difference when I encountered the project, which was pretty much chance, because I had just arrived here in Perth.
Santiago Rentiera: And then I met Amanda Ridley. She has been researching behavioral ecology, doing observation of different animals and birds in the wild. And one of the projects was the magpie research project. They are interested in the comparative behavior of magpies because it is one of the birds that breed cooperatively. So they don't necessarily split into pairs or couples and take care of their own kin; they gather in groups that might not necessarily be kin, they take care of each other, and they do the breeding.
Santiago Rentiera: So that's one of the aspects that motivates their research. And when I first met Amanda, she was interested in using machine listening to observe and extract meaning from all these recordings that couldn't be manually screened or listened to, in order to support some of the hypotheses that they had, which is that magpies can combine different types of calls and then compose meaning that helps to regulate the behavior of groups.
Santiago Rentiera: So this kind of group complexity is in a way the pressure that creates the need for a complex communication system, which is the use of calls. So for instance, you have an alarm call and a recruitment call, and combined, this could be like a mobbing call. Or you have a call for a ground animal and a call for alarm, and combined this could be like: someone is running on the ground. So those are, I think, the studies that they are interested in.
James Parker: Yeah, can I jump in there? So it sounds like you're saying that there was already an archive of audio that had been collected by, I guess, were they even bioacousticians? Or were they just people studying magpies for whom audio was one obvious way of studying them? And if that's the case, how were the recordings being collected? I've done some reading that suggests passive acoustic monitoring is quite a common phrase now.
James Parker: And, you know, there are lots of pictures of these low-powered microphones attached to trees, and they're meant to be sort of discreet, to blend into the environment. So I'm just wondering what form of data collection was going on, and into which you entered?
Santiago Rentiera: Yeah, I didn't do the data collection. I received the recordings and a set of annotations that, well, I cannot fully disclose, because the data set is meant to be for private use, at least for now. But the way that some of them were recorded wasn't necessarily passive, because they have to habituate the magpies, they have to create a community, and they have to ring the magpies so they can identify which individual is singing, because it's relevant to know if the individual's repertoire of calls is changing across time.
Santiago Rentiera: So they do these longitudinal studies, and it is also relevant to see how they interact with each other. So sometimes they respond to the calls, and then you can associate the recordings and see if they respond to the same type of call with another familiar call, and so on.
Santiago Rentiera: So, yeah, I wouldn't say it's passive, and previous studies by the same group have also developed their own communities and interacted with them in order to perform other types of tests, like cognitive testing, where the magpie has to decide between two types of food or two types of stimuli, you know, associate colors and so on.
James Parker: So yeah, and so you arrived with this archive of recordings and you're told, not being a magpie expert yourself, such and such a recording is an example of a magpie doing or saying, quote unquote, this. I think you said a ground predator or something? So, just to be clear, there's a whole kind of acoustemology of magpie calls, and there are people who, and I don't mean this cynically, claim to know, with quite a large degree of precision, what magpies are trying to communicate through their vocalizations.
Santiago Rentiera: Yeah, I think I should have been a bit more precise, but there's a big discussion on that: on the meaning, on the causality of the calls, and, you know, on radical interpretation, because we are not magpies, so we are just collecting this lexicon, a bunch of sounds. And there is also the problem of segmenting them and defining what the sound atom is, because there are no phonemes. So that's a big issue.
Santiago Rentiera: And, yeah, they don't claim that the magpies really mean that; they just associate some behavioral observations with the sounds that are produced and the context, so it could mean a bunch of different things. That's why they are trying to use the machine learning methods to assess whether there's some significance to the kind of correlation that they are making, or whether the observations cannot justify the hypothesis of those calls meaning this or that from a behavioral point of view.
James Parker: So this is super interesting. I mean, obviously that's a big question, but a lot is at stake in it, and it's really important for thinking about the role of machine listening in relation to this. Are you saying that machine listening is being invited to, like, confirm or disprove existing theories about magpie calls?
Santiago Rentiera: It was.
James Parker: Was that the invitation? Like: we think XYZ, would you be able to run the data and produce some kind of response to the existing theory? Is that what you're saying?
Santiago Rentiera: Well, the first invitation was... first, I wanted to experiment with the archive; I didn't necessarily want to do scientific research with the archive. So my interest was: let's use this archive for something. And that's where, probably later, we'll talk more about the anarchival use, which for me is an anarchival use because I completely disregard the scientific part of it.
Santiago Rentiera: For them, the focus of the research is that they study the call combinations, and they have been using different computational methods to give empirical support to the hypotheses: do magpies combine sounds meaningfully, what do the combinations mean, or are there sonic variations that have behavioral correlations or cues?
James Parker: Yeah, I mean, it sounds like a really important segue into your work, because your work, which we'll get to in a second, is a kind of rejection or critique of that, or at least a bypassing of that as the framing question, it seems like. I mean, to give another example, I read a paper recently, I can't remember where the sheepdogs were from, but there was a group of scientists trying to train a neural net to recognize sheepdog vocalizations. So they had the sheepdogs in a sort of agitated state and a friendly state; there were like seven different sheepdog states. And then, yeah, they made all these recordings.
James Parker: And then the idea was that they would be able to have automatic identification of like sheepdog vocalizations. And in the paper that I came across this from, the idea was that this would be used to produce sheepdog vocal synthesis in order to be able to communicate with sheepdogs. And then that was being used to make an argument that we could in principle use animal vocal synthesis to communicate with any animals. And we would do this in the context of the quote unquote fight against climate change.
James Parker: And so there was like a sort of a cascading logic being unfurled where you begin with, well, I think I know that that dog sounds a bit angry, to well, we can probably work out what dogs mean. And if we can work out what they mean, we can work out how to speak to them. And then we can communicate with them at scale through automated and embedded like microphone and network systems throughout the environment. And so, yeah, that was a bit of a wild paper to come across.
James Parker: And it sounds a little bit like you might have a similar kind of critical impulse or something towards that kind of literature, or that search for meaning, or the presumption that there's a kind of meaning there, and the kind of logics, or the pathways, that that kind of thinking takes you down.
James Parker: So it's a big question, the question of whether animals mean or communicate, but it sounds like you have a position and that your work is kind of inspired by or motivated by thinking about that in some level, or perhaps I have it completely wrong and there's a totally different motivation for your work, but maybe let's segue into talking about your work at the very least.
Santiago Rentiera: But do you want me to give more of an opinion on that, or?
James Parker: I don't know, like, if that's a factor. It sounds like you have an opinion; I could be wrong.
Santiago Rentiera: Oh yeah, well, in the last email that I sent there was this question that I keep getting since my master's thesis, which is: what if we could develop this big data translator of the sounds of animals, and then speak to the animals? This is kind of like the Solomon's ring, or Solomon's seal, power.
Santiago Rentiera: I was going to write a paper on that and then I didn't, which is like this kind of myth of speaking to the animals and also thinking that animals do these assemblies and have their politics, the conference of the birds and all these myths, and how these myths are kind of re-enacted in the computational scene, and used as narratives and arguments to convince that we can gather more data and ultimately translate the animals.
Santiago Rentiera: So I think there are nuanced views on the bio-semiotics of animals, and how we can talk, to a certain extent, about indexicality, and perhaps not symbols, but there's some indexicality there. There are also methodological limitations, I think, on saying that animals have language, and the way that language is actually studied by linguists is a different kind of thing from what the animal communication people are doing, even if there's this field in between called animal linguistics, which is where all these compositionality hypotheses are happening.
Santiago Rentiera: But I think you can see the same in the study of DNA in bioinformatics, where the letters of the genes were combined to mean something. So I think the narrative is very broad, and the extent of meaning, and what we mean by meaning, is something very vague that I don't think I'm engaging with directly, because I'm not doing philosophy of meaning.
Santiago Rentiera: So for me, I think what I want to do with the work is stage these encounters with the machine that can synthesize sound or that can listen, and question this idea that we will be able to ultimately translate something, and also open up the notion of the archive, of these collections that don't have a clear index, or whose index is completely arbitrary, because, you know, the encyclopedic drive to classify things didn't succeed.
Santiago Rentiera: And now we are seeing these models that are being fed uncurated data and creating all these kinds of outrageous responses from the public.
Santiago Rentiera: That's why I think archives are interesting. I would like to engage more with the archive, also as a way of accessing collections and information, and to dispel this myth that the AI can understand or mean anything, which I think is a broader goal than what the magpie research is about.
James Parker: Start with magpies, and then you take down AGI.
Santiago Rentiera: Exactly.
Santiago Rentiera: The magpie is actually a fascinating animal that I think should be respected in its own right. Sometimes I have these conversations with the AI people where they are like, we'll have bird-level intelligence, and it's like, what the hell do you mean by that? That's why you need to be in a biology department, and then they know that they won't be able to classify all the calls, because they keep inventing new calls. Some of the birds can mimic other birds, and they can just create repertoires endlessly.
Santiago Rentiera: They also sing, and they might be singing for pleasure, so there's no ultimate evolutionary explanation, you know. It's a big mystery, and I think one needs to be a little bit more humble instead of having these universal explanations of things.
Joel Stern: Santiago, can you just say a little bit more about inventing and composing new calls and singing for pleasure? Because I'm just thinking, obviously, your training as a musician comes into play when making a claim like that, potentially, but also the resistance to indexing and classifying something that is invented, and possibly has an aesthetic dimension rather than being purely semiotic, comes into play there, too.
Santiago Rentiera: Yes, surely. I think I'm perhaps abusing my aesthetic appreciation of birds. I don't think I necessarily appreciate them because they intend to be appreciated as bird musicians, but I think it's an interesting mode of thinking. Why do we create music? There's this evolutionary argument that music plays this social cohesion role, and then one can translate this onto the birds, that they are bonding with sound. But there's also the argument that when we sing or when we are playing for ourselves, it's like this loop and entrainment with our own voice.
Santiago Rentiera: So I think that it doesn't have to have any meaning, like music. Music doesn't have to point to something beyond music; music can be sound in itself. But then there's also music as communication. So I guess I resist committing to one idea. And also, you mentioned this resistance to classifying and indexing things. I think that also applies to music, like when we have all these classifications of styles and genres, and what Spotify is doing.
Santiago Rentiera: It's similar, to some extent, to what libraries like the Cornell Bird Archive are doing, trying to archive and classify all the sounds and then correlate them. So yeah, my concept of an archive resists that universal knowledge approach.
James Parker: So let's talk about the work then. I don't know how you think of it. Is it an art project or a form of... I don't know, maybe it doesn't matter how you think of it, but perhaps you could describe a little bit about the project, however you feel is right.
Santiago Rentiera: Oh, yeah.
Santiago Rentiera: Well, the project consists of one part on the intellectual history of listening and bioacoustics, where I trace a history of listening techniques and also reproduction techniques like whistling. There are a bunch of interesting papers on how imitation whistling was used artistically, but also as a way to record the songs of the birds of the environment, and it was also used by hunters to call the birds, attract them, and then hunt them. In a way, that was inherited by the ornithologists when they wanted to attract a bird in order to observe it, or potentially to capture and kill the bird in order to study its organs. So you had that kind of listening skill, but also sound production skills. That's one part, that line of history, and how now there's this embodiment of listening and also vocal production.
Santiago Rentiera: And the other aspect is how you can use these same techniques to create and experiment with the limitations, or the boundaries, of archives: what cannot be recorded in the archive, what escapes recording, in terms of meaning and also in terms of expressivity. So one of the ideas is the vocal puppetry metaphor, which is about training a machine learning technique to synthesize the sounds of a bird, but instead of using it to reconstruct the sounds, using it to match vocal templates, like whistles, to the closest sound in the archive.
Santiago Rentiera: So it's a way of doing information retrieval that, instead of using a verbal index, like tags or words in a search engine, uses something non-verbal.
James Parker: Just to be clear, I would hum a melody or whistle a tune or sing Bohemian Rhapsody, and the algorithm would produce a kind of mimicry, something that is in magpie, quote unquote, though I don't know exactly what I mean when I say in magpie, like, what form of index or something is being produced. Because it's synthetic, so it's not an indexing, is it? It's not like a matching, like here's a particular piece of audio that is being pulled out and then paired or replaced. It's a new, never-previously-existed magpie sound. Is that right?
Santiago Rentiera: Yeah, well, that's kind of the idea. I wouldn't say it's translating anything, but maybe I could play with that in an artistic way, an absurd way. But what is happening there is, they call it timbral transfer sometimes, or they call it reconstruction. With these models, variational autoencoders for instance, you pretty much approximate a function that can reconstruct the input.
Santiago Rentiera: So it's pretty much like compression, but instead of reconstructing the input that was compressed, you are using another sound to query a chunk of this compressed representation and then retrieve it. And that's what creates this kind of mismatch between what the device was designed to do and the meaning that we attach to it. We think that perhaps the easiest way of explaining it is as a translation, but it's not a translation.
Santiago Rentiera: It's some kind of decompression process that will give you the closest match in terms of the objective that the machine was trained on, because this was trained on a similarity objective: the reconstruction has to be as close as possible to the original one. So that's what the machine will do in the end. It will try to give you something that is similar, using the bits and pieces of the compressed magpie sounds.
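The retrieval step Santiago describes, querying a compressed representation with a foreign sound such as a whistle, can be sketched with a stand-in encoder. In the real system the encoder would be a trained variational autoencoder over the magpie archive; here a fixed random projection is enough to show the logic, and all names and numbers are illustrative rather than from his project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoder": a fixed random projection of averaged spectrogram
# frames. A trained VAE encoder would play this role in practice.
PROJ = rng.normal(size=(128, 16))

def encode(spectrogram_frames: np.ndarray) -> np.ndarray:
    """Collapse a (frames, 128) spectrogram into one 16-d latent vector."""
    return spectrogram_frames.mean(axis=0) @ PROJ

def nearest_in_archive(query_latent: np.ndarray,
                       archive_latents: np.ndarray) -> int:
    """Index of the archive sound whose latent is closest to the query."""
    # A reconstruction-trained model tends to place similar sounds near
    # each other in latent space, so distance is the natural query.
    return int(np.linalg.norm(archive_latents - query_latent, axis=1).argmin())

# Toy archive of three "magpie calls" with distinct spectral centers,
# plus a whistle query whose spectral shape resembles call 1.
archive = [rng.normal(loc=i, scale=1.0, size=(50, 128)) for i in range(3)]
latents = np.stack([encode(s) for s in archive])
whistle = rng.normal(loc=1.0, scale=0.1, size=(50, 128))

best = nearest_in_archive(encode(whistle), latents)
```

The "decoding" half, resynthesizing audio from the retrieved latent, is where the mismatch he mentions lives: the model can only return the nearest thing it was trained to reconstruct, not a translation of the query.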
James Parker: And what's the critical or artistic purchase for you? I mean, you've talked about the anarchive and you've already gestured at it, and it's very crude to say, like, payoff or point, but what is it that you think this process of vocal puppetry is showing, or opening up, or examining?
Santiago Rentiera: Yeah, well, perhaps when people think about the archive, they just mean a random collection, or maybe the theorists will try to map it to Foucault, you know, this system of transformation of statements. And I think there's also another word, a similar concept, from Zielinski, which is the anarchive. I don't think I'm using the same concept, because his has more of a political undertone of memory, like the memories that were not officially called archives by the institutions.
Santiago Rentiera: So I think my archival approach is closer to the techniques, which is more media-archaeological than, I guess, historical. I'm more interested in experimenting with the accidents of the indexes, and in retrieving sounds using non-verbal inputs and non-logocentric approaches that don't require this tagging, but also in ways that we can challenge the idea of the voice as something that has to always be human. We have this vast research on speech synthesis that is pretty much about the human.
Santiago Rentiera: It's all about humans, so it's very narcissistic. And I think exploring what the non-human of the voice is, using other archives that are not necessarily human ones, can perhaps also add to this discourse on the non-normative voice, on what is not original but is generative, in a way that transcends the archive as something that encompasses everything. Yeah, I think I still have to think about what the artistic import is in itself, because I don't want it just to be this data translation experiment, right?
Santiago Rentiera: Which is what we risk if we just use it as another filter. Like, you know, you could sell this as a magpie filter to TikTok and people would do silly things with it, but-
James Parker: Karaoke clubs.
Santiago Rentiera: Yeah, exactly.
James Parker: We'll take it up en masse.
Santiago Rentiera: So, yeah, I think maybe it could also be used for ways of listening to other archives that don't require us to input things like words, or specialist knowledge, and then access the birds and enjoy the sonic complexity of the bird using other means that are not words. So, yeah.
James Parker: It's kind of anti-taxonomical, but like on quite a fundamental level, right? It's like, it sounds like Joel's point from before about, you know, the musical orientation or background is like really prominent here. Like, there's some level that there's a kind of a critical resistance to the kinds of things that all of the bioacoustics- But yeah, but it's also returning the data set to something sort of sensory, like it's like a sensory ethnography approach to this sort of, you know, bioacoustic data.
Joel Stern: And, you know, that's one of the things we've been thinking about a lot too, you know, recent sort of projects in which we've been listening to data sets, that just how it is quite radical to actually as a human sit and listen to these sounds at some length when, you know, they have been amassed and accumulated and not necessarily with human listeners in mind.
Joel Stern: And what we kind of can understand and sort of experience through that sensory interface is something different from an indexing kind of interface, even if there's some overlap and even if as human listeners, we're sort of categorizing as we listen in certain ways, there are other things going on. But I was just thinking about that process of tone transfer because obviously, you know, Google with like magenta DDSP and have been applying this sort of technique as an effect for transferring from one instrument to another instrument.
Joel Stern: So it's a sort of, and in your, I was listening to your SoundCloud examples and it was great because there was, you know, the whistle and then the magpie, the whistle and then the magpie, and then quickly it's followed by the magpie beat box. So, yeah, I'm just wondering if you can say a little bit more about the kind of experimental sort of horizons of these sort of techniques, like what are some of the ways that you imagine applying these sorts of techniques in kind of non-indexical, non-taxonomical ways?
Santiago Rentiera: Yeah, I like how you frame it as it's very sensory, almost auto-ethnographic approach of your own way of collecting bits and pieces. And yeah, it's totally anti-taxonomical. I think I like to do more stuff that involves less classification, you know, even if that's what I'm usually getting, like people that pay me to do stuff are like, I need to classify this.
Santiago Rentiera: Perhaps how I am most useful as a computer scientist, classifying and ordering things. But yeah, I think I find in this art accidents of the index something worth exploring. How could I, you know, take this onto the artistic expression or more like musical expression? I've been thinking about what other people have been doing with same technologies. I don't think technically, I'm doing anything new. This has been done by Holly Herndon, I think, with the Spawn, which is a timbral transfer.
Santiago Rentiera: She did this with her own voice and I think actually did a weird kind of decentralized organization blockchain thing in order to license the reproduction of her own voice in this kind of Timbrel transfer way. But I don't think I want to do something like that. I think that's what I'm.
Santiago Rentiera: My contribution here is thinking about this in archival terms and how the notion of these big collections that are going to just get bigger and bigger and bigger, how to approach them in interesting ways that are not necessarily like oh, the computer knows something, or like this big Oracle, big tech Oracle thing, or like the universal index, like kind of the Aleph on Boris or from the library of Babel, that you have these indices of things that are no longer human readable.
Santiago Rentiera: So I think the way of challenging the anxiety of things no longer being readable or humanly indexable is at this point where the art can enter and then the accidental retrieval can be used as a way of creating new situations, the data kind of embodying the data, drawing the attention of people towards some species. That is, perhaps pour it down this big archive and nobody knows about that recording.
Santiago Rentiera: And I didn't mention extinction, but I think it's also relevant to think about what's going to be extinct in the next years and these sounds that we won't be able to experience without mediation. So, like there's a very interesting case of the Huya: the only bird recording that we have is an imitation of a Maori elder. So we have this second order mimicry of a machine that mimicked the human, that mimicked the bird and maybe the bird was mimicking other sounds and that's how the bird learned the song.
Santiago Rentiera: So yeah, the notion of extinction and second order extinction, like when the materials of the extinct bird go extinct or forgotten. I think that's also another point that I would like to explore poetically, like disappearance and retrieval.
James Parker: That reminds me of another paper we were reading recently for our own project on bioacoustics meets machine listening, and maybe I could use this example to segue towards a broader conversation about this field. Because, you know, you were saying before that the archives are going to keep expanding, and I've been genuinely shocked by the, I can't think of another word than imperialist, impulse of certain bioacousticians, or their interface with machine learning researchers, because some of the papers I've been reading suggest the most encompassing kind of listening, you know, data collection on the scale of the NSA's bulk collection.
James Parker: You know, almost that's what I was like Snowden: we want to listen to everything all of the time in the entire biosphere in order to produce a kind of constant and perfect archive of all ecological sounds. That's sort of what it sort of reads like sometimes. So that's sort of where I'm heading. But in the context of that I was reading a paper about what was called acoustic enrichment. So this paper was about reef health and the idea was that reefs are extremely unhealthy.
James Parker: You know, the Great Barrier Reef, this was Australian research- but that if you, they were testing whether, if you could play health, the sounds of healthy reefs into reefs, that would improve the reef health and it- and it makes me. It reminds me of things I've heard occasionally in like conversations with architects or
James Parker: or, and so on, when they say, well, there's no urban birds anymore, but we could sort of just play the sounds of urban birds and everyone would sort of feel happier. And they mean it very sincerely.
James Parker: And so it just touches on this sort of point of extinction that you were making before, whereby like in the name of ecology and perfecting, you know, healthy ecological systems, we have this kind of weird simulacrum of healthy soundscapes being constantly reproduced, you know, with extinct species and so on in order to supposedly improve ecology and reduce extinction. And then all of that depends on, you know, this archival impulse or data collection. So it's just a bit sort of mind bending.
James Parker: Yeah, so I was just wondering if you had any comments on that, because it just seemed like the, what I've read in terms of acoustic enrichment sits very closely to what you were saying about, I don't know, the poetics of extinction or something.
Santiago Rentiera: Yeah, yeah, totally. I think I probably had come across a paper before in the context, I think, of assisted evolution, which is this idea that we can design different infrastructures that support beings that need to evolve but can't because of our own destruction and impact. So this one, yeah, I read that it was like a soundscape research thing going on there. And also I find very problematic the way that the soundscape health is assessed using these very abstract indices, right?
Santiago Rentiera: Like how do you even determine what is healthy with this very broad and incomplete picture, which will never be complete, right? Of what the animals are doing at the sonic level, right? So is it a matter of it being spectrally diverse or what? Right, so I think regarding the extinction question, yeah, about synthesizing or the simulacrum of the dead, right, which is like, I think I didn't mention it, but I have this concept of sonic reanimation, which is very close to what they are doing with reefs.
Santiago Rentiera: And I probably maybe should change the name so people don't think I'm actually trying to do that by reanimating extinct sounds and then just kind of greenwashing destruction. But yeah, I think synthesis of what has passed away is a topic and the copy also, how we use the copy as a way to alleviate decay. And maybe we also should think about how archives decay, right, like how they decay as not human memories, but in their own way, how we are no longer able to retrieve something because it's lost.
Santiago Rentiera: Like this is kind of the anxiety of the Library of Babel and also the Book of Sand, which is another story by Borges. Like this book cannot, you cannot return to the same page anymore. So, and then you never know what was the original one because the original and the copy maybe just differ in one character and things like that.
Santiago Rentiera: So, thinking about extinction and the visibility of extinction in big data, I think that that's a critical question for the concept you just mentioned, the planetary listening, how we'll become perhaps more aware or less aware of extinction with the increasing recording, which they're recording in the end is not kind of this God's eye view because the placement of the microphones and the decisions that people have to make in order to kind of put them in certain places that kind of obscures the universality, which in the end is not universal.
Santiago Rentiera: It's a very particular imperial view of someone that decided to put the microphone set where it's not the full, it's never the full picture. So, I think questioning that incompleteness of the archive and the things that are extinct or the awareness of extinction as a way of kind of extinction listening in the archive, I think that that's relevant from an ecological and poetic point of view.
James Parker: Does it feel like, you know, this is a growth area? Does it feel like people are investing money or asking you to be involved in, you know, bigger projects? You know, the reason I'm asking this is because a number of the papers I've read are directly, for example, in response to the UN Sustainable Development Goals. And they have this sort of tone of, like, salvationism, like, and an urgency, which is, of course, appropriate to the climate emergency. But there's a kind of, like, we've got to do this now.
James Parker: And there's a lot of, it does seem, you know, you cite in your paper, this piece by, this book by Karen Backer. She's just, like, ecstatically excited about, you know, the amazing and diverse and diffusion of planetary machine listening, basically. Like, all of the, every imaginable ecosystem is now being monitored and modulated and interacted with. And she's so excited. And is that what it feels like to be working in the field now?
James Parker: You know, that, like, there's energy being poured in, there's money or, you know, is, for example, is there industrialization on the cards? Because one of the things I'm interested in is, like, could a system, there's a lot of talk of, like, the green internet of things, right? But is there a version of this planetized machine listening that isn't totally and utterly dependent on Amazon Web Services or centralized data power, basically, you know? So I'm wondering, like, what the kind of, yeah, feel is like within the field at the moment.
James Parker: Is it, does it feel like how it reads?
Santiago Rentiera: Well, I think that's one of the reasons I've been trying to steer away from AI as a concept. And I was very annoyed when I saw this letter, and brilliant people signing it, like, you know, Yoshua Bengio, this letter that was written with hyperbolic language, trying to obscure other catastrophes that are perhaps more evident than the superintelligent machine doom fantasy.
Santiago Rentiera: And I think the way that this is being used to control technology and manipulate opinion, yeah, it's so problematic, but I think listening is not in kind of under the limelight as the notion of intelligent and language is because perhaps language is more relatable to us humans.
Santiago Rentiera: So even if we can synthesize every possible sound or, you know, map and recognize all the event classes that are necessary for a perfect universal surveillance state even that wouldn't create the same kind of doom associations as a intelligent machine, because, you know, listening by intelligence kind of experts or the artificial intelligence people is just treated as another modality, right? So it's just more data.
Santiago Rentiera: So I think they abandoned the drive of the computational auditory scene recognition that was like from the 90s trying to model how the cochlear worked. And they totally abandoned that with the deep learning because it doesn't work like a brain at all. It's kind of this massive approximation, hyperparametric devices. So they no longer use that rhetoric.
Santiago Rentiera: They kind of call it sound event detection and sound or scene recognition which obviously have their own taxonomical issues like what I think you mentioned, what is a scene, how a scene is even defined or how do we kind of determine what are the relevant categories and there is no universal index. I think research that probably should be done there to counter the excitement and the wonder of backer could be like linking this to the way that the military industry complex is also funding the projects.
Santiago Rentiera: Like I remember reading from Douglas Kahn beautifully written book on how this infrastructure or international monitoring system that was used during the 60s to guarantee the compliance of the nuclear test ban treaty is now used by well researchers to observe and also to explore other effects of human action on the deep sea. So the deep sea is already being monitored. All these whale research is also an entry point into deeper areas of the sea and resource exploitation and mapping the planet.
Santiago Rentiera: Things that are not only purely scientific nor conservation efforts, but are kind of strategic and have a purpose in the battle space.
Santiago Rentiera: So I have to say that I'm actually preparing a chapter on this notion of planetary listening. And it will include some of these links between conservation and how the mapping of the planet and geography are linked to strategy. And it's not only about the animals, right? The animals are kind of the flagship, you know; the argument is that we are trying to save the animals. But I think there's more to say about that besides the conservation.
James Parker: I'd love to read it because I'm also writing something right now on the same topic. I don't know. We've had a lot of your time already, Santiago. Is there anything else that, I don't know, you wanted to talk about? Or I don't know, questions for us or Sean or Joel, did you want to jump in at all on anything? It doesn't seem like it. It's great work, Santiago. Maybe we should wrap it up and say thank you and segue into a more informal conversation about how we might continue to work together or talk at least.
Joel Stern: Yeah, I'd like that. Thank you so much, Santiago. It's been great to do this interview and to learn about your work in more depth and to, you know, understand more about sort of what is motivating the research and the artistic practice. And I think we should have a conversation about ways to collaborate, you know, sort of creatively and in other ways too, and around how to sort of engage with these collections of sounds and sort of do more experimental work with them.
Joel Stern: Because especially as we've said over the next few months, leading up to an exhibition in August, we will be developing what will probably be a multi-channel sound installation with the working title of Planetary Audition. And it will be touching on many of the themes that sort of have come up in the conversation between us amongst other things, but we're sort of still trying to develop, you know, the material. So, yeah, I'm not sure what the best way to kind of foster a collaboration is, but what do you guys think, James, Sean?
James Parker: Sean is green, so.
Joel Stern: Go on, Sean.
James Parker: I'm green. Your little box was green, so. And he's green, yeah. Sorry, I can't pick up from what you asked specifically, Joel.
Sean Dockray: No, I was thinking, I mean, there were just two things. Sorry, James. There were a couple of things that stood out to me in the conversation, things that you identified too, Santiago, as the sort of artistically rich, you know, or generative things. And, you know, one was the accidents of retrieval and the accidents of indexing. And one of the things I was wondering during the interview itself was a little bit how those accidents actually come through or come up. I just couldn't find a way in, and I didn't want to interject. But a lot of that, like the instrument, the way that you kind of play with the archive, to me it's like a form of play rather than query, right? Which I love.
Sean Dockray: But at the same time, like the in the SoundCloud, it's, you know, like, the verbs that you were using in the conversation with James were like, transfer or translation, not necessarily that this is what it was doing. But those are the verbs that come to mind. But the one that came to my mind is like mirror, because you kind of like making a sound, and then it mirrors something back to you, right? And it's a little imprecise, but it's still this kind of relationship.
Sean Dockray: And so I was wondering, like, where the accidents, you know, like, if there's other verbs that make different kinds of accidents, and I feel like the beatbox is a bit of a funny thing. But at the same time, that's where it sort of like begins to become something else, too. So I was really interested in this question of the accident, not because of my own investment in it, but just because of like, the potential to kind of loosen up the kind of relationship of mirroring or transferring or translating to something else.
Sean Dockray: So yeah, just to riff on what Joel was saying about the accident a little bit. I wonder how, yeah, if we can like figure out ways, like strategies for that. I also love the reference to the Solomon's Seal, like not something I'd thought of, but like, actually thinking, yeah, like some of the sort of historical theoretical kind of things that came up in the conversation, like would be quite, like, rich to, I think, think through in the work as it goes forward, too.
James Parker: Is that something you've written about already, the Solomon's Seal stuff, Santiago?
Santiago Rentiera: Actually, it's what I think was a failed paper, because originally the paper, the one that you received, the magpie puppetry one, was going to be about the Solomon's Seal. And I actually read a Konrad Lorenz book that has a reference to Solomon's Seal. So I wanted to go deeper into this mythology of animal communication. But then I realized that the word count wasn't going to be enough. And I didn't want to burn it by doing this very short paper thing.
Santiago Rentiera: So I ended up doing something more in practice, because it was also kind of, we were prompted to write something about practice, so not so much about history and theory. So I decided to change it, but in the end, it also didn't get accepted. So I still have that archived. There is perhaps a chapter that I will write at some point.
James Parker: I mean, so I don't know if you're noticing what we're doing. I'm just going to say it. I think we're, yeah, we are inviting you to collaborate in some way, potentially, and the degree of collaboration is open on this potential project.
Joel Stern: You know, and I was just thinking about the Solomon's Seal stuff as well.
James Parker: It was just going to be that, like, for example, the moment you said Solomon's Seal, I was like, this is incredible for the artwork. And we're not just going to, like, borrow or steal your ideas, do a whole bit on Solomon's Seal and just be like, cool, glad we spoke with you. But instantly, I'm just thinking the Solomon's Seal thing situates what's presented always as novel and digital and so on within an imaginary of mastery that's not unrelated to the imperialism point we were making before.
James Parker: And I really want, it's really that kind of, quote unquote, imaginary that, like, I'm most interested in the planetary, like, it's just wild. I just can't get over that. Like some of the things that they imagine doing, whether or not they're, like, going to happen, they're wild, they're mad.
Joel Stern: But also, just in terms of, like, the artistic format we established with Afterwards, which was really, you know, a provisional and first attempt at working together in this way. And the works that we produce in the future may reproduce some of that form, but also change.
Joel Stern: But there was a sort of episodic quality to the work and a narrative quality where different sort of, where storytelling is sort of quite important and in which a kind of speculative imaginaries are quite important to those stories, you know, and the blending of fact and fiction and historical material and sort of fabulation. So, I think, you know, there might be a way, even though it's been difficult to get it up as a scholarly text, there might be a way to rethink how to work with that in a more artistic context that makes a lot of sense.
James Parker: The other thing I was wondering, I don't know if this is the right time to say it, but have you made a, can you do...
James Parker: Does your thing work in real time? The magpie mimicry?
Santiago Rentiera: Yeah, actually I'm trying to run an installation in November for... I don't know if you heard about the...
Joel Stern: The Cultures of Automation.
Santiago Rentiera: Yeah, the Cultures of Automation. I'm going to present something there, and I'm trying to embed this algorithm in a real-time device called Bela. Well, it hasn't worked yet, because the library... there are some bits and pieces that have to be compiled for the specific architecture. But this model that I'm using, which is called RAVE, a realtime audio variational autoencoder, works in real time. It can be used in an interactive fashion, and that could open a lot of possibilities of not only using the voice but using other input signals that are non-audible.
Joel Stern: I've noticed that the real-time tone transfer, as a hardware device, that there's quite a few people trying to...
James Parker: Because one of the things about the RMIT commission context is that it's a very, very different space to the one that Afterwords was in. So we've been thinking a lot about the installation element and what visuals or artifacts or things might be present in the space. And it just strikes me that one thing would be potentially to have something like your vocals. I mean, I'm not trying to say that definitely would work. But I could imagine something that was sort of interactive in that way being present. It sort of depends a lot on what happens.
James Parker: But I don't know if that's something you'd be open to developing.
Santiago Rentiera: Yeah, I want to reflect on the concept and also build something that can actually be experienced by people not only in academia. So yeah, that fits perfectly.
Sean Dockray: It could be interactive not with the viewing public, but with something else too. There's interactivity, and there are kind of real-time systems. I'm just kind of interjecting from the point of... making interactive artworks and seeing a few. Sometimes public interaction with an artwork is less interesting than other relationships that you can set up. So I'm not saying not to do it, but I'm saying it could be something that, over time, we develop more conceptually.
Joel Stern: There'll also be numerous public programs associated with this exhibition. So there's the possibility of having an installation artwork that has an exhibition form and then performances that activate that artwork in certain ways through interaction or, you know, some form of encounter. Awesome.
James Parker: All right.
Joel Stern: Let's leave it at that.