Audrey Amsellem transcript

James Parker - Thanks so much for joining us, Audrey.

Audrey Amsellem - Thank you so much for having me.

James Parker - Would you like to introduce yourself however you see fit?

Audrey Amsellem - Sure. So I'm a lecturer at Columbia University, where I teach in the music department. And I'm an ethnomusicologist working on sound and property. Okay, that's the headlines.

James Parker – And how is it that you go from, you know, being a musicologist to working on sound, property, surveillance? I mean, I'd love to know a little bit more about your backstory and how you arrive at the problems and concerns that you have. Is that a story you feel like telling?

Audrey Amsellem - Sure, absolutely. Yeah, it was definitely a progressive route. I've been interested, probably since I was a teenager, in this concept of the possibility and, at the same time, the impossibility of owning sound. This idea that there's this fundamental tension with sound: something that exists in the air and that is somehow subjected to property laws that are actually quite rigid. And, you know, music to me, particularly as a teenager, and I think this is the case for most people, right, feels like a space of freedom. And so any kind of constraint that is put on that was sort of an affront to my freedom, right? It became very political to me. We were chatting a little bit about this earlier, but I'm not a musician myself. I never really wanted to be a musician myself, but music was so important to me since forever that I always felt a sort of deep sense of injustice, maybe, towards the ways that various musicians are treated. And because I felt indebted to musicians, right? I felt grateful for growing up at a time also where music was so widely available.

So I was a teenager when piracy was all the rage. And so this was a very great moment to be a teenager, because all of a sudden we had access to this enormous amount of music, and it felt like it was endless, and it felt so democratic in that particular moment of my life. And so you have these various things, I suppose, converging. And that was really the starting point of my reflection around music and power. So the power that music had over me, but also how power is exerted through various politics of access to music and circulation of music. And so then I was thinking, how do I make this my life? Which at first was more, maybe I'll go into the music business and, you know, try to help out musicians, or maybe I'll become a copyright lawyer. And I moved to the US, so I'm from France originally, and I moved to the US and went to community college. And so I studied music at community college. But I didn't really know what I was going to do with that, and I wasn't even in a four-year college yet. And it was actually meeting, well, I didn't meet him, but it was going to a talk by George Lewis, who's an amazing composer who teaches at Columbia, where I am now. He really introduced me to musicology. I didn't know what it was. And I thought he was just so incredibly smart and inspiring. I thought, you could do this kind of music philosophy. And I just saw, you know, how interesting is it to think about music this deeply and this intensely. And so that's how academia came to be, you know, little by little, I would say. And, you know, first I started working on piracy when I was more of an undergrad, and sort of various politics of circulation of music. And then that translated into thinking more strictly about copyright law, and then thinking about various ways in which sound is constrained by laws, and then surveillance came sort of progressively through that route.

James Parker - That makes a lot of sense and listening to you speak I'm suddenly thinking I wonder if you/we were the first generation for whom the mode of circulation of music was sort of really really highly politicized and present and continuous with our encounter with music per se. I just don't remember my dad or like even myself as a child, talking about records or tapes or CDs as a political object, as part of their very encounter with music. But the way you’re talking makes me think that, from the beginning you learned to love music through the medium of a conversation about piracy, freedom. Of course, those issues are always there. Of course, the CD is a politicized object, an object of enclosure, and you can go back much further… But I don’t feel like this was as much a part of the public conversation. I mean, I’m making this up, but the way you described it, that really jumped out at me. How could you not think of music and property together, growing up when you did, and encountering music through file sharing services. Of course, there’s a whole generation for whom that’s true. I’m riffing…

Joel Stern - But even in the 80s, sort of in the period before the internet, I remember, you know, the messaging that 'home taping is killing music', that it's destroying the music industry. And there was all the anxiety around people recording things off the radio and circulating it between themselves through mix tapes, even though that was just a sort of grassroots, anti-commercial kind of culture. Yeah, growing up in that sort of era of Napster and, you know, Soulseek and platforms like that really was a sort of inadvertent political education.

Audrey Amsellem - Yeah, but I do think you could probably retrace various important moments in history in which music and, as you said, the way that music is distributed actually impacts its sort of political appeal in some sense. I'm thinking also of the 50s, with the radio and rock, and this idea that, because you have the growing middle class, for the first time teenagers are able to buy their own music. And that is sort of a move: rock becomes anti-conformist, rejecting the traditional values of the parents through buying Elvis, like white middle class Americans buying Elvis records as actually a way to form their own identity that is very radically opposed to that of their parents. I think there are these moments in history that you could probably pinpoint, and that are deeply tied with technology, obviously.

James Parker - Yeah. I'm now thinking my instinct there was a bit naive. I had naturalized radio; the whole relationship between pirate radio, or just youth radio per se, and rock music, obviously, was absolutely vital. And we're only talking about the West here, and you can think about the role of radio in so many other cultures. But it is still interesting in any case that for you music and property relations sort of come together. You know, and obviously you can see how that bears out in your thesis.

So yes, we've read your brilliant thesis at Columbia. Was it in the musicology department, ethnomusicology?

Audrey Amsellem - Yes, ethnomusicology. So it's a music department, but within the music department, ethnomusicology is a subfield.

James Parker - Right. And it's sort of all about, on the one hand, the politics of ownership, the history of recording and so on, but very much kind of situated in the present around, I guess, I mean, how would you summarize it? It's your thesis. I was going to say the relationship between sound and surveillance, or something like that, without getting too much into the details, which we'll obviously do.

Audrey Amsellem - Yes, so the dissertation is a series of case studies. It's three case studies. The first one is Spotify, then it's Alexa, so voice assistants in general. And then the third one is an object called LinkNYC, which is the sort of communication hub that is part of the Smart City Initiative in New York. And in picking these three objects, the idea was to try to show the relationship between various forms of sonic property that are historically informed and legislative, and how these various forms of sonic property have actually informed surveillance capture, and how these three devices are actually sort of products, right, of surveillance capitalism in the way that they are listening to people and collecting massive amounts of information about people. And I was trying to show in this project the relationship between particular, specific notions of Western property that have been enacted through the history of the West and surveillance capitalism, and show how there is this relationship of continuity, and how sound and music are a relevant, let's say, way of looking at that history. It's not the only way. But I think it's a particularly poignant one; it reveals particular things about the way the West actually conceives of property in a more general way.

James Parker - And so you end up diagnosing the present, or, I don't know if that's quite the right word, in terms of the 'neoliberal ear'. So you say that these three different case studies are all kind of exemplary or productive of this thing you call the neoliberal ear. So maybe as a starting point, just to kind of inform our conversation about the case studies as we go on, could you say a little bit about the neoliberal ear? It's very suggestive. In some ways, the topics that you're describing are quite close to what we've been investigating with machine listening, but obviously the neoliberal bit just has a different inflection. So I'd just love to know what you're thinking of when you talk about the neoliberal ear and the consequences it has for your analysis, or maybe the opposite, like how you arrive at the neoliberal ear through your analysis.

Audrey Amsellem - Sure. Yeah. So, the neoliberal ear is this concept that I developed, which is defined as the set of listening practices within surveillance capitalism. So the various ways in which surveillance capitalism listens, the various ways in which it can be a listening entity. And that specific way of listening to the world, it doesn't arise out of nowhere, right? It actually predates neoliberalism. So it's called the neoliberal ear, but this modality of listening is a consequence of colonial ideologies of collection and dispossession and extraction. So it's the way that the West has been listening through this tripartite process, rooted in colonialism and in capitalism, and rooted in the idea that Western powers can sort of come into a place, when we're talking about colonialism, a place that isn't theirs, and sort of observe it and seize and dispossess, right? And this is where the framework of Dylan Robinson, who's an ethnomusicologist, comes in. It's a framework that he develops in his book called "Hungry Listening." And that framework is crucial because he deconstructs the constant state of starvation of the colonizer, and I use his framework to compare that to the constant state of starvation of tech companies for data. So that's more the historical part of it, but then you have neoliberalism that emerges, and this idea of the free market, of profit-seeking over public good, like valuing profit over public good, individualism, and, this is more actually something Robin James talks about, the emphasis on the quantifiable and statistical and the sort of creation of a normalized subject.

So you have the ideology of neoliberalism, but also its mode of governance, coupled with the recording and tracking capacities of modern technology, and together that forms a specific way of listening. And so we see emerging not just the ability, or I guess the perceived ability, to listen for and to collect subjectivity, but actually we see the deep-rooted belief that this collection of subjectivity has inherent value, and that it can be marketed and quantified and sold for profit. And so this particular ideology is actually guided by politics of extraction that are characteristic of markets and late capitalism, that are characteristic of neoliberalism. And if there were regulations, or stronger regulations, there wouldn't be private companies that reach this nation-state power through controlling the means of communication. Right. So I thought it was interesting also what you said about, you know, in a way, what is the difference between the neoliberal ear and machine listening? And I don't know that it's a different thing, really. I think it's more maybe about where the focus is. So when we say machine listening, it's implying technology. And the term 'neoliberal', I guess, is not necessarily implying technology, it's more…

James Parker – Yes, I think it is a matter of starting point or something like that. When I think about machine listening, I mostly begin with machine listening (this is in my head, I don't know if I speak for you, Joel), because I think that machine listening is a term that sort of echoes machine learning and is kind of part of a vernacular. I feel like people get it. So it's partly just, you know, an easy term to convey what you're talking about. It doesn't sound too technical. But as a matter of fact, it's also literally the term that computer scientists and computer musicians use to describe the field. And the thing is that the immediate move that you make next, or need to make, in my thinking, is to say, well, when we say machine, we don't just mean that there's some kind of technical thing; you immediately need to situate it in its cultural, political, economic, social context, so that the technical, the machinic, is always embedded in and mutually constitutive of the social. So machine listening becomes pretty close in some ways to what you're describing, once you hash it out on the page or in conversation a little bit. But the orientation is maybe a little bit different. I have to do the work to explain, you know, well, actually, this is something that emerges under but also helps to produce surveillance capitalism, or one of its synonyms, and is embedded in neoliberalism. But you're sort of beginning at the other end, and that's fine and interesting, I think.

Joel Stern - We get into tricky territory sometimes, sort of having to distinguish between human and non-human listening, and the implication that listening by machines is automatically sort of extractive, whereas listening by humans might be automatically reparative, which are, you know, sort of tropes within sound studies.

James Parker - It's not just that, I mean, machine is a very flattening word itself, even amongst machinic techniques. I mean, you know, machine learning is basically a hype term in industry. And so the moment you actually dig down into the practices and the history of machine listening, it's like, well, what kind of techniques are we actually talking about? Are we talking about hidden Markov models? Are we talking about statistically led AI practices or rule-based ones? Or why are we even focusing on AI at all? Is recording part of the history, recording technology as such? So, I don't know, you go down all of these rabbit holes.

Joel Stern - Both of our projects owe a debt to George Lewis. That's also something we have in common.

James Parker - Yeah, that's right. That's right. Okay, so with that kind of framework in mind, I guess it's a study of, yeah, I mean, the proliferating technologies for listening in the present, and their embeddedness in, you know, some of the most extractive and exploitative political and economic systems of the time. So, I mean, that sounds like a good segue into Spotify. Would you like to say a little bit about, you know, Spotify, how it sort of exemplifies what you're calling the neoliberal ear? I mean, however you want to begin, just in general, or with an anecdote, or whatever you like.

Audrey Amsellem - Yeah, you know, I think Spotify is a great example of the neoliberal ear in my mind, because it actually allows us to see the history of it. So some of it I was talking about a bit earlier, but this idea that if you sort of trace the history of it, you could see the way that music has been subjected to various forms of control in the West, right? And so this is where someone like Jacques Attali or Katherine Bergeron and those people are important, because these are historians who are looking at the various ways in which, you know, the Holy Roman Empire in the ninth century, or the Venetian state in the 15th century, were strategically controlling the kind of music that gets circulated and who circulates it. And this keeps happening, right? And so then you have copyright law in the 18th century, and that will serve its own kind of form of control. And I think piracy was a form of rebellion against that, right? That's how I interpret it; obviously various people interpret it differently. But the fact that Spotify is born out of piracy, that is pretty clear, there's the same people involved. It's not a coincidence, right? So what I try to…

James Parker - I didn't know that Daniel Ek literally ran a torrent company before he sort of, you know, went clean and started, you know…

Joel Stern - He didn't go clean. He went much dirtier.

Audrey Amsellem - Yeah, I know. It actually kind of makes sense, right? When you start to see that connection, it actually makes total sense, because there's a lot that Spotify is borrowing from piracy, and actually a lot it's borrowing from piracy surveillance, which was the response, the sort of, yeah, the response against piracy. So I guess I tried to highlight, and I think I mention this at some point in the dissertation, that the history of music is sort of bursts of creativity and bursts of controlling response to that creativity, right? So piracy I consider to be a burst of creativity, and piracy surveillance, which is a concept that was theorized by the legal scholar Sonia Katyal, would be a burst of control. The basic idea of piracy surveillance is that because you have piracy, and that's illegal and that's wrong according to the RIAA and according to some people, then it normalizes and gives a form of legitimacy to actually tracking people's behavior online, and doing it in a way that's unprecedented, and doing it with somehow a legal structure that supports it, even though some people would argue, some, I think, pretty serious legal scholars might argue, that it is absolutely going against the Fourth Amendment, right?

James Parker - Can I just interrupt, because I find this idea of piracy surveillance really interesting. I hadn't come across it before, and I just wondered if we could slow down and hash it out a little bit. It seems like you're making the argument in the dissertation that piracy surveillance sort of sets the groundwork, basically. I mean, this is well before we've got, you know, Google; in the Shoshana Zuboff story, we're sort of pre surveillance capitalism proper, right? But, you know, in the early days of the publicly available internet there's an enormous explosion in music piracy, and these companies, the big record labels, start to surveil downloads, basically, and so that's piracy surveillance, though perhaps you can give a more nuanced definition. But then that technique of surveilling music consumption habits, originally in the form of piracy, becomes normalized, I think is the argument, and then becomes foundational not just to the music industry that goes on to be, you know, Spotify, but this technique sort of goes wild. It's not that, you know, cookies and other things aren't part of a similar story. But, you know, I think in the strongest version of the argument, you say that it's somehow foundational to surveillance capitalism. I haven't heard that argument before. I thought it was really interesting. So I just wondered if you could hash it out a little bit.

Audrey Amsellem - Yeah, and, you know, I do believe it's foundational to surveillance capitalism. I do believe it's this particular moment. But, you know, again, you could go back in history and trace similar moments. Just focusing on piracy surveillance for a minute: yeah, absolutely, I think piracy was used to justify something that, without piracy, I don't think would have been justifiable. It might have had to do also with the fact that most people were not too tech savvy at the time, and didn't really know, maybe, that they were being surveilled. But there was a whole discourse that you may remember, saying, you know, how wrong piracy is, and how it's ruining the life of musicians, and how musicians can't make a living anymore. Not just musicians, it was also about movies and various other forms of media. And I think, pre-Cambridge Analytica, I would often hear people, when we would talk to them about surveillance and privacy in general, say, you know, oh, I don't really mind surveillance, I don't really mind that the government knows what I'm doing on Facebook, because I'm not doing anything wrong. And that whole way of thinking about surveillance, which I think is actually quite harmful, was pervasive for a long time. I think it's changing now, but it was pervasive for a long time. It was this idea that it was somehow justified for the government to enter into your private computer and your private network and see what you were doing on your computer. And part of that has to do, I think, with the way they legitimized that discourse. And part of it had to do with the fact that people weren't really, maybe, thinking about the implications of that.
But to me, the implications lead very directly to surveillance capitalism, because, and this is often how it works with technology, and maybe with the law too, once something has happened once, then it becomes okay, right. And so because this was already a system that was in place, it became just acceptable, and accepted, for government entities, and then private corporations as well, to get your information in that way. That just became the norm.

James Parker - So there's a whole series of, like, major lawsuits. At first all of these letters were sent to individuals; there was that phenomenon of the record industry just going after, you know, some kid. And then a lot of that energy got directed, I mean, I don't know how simultaneous they were, I don't recall, at the big platforms like Napster and LimeWire and so on. And then sort of subsequently, after Napster shut down and things, it's pretty quick that we get the data tracking industry kind of pairing up with the emergent streaming platforms. I mean, they sort of emerge simultaneously, it feels like, in your story. The Echo Nest, which is a company we have a little bit of familiarity with, sort of does a lot of the work for you, because Spotify, you know, sets up this streaming platform that promises you get all of the access, but none of the illegality, with lots and lots and lots of venture capital, more money, money, money, money. And at the same time, or very, very quickly, they start to become effectively a data analytics company more than a music company. And the Echo Nest, I forget the timeline, it's not my thesis, but the Echo Nest is a company they buy up in order to help drive that turn. Is that right?

Audrey Amsellem - So, yeah, the Echo Nest is a music analysis tool, right? So what it's going to do is analyze the song for its musical events, but also for its cultural cachet. It also goes and looks at the way that people talk about the songs in a blog or something like that. So that technology is used to make playlists: various recommended, targeted playlists, as well as more general playlists. Some of the playlists on Spotify are human-made, and some are made through the algorithms. And so it's very much, you know, part of this quantifiable, statistical part of neoliberalism, obviously. That in itself has been theorized; the Echo Nest has been theorized by several, many musicologists actually. Eric Drott has a fantastic article about it. Robert Prey is great; I think his dissertation actually was specifically on this. Nick Seaver also talks about algorithmic recommendation as traps, as entrapments, which I think is a very interesting way of thinking about these as well.

So the idea of being able to take a song and sort of decompose it in some sense, analyze its various events and then attribute a sort of quantifiable element, a quantifiable sort of ranking, to it, right, in order to then associate it with a person, to recommend it to a person, is a particular form of musical analysis. It's this idea that you can sort of infer the preferences of somebody based on what they've listened to in the past. But that information then gets sold to advertisers. So this is where the surveillance bit of the Echo Nest comes in: that information gets sold to various kinds of advertisers, who are then using it to infer things about us. And so there were some journalists who were actually looking into this and talking about how musical taste has been used to infer political allegiance. So apparently if you listen to Pink Floyd, you're more likely to be Republican, or something like that, right? It's this idea that somehow you could infer people's politics based on what they're listening to. So that seems, you know, on the first degree, kind of not very harmful.

But then when you look at this post-Cambridge Analytica, and you look at the Cambridge Analytica relationship to this, it does seem pretty concerning, and also, I mean, idiotic as well. It's obviously nonsense. But it's this idea that somehow music has this particular power, that somehow our musical taste and our musical listening practices can tell us things about ourselves that we don't even know, right, that it's somehow a portal into the unconscious. And it's a very seductive idea. It may not be true, but it's very seductive, and so that's very easy to sell to an advertiser.

James Parker – I mean, it's a staple of youth culture, like, since… I mean, it's about which band T-shirt you're wearing. I mean, music as a tool of, like, self-identification; it's not just foisted on us by, you know, surveillance capitalists. It's a sort of détournement or capture of something that people sort of feel quite strongly.

Joel Stern - Yeah, I mean, even going right back to the sort of teenagers buying records in the fifties and it's both an assertion of an authentic youth culture, but also the construction of a new consumer sort of demographic of the teenager with certain consuming habits and sort of providing that data. But just hearing you sort of talk about this, you know, mobilisation of Spotify, which the majority of people in the world would think of as a music delivery platform, but which in fact has its own sort of listening and practices of extraction… But then the listening preferences of people on Spotify are so shaped by the platform itself, you know, increasingly so, that the idea that it kind of provides access to some sort of unconscious sort of internal kind of information about that person, it's just, you know, that that sort of listening subject is already fully entrapped in the kind of limited range of preferences available to the platform and its recommendation algorithm.

James Parker - So it's like a paradox that they can't, they have to leave it unresolved because they want to sell to the listener perfect recommendation according to their authentic tastes. And meanwhile, they're selling to the advertising industry, the ability to sculpt and entirely determine those tastes. And they're both essential to the marketing of the product.

Can I just, you probably have some responses to that, but one of the things that I find interesting about the Echo Nest is that they do audio analysis. So, the Echo Nest partly comes out of music information retrieval, and it's one of the first sort of companies to really commercialize it… I mean, music information retrieval in general, like, explodes right after the Napster takedown; music copyright protection becomes, you know, we see audio fingerprinting explode as a subfield of music information retrieval at exactly the same time. So music information retrieval is one of the few examples where there does seem to be a sort of true, I'm using air quotes, 'listening' going on. So there's an attempt to extract something from the audio, quote unquote, itself. And then that sort of gets put alongside, or together with, you know, analysis of tags and all of these things that are, in fact, metadata. Yeah, various sorts of user-produced metadata.

So I've always been intrigued by the fact that, like, I've found it very difficult to disentangle how powerful the quote-unquote music analysis, that sort of audio analysis, actually is for companies like this. I mean, I don't know if you know the answer, but I find it interesting because, and it seems like you don't have this hang-up, I kind of weirdly go searching for examples, and it's partly this framework of machine listening. I'm like, well, that's an example of 'listening', I want to sort of follow that example. And for you, it's sort of, you know, what's the difference really? They're both extractive processes.

But I'm fascinated by the rhetorical appeal of being able to analyze the music itself, quote unquote. And the fact that we actually have no idea. It seems like often what's driving algorithmic sort of playlist production is simply what other people have listened to. It's not even, like, tagging of data. It's like, well, you know, another 30-something-year-old male listened to this kind of song, and so here's another song they listened to. It's sort of a bit dumber. A lot of the techniques actually used are dumber than they seem, or dumber than the sales pitch.

Audrey Amsellem - Yeah, it's interesting to think about when a machine is actually listening and when it is actually just transcribing, when it is just transcribing into text and then speaking. There always seems to be that medium of text that has to be in between. But the data from the Echo Nest can actually be used even beyond that, which is maybe particularly interesting for composers, I think, but maybe it's also scary, I'm not sure. It's how this data is sold to record labels. So data from the Echo Nest is sold to record labels, in which they will give their Echo Nest musical analysis of a hit song, let's say, to the record label. And then they can compile this information and say, well, in 2022, the top 10 songs all had a BPM of 120 and all had minor chords, and whatever it is that they come up with. And then that actually allows a record label, potentially, to say, well, now we can plug in that information to actually make music that will fit the taste. So it's sort of dystopic every way you look at it. It keeps feeding itself.

And now it's not just the curatorial platform that actually creates a musical taste, as Joel was just mentioning, saying that the musical habits of people are actually shaped by the way the platform is designed and how the user is sort of forced to interact with it because of the way the platform is designed. There's actually the music itself, right? Because the more you listen to a certain song, the more that becomes a hit song, and then, you know, this formula is supposedly going to sort of emerge. So yes, it's a never-ending process, I suppose.

Joel Stern - Audrey, one of the artists that we've been working with quite closely on Machine Listening is the composer Tom Smith, who's very interested in cultures of automation. One of the works he made for our program is called Top 10. What he's done is he's taken the top 10 tracks on Spotify from each country and created a single track that averages out all of the qualities of those top 10, so pitch, tempo, rhythm, frequencies, to create a kind of generic track for each country. And then he performs these DJ sets where he will play these generic pieces, and they're simultaneously, you know, quite revealing of what happens when you average out musical material. It becomes increasingly both generic and sort of unlistenable, but it also, I suppose, reveals the absurdity of the kind of logical endpoint of some of these practices.

I wonder, James, did you put the link in the chat? If not, I'll do that. But yeah, that's the work I was immediately thinking of. It's just a really lovely counterpoint to the discussion we've been having. So I'd encourage anybody listening to go and look it up, in the spirit of show, don't tell. We've been talking a lot about the techniques, and here's an artist who's working with exactly the same concerns, but in the medium of music. I'm sure there are many others.

James Parker - I wonder if this is a good point to segue on to smart speakers and voice assistants. Although I was also struck by another little anecdote in your thesis, about music for moms. You talk a little bit about the different ways in which we're sort of bracketed. You give the example of techies and moms: what are the categories that are used to segment us? It's not just men, or people who listen to Pink Floyd, but the idea that there's something called a techie and then, obviously, a mom. In the thesis, you give the example of this category of the mom who's constantly recommended Disney songs. Despite all of this rhetoric of being incredibly refined and smart, somehow Spotify and these other streaming companies can't distinguish between a parent's or a mother's own listening preferences and the ones that come from acceding, as I do, to the constant demands of their children: can I listen to, you know, I don't know, whatever, some Disney song. I guess both my point about the Echo Nest and this anecdote point towards the dumbness and the extent of the hype, and the way that hype and misdirection are really deeply bound up with a logic of neoliberalism. You know, that's something that's not at all brought out if you're thinking in terms of machine listening, but once you start to think in terms of the neoliberal ear, it becomes much easier to see the real extent of marketing bullshit as a fundamental feature of the listening practices and the systems of power and control that are going on.

Audrey Amsellem - Yeah, yeah, that was also sort of funny to me when I saw that. And it's not like I'm working with some raw data and was able to extract this; this is part of their marketing discourse, in which it's like, well, moms are more likely to listen to Disney songs. Like, are they? Are you sure? Not only have you come up with that, but then you also advertise it as a thing to sell to advertisers, as this very exceptionally smart and sophisticated data, which is obviously none of that, right?

James Parker - And you requested your own data from Spotify. What was that experience like?

Audrey Amsellem - It was very silly. So basically I requested my data when the GDPR came into effect, because Spotify is a Swedish company, right? So they are bound by the GDPR, and part of the GDPR is a duty of transparency. So I went to the contact form on the website and asked for my data. I'm probably talking to a bot, right? And they sent me back this tiny file with very basic information, basically the information I already knew they had because it's visible: my playlists, my billing information, what is already obvious that they have. So I asked the bot again and said, no, you need to send me the real stuff this time. And then I got a much larger file that was essentially incomprehensible unless you're a data scientist, basically. So to me, that's not just failing to actually abide by the GDPR, because it's not actual transparency, right? The idea of the GDPR was to empower people with knowledge about their digital lives and their habits. And this is not only not doing that, it's actually doing the opposite. It's precisely disempowerment, because when I looked at these hundreds of pages of numbers and letters, in what was to me random order, you feel profoundly disempowered by both the sheer amount of information and your complete lack of ability to understand it without a computer scientist or an engineering department to transcribe this data in a way that makes sense. Hopefully I'll be able to do that at some point. But on my own, I was not able to find that out.

James Parker - All right, so now let's segue into smart speakers then. And maybe as a starting point: how should we think about the relationship between streaming platforms like Spotify and voice assistants? I mean, I have my own slightly underdeveloped ideas about this. In the same way that, I think, you quote Lessig at one point saying that illegal downloading was the crack cocaine of the internet's growth, I sort of feel like without music streaming, there's no smart speaker market. I mean, sure, you can ask for the weather. But really what a smart speaker is, at least at first, maybe they'll become more deeply integrated into whatever, but at least at first, it's a slightly crappy kitchen radio for listening to some music, not in high fidelity or anything, just while you're doing the dishes or something like that. And if you can't stream music on it, I just don't see how you're going to sell any.

So they seem like really closely related maybe. But yeah, I mean, that's just one possible way of thinking them together. I mean, how do you understand the relationship between an entity like Spotify and its listening practices and something like Alexa? Yeah, maybe we can go from there to ways in which they diverge as well.

Audrey Amsellem - Yeah, no, I think that's exactly right, because it has become a pattern that these tech companies are basically using music as a way to entice people into buying various devices. And you can see even the history of the iPhone like this. Before the iPhone, there was the iPod, and the iPod was so revolutionary: this idea that you could just carry in your pocket 10,000 songs that, you know, you probably pirated. And then you have the iPhone. I actually remember when the iPhone came out. I was in high school, so it was maybe, I don't know, 2008 or something, at least when it came out in France. And I remember seeing the commercial for it on television and thinking, it's an iPod that's a phone, that's so dumb. Who would want to buy this? This will never work. So obviously I'm not a techie, for this reason. To be honest, that's how all futurists make, you know, catastrophic mistakes. So I'm no different from Elon Musk. So yeah, these companies used music first to entice people, and once the iPod was in everyone's pocket, it was much easier to sell the iPhone. I think this is absolutely what you said about Spotify, and about streaming services in general, actually. And with the smart speaker, it seems that people were buying them first as a speaker, and then as a speaker that you could command with your voice. So you could say, Alexa, play this song. And Spotify is obviously sort of embedded within Alexa in that context. So it is this pattern.

And you mentioned the Lessig quote; I really like that quote. There's this idea of music always being perceived as harmless in a way, as this thing that can only bring positive things, right? So in that sense, what could possibly be the harm of having a speaker? Because the main point of it is to have music in your home, and what is more beautiful than that, right? Taking advantage, in a way, of the pleasure of music by embedding it with invasive technology seems to be a common pattern. And taking that further.

James Parker - Sorry to interrupt, but is this sort of association of listening with care a kind of further extrapolation of that, or even somehow of the same order? Do you mean that the device is caring for you by sort of attending to your atmospheric needs, or what have you?

Audrey Amsellem - Yeah, absolutely. I mean, in the way that comes through, for instance, in The Smart Wife, in that kind of continuity with domestic labor and servitude. But yeah, absolutely, I think it's definitely part of this idea of listening with care, and then, from our point of view, a perversion of that care, right? There's a perversion of what music is, in some sense. There is a perversion of what the female voice is supposed to be; it's not supposed to be this sort of submissive servant. So yeah, it's always playing with these poles, and it's quite dystopic, actually, in some way.

James Parker - So, to be really crude, music streaming is a kind of Trojan horse for getting a listening device into your home, in a certain kind of way. And then once it's in the home, the key difference, although Spotify is also partly involved in this game, I think you mention at one point, is that suddenly the voice becomes… I mean, when I think about my own encounter with speech recognition technologies, I remember, around the end of the '90s, there was some call routing that was done with speech recognition. It was pretty crap and frustrating. And then a few friends who had computers had, you know, maybe their dad had bought some automatic dictation software, but it was all pretty rubbish. And the history of speech recognition is a very long history: as long as there's been computing, there have been people trying to do speech recognition. But it's sort of undeniable that the voice assistant is the moment of its mainstreaming. And although Siri is often talked about as the first one, really it's the smart speaker, I think, that takes it truly big. So music is the route into the home, but what happens when the smart speaker enters the home is that voice and speech become an interface, really for the first time. And then you have a lot of analysis of the ways in which voice and speech are exploited. I mean, you also talk about a lot of other dimensions of the smart speaker: its gender dimensions, the labor practices, the diverse forms of labor that are hidden in the production of this tiny little object. So, what should we talk about? Should we talk about the labor stuff, or the, you know, capture and extraction?

Joel Stern - Let's take up that question of voice as an interface, and as a commodity that, you know, becomes viable via the smart speaker.

Audrey Amsellem - Yeah, so there's the voice of Alexa, or whatever the voice assistant is. In my case, it's Alexa, but we could talk about other ones. And there's the voice, obviously, of the user. I think voice recognition is among the scariest things happening today. In the past few years, there has been a really fascinating amount of scholarship on facial recognition and the dangers of facial recognition, and I definitely see voice recognition as a continuation of that. I am quite concerned with the technology that's being developed. I spent quite a bit of time talking about patents in the dissertation, and there is also already existing technology; obviously I'm talking about Alexa, and Amazon Halo is another example that I give. It's completely normalized for these devices to record your voice, create voice profiles, and determine your identity based on the sound of your voice. Gender is an obvious issue here, and not just the gender of the voice of Alexa, right, but actually gender recognition, or often misrecognition, through the voice: this idea that, generally speaking, male voices are going to be lower than female voices, which goes completely against all the ways that we're actually trying to change these things, right, that we're trying to let people define their own subjectivity. I think anyone in voice studies or sound studies would be quite critical of this. And I cite some people like Nina Sun Eidsheim, who has done incredible work on this.

James Parker - This is similar to the point I was making about the Echo Nest in some ways, but one of the things about a smart speaker is that it's a portal to a form of listening that is always changing and which you never really know. You never know what kind of listening it's doing. When you sign up to the terms and conditions, it's not like you get an update. You don't get an email that says: now we listen to your emotion, and in these ways, and we do it exactly like this; now we're listening to whether you sound sick. You never really know, and that's part of the whole point. I think you say at one point in the thesis that these are loss leaders: they don't make any money off smart speakers. The only reason you're going to sell a smart speaker is because the initial, whatever, 20 Alexa skills are now 100,000. The forms of analysis, the forms of products offered on this platform, this sort of emerging ecosystem, are massively more than just a music delivery device in your home. So one of the things I always find hard with these things is that when I read a patent that says Amazon is imagining being able to sell you chicken soup because you sound sort of sick, I wonder: how much of this stuff is actually happening? I haven't done that analysis myself, and again, I don't really understand. I presume that it is, or very soon will be. But do you have a sense of which of the actual concrete forms of attention to the voice are in fact being used right now, and which are more speculative and future-oriented?

Audrey Amsellem - No, and actually that's part of the framework I develop around the neoliberal ear: it's always opaque and always ambiguous, and you never really quite know. And legally it would be complicated for them to implement this. It's obviously something that should be regulated, but it's not quite the same as facial recognition. In the US at least, which is the legal context I am familiar with, it's legal to have a CCTV on the street, but it is not legal to have a microphone on the street. It's completely illegal. So in that sense there is a legal gray area here, which is a good thing. But we don't actually know exactly how the data that's collected is being used, because there is no system of checks and balances. You have a privacy policy, obviously, and it's one you have to agree to. And if you read it attentively, you're probably going to find out stuff you didn't know was happening when you were using the technology. But the reality is that there is nothing obligating these companies to be fully transparent in their privacy policy. And these policies are written by very smart lawyers who have a particular way of wording things. Sometimes I talk about this idea of saying "collecting information such as." And this "such as": well, you should actually list everything, because "such as" is not satisfactory here, right? So we don't really know what's going on. And even if you requested it, you'd presumably just get sent a giant log that you need to hire a computer scientist to untangle. Yeah.

Joel Stern - Audrey, I really loved this chapter. It was amazing to see the analysis of the advertising and the particular kind of images of domestic normativity it echoes. I wanted to ask you about the section on speech emotion recognition, because, I mean, I was horrified, but I also loved the fact that the Echo has a frustration detection tool. And then you go on to talk about the Halo as a form of tone policing. Could you say something about the relationship between these devices and the extraction of information about people's emotional states, and then how that information is exploited and used?

Audrey Amsellem - Yeah, I think it's also part of this normalisation of ubiquitous listening, this normalisation of surveillance essentially, to sell a frustration detection tool, or even the Halo as a sort of independent device whose goal is mainly to monitor you. And I don't know how successful that device is. I haven't heard too much; I've never seen anybody with one. But this idea that somehow this would be desirable, right? They're selling this to you as something that is desirable: you should know the tone of your voice, you should have Alexa be able to detect when you're frustrated and have this information. And that is sold to you as something that you need, right? But yes, absolutely, as you said, it is scary. It is scary because also, like most of the technology we've discussed throughout this conversation, most of it doesn't work, right? Most of it is incompetent, actually. Is the device scary, or is it the fact that people desire it that's even scarier? Yeah, it's both.

James Parker - Do you know Chris Gilliard's work? He's written about this idea of luxury surveillance, and he's trying to get at exactly this. So, the way that you situate the history of all of this, especially recording, within the history of US, but not just US, race relations, and, obviously, the recording industry. But just more generally, surveillance is very closely tied up with capitalism, and obviously Ruha Benjamin and various people have theorized this. Anyway, Chris Gilliard has this idea of luxury surveillance as a sort of white, but not only white, desire.

And so, you know, he shows how surveillance as domination becomes a kind of testing ground for rolling out various forms of surveillance that, at the same time or just subsequently, become objects of consumer fetishization. And then the racial dynamics of those play out really differently. So yeah, I think there's that desire for surveillance as a luxury good.

Joel Stern - Do you mean in the context of the high-end smart home?

James Parker - Right, exactly. But the Halo device is surely the perfect example, because it is literally a prison-like home-surveillance bracelet, an ankle monitor effectively. But now it's being sold by Amazon as a luxury product, you know. So yeah.

Audrey Amsellem - No, I think it's very interesting that you're tying this to whiteness. There's so much fascinating work that's been done, and that's being done. I'm thinking of Thao Phan's work on Alexa as well, and the way she takes the paradigm of how we've been talking about gender and Alexa and sort of turns it on its head. Her racial analysis of the politics of Alexa and the process of listening is fascinating.

But yeah, there's a small part in the dissertation where I talk about how the field of surveillance studies, in my mind, the way I interpret it, emerges as a field not because surveillance is a new phenomenon, right? It emerges as a field because all of a sudden it's concerning everyone… including people who are not traditionally subjected to surveillance. Like white people. And people with means, and people in power. It becomes this serious field of inquiry only then. But the reality is that it's been the quotidian for Black people, obviously, in America, also in the colonies… Jewish people. It's been the quotidian for lots of populations. And so there is also that question. I don't know how much thinking there actually is about the race of people who buy these devices, and I don't know if this is even information we necessarily want. But I would be interested to see whether this is not a concern because white people historically didn't have to worry much about surveillance. And so they're buying these Halos and these devices, and other people might be more skeptical of them, and justifiably so.

James Parker - I think I've read that about Amazon Ring: that it's bought mostly in white neighborhoods, mobilizing a politics of racialized fear around security and home security and so on. Especially because part of the point of Amazon Ring was to stop theft of Amazon-delivered packages, and there's a racialized dimension to the delivery people for the packages too. So often you've just got a thing targeted at a brown person delivering a box, let alone the neighborhood. So maybe this is a way of segueing out from the domestic sphere into public space, because the Ring is kind of on the threshold of the house. Now, I think Amazon Ring is, or can be, hooked up to Amazon Guard, I think it might be called, or maybe that's the Google one. But there's a kind of voice-activated, microphone-assisted dimension even to home security cameras and so on. So you can see the logic spilling out of the home. I haven't encountered a voice user interface in public space yet. I'm sure it's coming. I'm sure that's what people are imagining with, you know, the Google Toronto smart city and whatever. But you talk about LinkNYC, and that wasn't an example I knew anything about until I read your thesis. So could you just introduce it? I'm sure it's less familiar to people than Spotify and Alexa: what it is and how you think about it.

Audrey Amsellem - So LinkNYC is a communication hub: basically these massive kiosks that are scattered around New York City, and they provide an array of services. Free Wi-Fi is a big appeal, but also access to city services. Originally, when it was launched in 2016, it had, and still has, a tablet, and it also had access to a browser on the tablet, such that people could go and browse the internet. There's also a microphone, because you can make phone calls, and there are cameras: three on top of the screen, looking at the street, and one on top of the tablet. The cameras facing the street were originally, I think, equipped with sensors. This was removed; it's unclear exactly what happened there. But the cameras are set to record only, according to employees of CityBridge, which is the larger company that LinkNYC is part of, if somebody is trying to vandalize the kiosks.

Obviously, even if this is the case now, that could change over time, and this could be important data for companies to have. Where LinkNYC became complicated, and became an interesting object for me and for a lot of New Yorkers, is that it was financed by a Google company called Sidewalk Labs. You mentioned Toronto, and that's the same Sidewalk Labs in Toronto. So that was the first big question: this is all funded by Alphabet, which is Google. The second thing that arose with LinkNYC is the removal of that web browsing option, because a lot of homeless people were actually using the web browsing, to browse the web in general, but also to play videos on YouTube, to play music on YouTube. And that was loud. So local residents had apparently complained that homeless people were gathering around the kiosks, that the kiosks were attracting homeless people, and some residents were unhappy about that. As a result, they actually removed the web browsing. So that was the departure point for the larger exploration of LinkNYC.

James Parker - Can I ask a couple of follow-up questions before we get into the listening-specific dimensions? This is going to sound silly, it's very basic, but what did they think this was for? Because obviously 2016 is only six years ago, but that's quite a long time ago in the history of smart technologies and smartphones and so on. It seems completely obvious to me that I would never have any reason to use a LinkNYC kiosk. I mean, I'm always going to have my smartphone with me, probably. I can't think why I would ever need free Wi-Fi, because I've got 4G and lots of people now have 5G and everything. So when I think of LinkNYC, I think: well, they're embedding cameras everywhere. And microphones. And the only people who are going to use this are people who are on the other side of a tech dividing line. And so part of me is like, well, is the point of this to increase accessibility to the web and so on, kind of like a library service? And part of me is like, well, that's surely the purpose, because anybody who's rich is never going to use it. So obviously homeless people, and people who don't have endless free data and so on, poor people basically, and that's going to be racialized and so on, they're the users for this. What else were they imagining? Surely that's part of the sales pitch. So was Google running a kind of we're-empowering-the-poor-with-digital-technologies line? What was the story being used to sell LinkNYC, and why were they surprised when homeless people started to use it? It seems obvious.

Audrey Amsellem - Yes, absolutely. That was completely part of the discourse, this idea that this was a device that was going to bridge the digital divide. They kept using that term over and over again, so it's definitely part of the marketing discourse. And then when they cut off the web browsing, they released a series of tweets and then a press release as well, and they said, you know, it's to make it more accessible, because if people are just taking over the tablet, then it's less accessible. So there's a weird irony here. But this is also where I think Shannon Mattern's work is important, because she talks about how these tech companies don't look at decades of good practice in public-good spaces, right? Like how libraries have dealt with this problem. They've been dealing with it for a long time. But tech companies don't look at what libraries do, they look at other tech companies, because public good may not actually be the main goal here. So you put hundreds of giant kiosks with speakers and a browser in the city, and it doesn't really occur to you that this will significantly alter daily life. And then all of a sudden people are just, like, misusing the technology, right? But were they really misusing it, or simply using it?

So their response becomes very punitive, actually, very exclusionary, quite harsh and sad.

James Parker - I mean, that point about where the knowledge is in relation to the digital divide and the public use of technology, and the answer being: it's in libraries, duh. It's just such a great point. It seems so obvious now you mention it. And I'm not remotely surprised that tech companies didn't tap libraries on the shoulder. But it's just such a perfect example of the kind of myopia of so many tech companies who are claiming to solve every imaginable problem. You saw the same thing with COVID: we literally know how to do this, we've got public health, there are so many public health measures that are known to work. Meanwhile, tech companies are selling completely untested bullshit because it's a tech hack.

James Parker - So yeah, so interesting. How do you think about LinkNYC? What is the relationship between the technology and its politics?

Audrey Amsellem - One of the aspects of it is that it presents itself as a public good. As I just mentioned, although it may have some positive applications, that is not the primary purpose of these devices, because the primary purpose is data gathering. So throughout that chapter, I actually used frameworks of noise and listening to structure my metaphorical understanding of these devices. I talk about, for example, listening as silencing: exploring this tension between public good and private entities by basically arguing that the neoliberal ear has a starvation for data, which in the case of LinkNYC actually isolates and excludes people. And I think LinkNYC, Spotify and Alexa are very much tied in because of that constant starvation, that constant thirst for data, and this normalizing of the ubiquity of listening and recording, with various nuances between all of these.

James Parker - What kind of data does LinkNYC want specifically? What's the untapped data resource they're harvesting?

Audrey Amsellem - So it depends who you ask, right? The first, more obvious thing is that LinkNYC is equipped, and I actually did mention this, with two huge advertising screens, one on each side. So if you're walking the streets of New York, if you haven't been in a few years, you're going to see these huge light sources with these huge advertising screens. So you have advertising there, and one of the potential uses is to be able to target advertising. With the free Wi-Fi component, even though you don't have the web browsing, you still connect with your phone, right? So there's potential for this data to be connected.
So if you know, if you're using Google Maps, let's say, and you're on the LinkNYC network, then potentially you could have targeted advertising, sort of the image appears in front of you as you're walking, right? So that could be—

James Parker - So, for people who watched Minority Report and didn't realize that it was a dystopia, basically.

Audrey Amsellem - Yeah, and as far as I know this is not happening now, but we know that they developed the technology. It was actually an undergrad student who discovered this by going through GitHub and finding some of their code. As far as we know, this is not technology that's implemented on the kiosks right now, but it very easily could be. So, when we talked about Alexa earlier, we said you're gathering information on domestic space; LinkNYC is just gathering information in a different space, right? It's a Google project. Google has been gathering information on our online behavior; you also have Google Home, which gathers information on your domestic space; and now they can also gather information in public space. So it's tied in, in that it's just part of this larger thirst and starvation for data on pretty much every facet of our lives.

James Parker - Would I be right in saying that when you turn to LinkNYC, you're beginning to move towards thinking about listening as data collection? A slightly more metaphorical sense; I think Robin James also does this in her work on acousmatic listening. So is that what's going on? We're no longer so much concerned with microphones as with signal detection through a field of noise as the dominant way of thinking about data.

Joel Stern - She calls it acousmatic dataveillance. I remember being really struck by that essay she wrote, I think it was on Sounding Out!. It must have been 2014 or 2015, and she was trying to connect it with Pierre Schaeffer's idea of acousmatic sound. It was always a question of what kind of listening we are describing here. Is the metaphor of an ear still relevant when we are thinking mostly in terms of data rather than audio or sound itself?

Audrey Amsellem - It is not a physical ear. It is not an ear that is actually able to process sound the way a human can. In that sense, it's listening in the broad sense of the word. So on the one hand, in some of my work, listening is very much literal, very much an action, and in other ways it's very much metaphorical. But it's trying to highlight a general system in which sound can function as a modality of power, essentially. But yeah, I remember that blog actually, I think I cite it a couple of times. Acousmatic dataveillance. It's super interesting, and I'm very curious to see what more comes out, because, well, I don't know if she would say she's a sound studies person, probably; I mean, she's a philosopher who writes mostly on popular music. But I'm very curious to have more people with that interest and that background also thinking about the various ways that listening can be applied to surveillance capitalism. In her case, I think she was talking about the NSA, so it's slightly different, but just thinking about surveillance and listening as a framework, I'm looking forward to having more of this work out there to enrich our understanding.

James Parker - It's funny that you mention the NSA, because I was wondering, and this is not in any way a criticism because it's an enormous project already, why this is not a story for you about government surveillance. And that's partly because you're trying to tell a story about neoliberalism, although, of course, you could probably make the argument that neoliberalism and the NSA go together pretty well and so on. I want to talk a little bit about where you end in the thesis and this idea of trustworthy listening. But you must have confronted the question: how am I going to deal with state surveillance? Obviously, it's not a centerpiece of this work. And I'm confronting that myself, so I'm basically asking you to help with my own work! But yeah, how did you think that through?

Audrey Amsellem - So yeah, I actually started this project with LinkNYC first, and my initial master's thesis had a lot more about the Patriot Act, and about movement, particularly the history of New York City and movement, the control of movement within the city and the control of behavior. It had a bit more of that angle. When the dissertation had to take shape, I had to impose limits on myself, and I didn't impose too many, but that was one I had to impose, just because that literature is so vast and I was not going to be able to really tap into it, and I'm already very interdisciplinary. So I don't have a solution for you at all, but I absolutely agree with what you said: it could very well fit into this, because it very much ties in with neoliberalism, especially when we consider that a lot of the PRISM program relied on data from private corporations. It relied on data from Google, on data from Facebook. So there is no real intervention to be made here; it's a fact that there is an interplay between the two.

James Parker - Well, you could go back further. I mean, all of the speech recognition stuff, and a lot of the early computer music stuff, was directly funded by DARPA, or a lot of it was happening in very government-backed companies like Bell and IBM, but a lot of it was happening straightforwardly through military funding, including the work on music; the first work on music transcription, I think, is DARPA funded. I mean, it's such an obvious point, bordering on dumb, to say that AI is funded by the military, but it's just so clearly true. Just look at everybody's PhD thesis: when you read them, it's DARPA funded, DARPA funded, DARPA funded. So there's a sense in which the commercial applications all come out of government and military funding. The whole paradigm. So, I don't know where that gets you really. Maybe this is where we can turn to the possibility of hope or reform… You do that thing in the thesis where you say, oh god, that's a bit grim, what now? And then you suggest this term trustworthy listening as a kind of antidote…

Joel Stern - But even before you get there, there is a section on the detournement of LinkNYC by hackers and artists who find a way to creatively misuse the technology and, in doing so, open up some other possible ways of thinking about these infrastructures in more emancipatory ways. Could you say a little bit about that work? I think it was Mark Thomas, maybe, who made these kiosks play the Mister Softee ice cream music.

Audrey Amsellem - Yeah, yeah. So, similar to the NSA question, I wish I had a bit more space to talk about this; with LinkNYC, it just seemed very important that I do. There are obviously other examples with Alexa, and I'm sure you're familiar with many of them. But yes, there's been all kinds of artistic vandalism and various forms of hacking and hacktivism, in a sense, although this particular person you're talking about didn't think of himself as a hacktivist. What he basically did is he recorded a distorted version of the Mister Softee song, the ice cream truck song, a distorted, uncanny version of it that was particularly creepy. He created a delay between the moment he programmed it and the moment it launched on the kiosk, and he called this minute the magic minute, right? Because if it's playing while he's standing there, it's not as uncanny; it's not the device playing on its own. So the sound starts resonating, and people around on their smartphones are taking videos, posting them on Twitter, and it becomes viral in that way. And I think it was perfect.

It was really the perfect conclusion for the chapter for me, because it's a moment in which the kiosk becomes a sonic object again. When it was used by people, including homeless people but not only, it was this sonic, musical moment, maybe a moment of a different identity for the city; you had all these questions that would pop up. And then that was completely silenced. And he's bringing that music back again, but the music he's bringing back is this very eerie, very weird kind of sound. Throughout my work, actually, with Alexa, I talk about the uncanny as having a potential to reveal things that don't quite work, that are not quite right. And this is what we're talking about when we talk about surveillance capitalism, right? There is a moment in which we're confronted with these devices, and there is something eerie about them, something that's not quite right, and I think we should trust that instinct, in a way. And so, can the neoliberal ear be detourned?

Audrey Amsellem - You know, it's an open question, definitely. But I think where my work has taken me is more towards the idea that we can regulate more, really. Maybe in a few years I'll have a different answer, but at this stage in my research, this is where I see the potential for answers. There's a lot being done. Just yesterday, and this is a bit tangential, the White House released a Blueprint for an AI Bill of Rights. And you have a bunch of organizations, lawyers, activists, artists, a lot of whom you work with, right, who have hope and who build alternatives. For me, art is always going to be the most important way for us to think through and live through the situation. But as far as really detourning in a practical way, I think regulation is the main way to do it.

James Parker - Yeah. I mean, it's tempting to end there. As a legal academic, I'm lacking in faith in regulation. Maybe I'm meant to say the opposite, that I have a lot of faith in regulation. But we've finished quite a few conversations on this question, not the what-is-to-be-done question as in a program, but how to even situate or handle the hope story.

Joel Stern - Like where you don't really want to see the legal infrastructure as the answer.

James Parker - Because one obvious thing to say would be: well, you begin with copyright and property as the way that we arrive at Spotify and Alexa and whatever, and the establishment of private property and its defense, or infrastructuralization, by the legal system in concert with capital, that's the originary moment of global capitalism, in a certain crude way of thinking. So it's like law isn't the answer, right? I'm not saying that you think law is the answer, but it's just hard to... It's interesting, because Shoshana Zuboff absolutely makes the argument that regulation is the answer, and a lot of people have been critical of her on those grounds, but that's partly because she's defending capitalism; she just wants a less surveillant capitalism. Whereas I definitely don't read you as defending capitalism or neoliberalism. So yeah, I don't know; it's completely unresolved for me. We certainly need more regulation, but I just don't know where else to point my energies.

Audrey Amsellem - No, yeah, I know what you mean. Copyright law was also invented under a very specific kind of government, right? It was invented for censorship. We're not quite like that anymore, thankfully, but it's true that there are limits to what the law can do, limits to what regulation can do. And I think in the end it's individuals doing the best they can to think through these questions, and trying to promote at the individual level what they think our lives should be like. And we do that through community. So I actually think initiatives like Machine Listening, and various kinds of initiatives that I also cite in my work, are the way to do it.

James Parker - Do you want to name check some of them?

Audrey Amsellem - So for example, ReThink Link is an organization that I cite specifically in talking about LinkNYC. Any kind of community-building initiative that is going to inform people and empower people, because, as we've talked about throughout, these are various forms of disempowerment. And I think any moment in which you can empower people, with knowledge or with community or with art or whatever, little by little, does produce significant change. Regulation is the more practical way to do it, but I agree with you that there are other things we have to look at, other avenues that may not be a direct, abrupt solution, but that are at least a way we can cope with this, right?

Joel Stern - Yeah, consciousness raising, community organizing, cultural production.

James Parker - We also need to seize music as a Trojan horse for the revolution.

Audrey Amsellem – Absolutely.

James Parker - On that note, it's been fantastic to speak with you, Audrey. People can find your dissertation online, is that right? Yeah, on ProQuest or somewhere like that. We'll put it in the Machine Listening library.

Audrey Amsellem - Amazing. Thank you.