Our devices are listening to us. Previous generations of audio technology transmitted, recorded, or manipulated sound. Today our digital voice assistants, smart speakers, and a growing range of related technologies are increasingly able to analyse and respond to it as well. Scientists and engineers refer to this as “machine listening”, though the first widespread use of the term was in computer music. Machine listening is much more than just a new scientific discipline or vein of technical innovation, however. It is also an emergent field of knowledge-power, of data extraction and colonialism, of capital accumulation, automation, and control. It demands critical and artistic attention.
Established in 2020 by artist-researchers Sean Dockray, James Parker, and Joel Stern, Machine Listening is a platform for collaborative research and artistic experimentation, focused on the political and aesthetic dimensions of the computation of sound and speech.
The collective works across diverse media and modes of production.
In addition to research, writing, and artworks, Machine Listening has produced an expanded curriculum, conceived as an experiment in collective learning and community formation; an online library and interview series; numerous on- and offline events, lectures, and performances; and a browser-based instrument for composing with audio and video via text.
Machine Listening emerged out of the collective’s previous work on Eavesdropping.