Communicating with an AI Improvisor

As music AI systems become more interactive, a major challenge is for human and machine to better understand each other. When human musicians improvise together, communication is natural and intuitive: performers use non-musical cues to keep things flowing. But when your improvisational partner isn’t human, this natural communication breaks down. How do you know what the AI is “thinking” or “feeling” if it’s just a black box?

In this research we investigated how musicians and artificially intelligent music improvisors can keep a performance flowing through non-musical communication. We trained a neural network drummer to improvise with human musicians and explored bi-directional communication between human and machine performers.

The AI improvisor communicated its confidence in the performance using a simple animated emoticon: when the AI lacked confidence it avoided eye contact and frowned; when confident, it looked at the human performer and smiled. Sustained confidence caused the emoticon to glow. In the other direction, we communicated the human performer’s state to the AI by measuring skin conductivity during the performance, as prior research had demonstrated that skin conductivity can serve as a proxy for being in musical flow states.
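The confidence-to-emoticon mapping described above can be sketched as a simple state function. This is an illustrative sketch only: the threshold values, state names, and glow timing are assumptions for exposition, not the project's actual implementation.

```python
# Hypothetical sketch of mapping the AI's confidence to an emoticon display.
# All thresholds and field names here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class EmoticonState:
    gaze_at_performer: bool  # confident -> make eye contact; otherwise avert
    expression: str          # "smile" when confident, "frown" otherwise
    glowing: bool            # sustained confidence triggers a glow


def emoticon_for(confidence: float, sustained_steps: int,
                 threshold: float = 0.5, glow_after: int = 8) -> EmoticonState:
    """Map a confidence value in [0, 1] to the displayed emoticon state."""
    confident = confidence >= threshold
    return EmoticonState(
        gaze_at_performer=confident,
        expression="smile" if confident else "frown",
        glowing=confident and sustained_steps >= glow_after,
    )
```

For example, a long run of high-confidence steps would yield a smiling, glowing emoticon making eye contact, while a momentary dip below the threshold would immediately switch to an averted frown.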

In a Silent Way: a short video explaining the research
When the AI improvisor lacks confidence in the performance it avoids eye contact and frowns

Our results show that communication from machine to human improves the human player’s flow and the quality of the improvisations. Results for communication from human to machine were inconclusive, however, and more research is needed to establish effective human-to-machine communication.

Our longer-term aim is to build AI systems that are active and responsive players in improvised musical performances with human musicians. We deliberately steered clear of over-anthropomorphised representations of the AI, to avoid mistaken conceptualisations of artificial intelligence as human.

Project outcomes

The paper describing this research in detail was presented at the ACM CHI Conference in Glasgow, 4–9 May 2019. You can download a preprint of the paper from arXiv.

This research was supported by Australian Research Council grants DP160100166 and FT170100033.

Project members

Jon McCormack
Toby Gifford
Patrick Hutchings

Related Projects

Mirror Ritual

Improvising with an AI Drummer

Adaptive music for immersive experiences

Interfaces for improvisation