Robotic Blended Sonification: Consequential Robot Sound as Creative Material for Human-Robot Interaction
Current research on robot sound generally focuses either on masking the consequential sound produced by the robot or on sonifying data about the robot to create a synthetic robot sound. In this talk, I will present an approach that captures, modifies, and utilises, rather than masks, the sounds that robots are already producing. In short, this approach relies on capturing a robot’s sounds, processing them according to contextual information (e.g., collaborators’ proximity or particular work sequences), and playing back the modified sound. Previous research indicates the usefulness of non-semantic, and even mechanical, sounds as a communication tool for conveying robotic affect and function. Building on this, the alternative approach presented here makes two key contributions: (1) a technique for real-time capture and processing of consequential robot sounds, and (2) a tool for exploring these sounds through direct human-robot interaction. Drawing on methodologies from design, human-robot interaction, and creative practice, the resulting concept, ‘Robotic Blended Sonification’, transforms consequential robot sounds into a creative material that can be explored artistically and within application-based studies.
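The capture-process-playback loop lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of that loop in Python, assuming the python-sounddevice library for full-duplex audio; the proximity source (`read_proximity`) and the gain-based transformation (`process`) are placeholders invented for illustration, not the system’s actual signal chain, which the abstract does not specify.

```python
"""Minimal sketch of a capture-process-playback loop (all names hypothetical).

Assumptions: the robot's consequential sound reaches the default input
device (e.g., via a contact microphone), python-sounddevice is installed,
and collaborator proximity is available as a normalised value in [0, 1]
from some external tracker (stubbed out here).
"""

import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44_100
BLOCK_SIZE = 512  # roughly 12 ms per block at 44.1 kHz


def read_proximity() -> float:
    """Stub: collaborator proximity in [0, 1] (1.0 = very close).

    In a real system this would come from the robot's sensors or an
    external tracking system.
    """
    return 0.5


def process(block: np.ndarray, proximity: float) -> np.ndarray:
    """Shape the captured robot sound using contextual information.

    Placeholder transformation: boost the sound as a collaborator
    approaches, so the robot's activity becomes more audible.
    """
    gain = 0.5 + proximity  # 0.5x (far) .. 1.5x (close)
    return np.clip(block * gain, -1.0, 1.0)


def callback(indata, outdata, frames, time, status):
    """Duplex audio callback: capture, modify, and immediately play back."""
    if status:
        print(status)
    outdata[:] = process(indata, read_proximity())


# Full-duplex stream: input is the robot's consequential sound,
# output is the blended sonification played through a nearby speaker.
with sd.Stream(samplerate=SAMPLE_RATE, blocksize=BLOCK_SIZE,
               channels=1, callback=callback):
    print("Blending robot sound... press Enter to stop.")
    input()
```

A callback-driven duplex stream keeps capture and playback on the same low-latency audio path, which matters here because the modified sound must stay perceptually tied to the robot motion that produced it; any context-dependent processing would replace the placeholder gain stage.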