I am a UKRI-funded PhD researcher at the Centre for Digital Music (C4DM) in the School of Electronic Engineering and Computer Science at Queen Mary University of London, and part of the Media and Arts Technology Centre for Doctoral Training. My research is titled "Sketching Sounds: Using sound-shape associations to build a sketch-based sound synthesiser" and is rooted in Music Computing, Human-Computer Interaction, Computer Vision, and Human Perception and Cognition. I design and conduct user studies to create a substantial dataset of sound-sketches, which I analyse using established statistical methods and novel machine learning techniques.
Humans are surprisingly good at forming mental images of sound. However, sound-design tools rarely make it straightforward to turn such an idea into an actual sound.
When describing a sound, we often draw on associations with other domains such as colour, movement or light. This project investigates how people express their sound ideas through shapes and forms.
These findings inform a system that produces sound from a simple sketch, allowing anyone to explore, find and produce sounds in a simple and direct way.
You can have a look at this GitHub repository for more detailed information or check out my publications.