A machine learning approach to hierarchical categorization of auditory objects

William Coleman, Sarah Jane Delany, Ming Yan, Charlie Cullen

Research output: Contribution to journal › Article › peer-review

Abstract

With the advent of new audio delivery technologies come opportunities and challenges for content creators and providers. The proliferation of consumption modes (stereo headphones, home cinema systems, ‘hearables’), media formats (mp3, CD, video and audio streaming), and content types (gaming, music, drama and current affairs broadcasting) has given rise to a complicated landscape where content must often be adapted for multiple end-use scenarios. The concept of object-based audio envisages content delivery not via a fixed mix but as a series of auditory objects which can then be controlled either by consumers or by content creators and providers via accompanying metadata. Such a separation of audio assets facilitates the concept of Variable Asset Compression (VAC), where the elements that are most important from a perceptual standpoint are prioritized before others. In order to implement such a system, however, insight is required first into which objects are most important and second into how this importance changes over time. This paper investigates the first of these questions, the hierarchical classification of isolated auditory objects, using machine learning techniques. We present results that suggest audio object hierarchies can be successfully modeled and outline considerations for future research.
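The abstract describes the task only at a high level, and the paper's actual features, labels, and model are not given here. The snippet below is a minimal illustrative sketch of how a hierarchical importance classifier for isolated auditory objects might be framed as a supervised learning problem; the synthetic feature vectors, the importance tiers ("foreground", "midground", "background"), and the choice of a random forest are assumptions for demonstration, not the method reported in the paper.

```python
# Illustrative sketch only: the feature layout, importance tiers, and model
# choice are assumptions for demonstration, not the paper's actual method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Stand-in feature vectors for isolated auditory objects
# (e.g. summary statistics of spectral features); 20 dimensions is arbitrary.
n_objects, n_features = 300, 20
X = rng.normal(size=(n_objects, n_features))

# Hypothetical hierarchical importance tiers assigned to each object.
tiers = np.array(["foreground", "midground", "background"])
y = tiers[rng.integers(0, len(tiers), size=n_objects)]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Any standard classifier could stand in here; a random forest keeps
# the sketch simple and dependency-light.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Report per-tier precision/recall on the held-out objects.
print(classification_report(y_test, clf.predict(X_test)))
```

In practice the feature vectors would be derived from the audio of each isolated object and the tier labels from listener judgements of perceptual importance; with synthetic random data, as here, the report simply demonstrates the pipeline rather than any real classification performance.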

Original language: English
Pages (from-to): 48-56
Number of pages: 9
Journal: AES: Journal of the Audio Engineering Society
Volume: 68
Issue number: 1-2
DOIs
Publication status: Published - Feb 2020
