ARME research data, models, and software resources

Several ARME datasets, software tools, and research repositories are now listed in the project’s public UKRI/Gateway to Research record. Together they document the data and technical infrastructure behind the project’s work on musical synchronisation, onset annotation, and augmented-reality rehearsal systems.

AR Asynchrony

The AR Asynchrony repository contains the project files, data, and analysis scripts from an augmented-reality synchrony study using video stimuli. It supports researchers who want to test audiovisual asynchrony in virtual- and augmented-reality setups.

ARME Audio Recordings

The ARME audio corpus, recorded in sessions beginning in September 2021, includes more than 750 individual recordings from over 250 performances. The recordings capture multiple playing styles, including concert-style performances and performances with different levels of musical expression. Part of the collection has been released through Virtuoso Strings, while other recordings remain internal ahead of future research outputs.

ARME GitHub account

The ARME GitHub organisation collects software, datasets, and analysis resources stemming from the grant. It provides a public discovery point for project repositories, including data resources and tools used in ARME demonstrations.

ARME Onset Annotations

The project has curated human labels of note events for recordings captured through ARME. These expert annotations address a known gap in music information retrieval: the shortage of annotated soft onsets in string instrument recordings. The public Researchfish/GtR record reports that more than 340 individual audio recordings have been annotated.
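Annotations like these are typically used as ground truth when evaluating onset detectors: each reference onset is matched to at most one detected onset within a small tolerance window, and precision, recall, and F-measure are computed from the match counts. The sketch below is a minimal illustration of that evaluation scheme, not the project's own code; established tools such as mir_eval use an optimal bipartite matching rather than the simple greedy matching shown here, and the 50 ms tolerance is just a common default.

```python
def onset_f_measure(reference, estimated, tolerance=0.05):
    """Greedy one-to-one matching of onset times (in seconds) within a
    tolerance window, returning (precision, recall, F-measure).

    Illustrative only: mir_eval's onset metrics use an optimal bipartite
    matching, which can match more pairs than this greedy pass.
    """
    ref = sorted(reference)
    est = sorted(estimated)
    used = [False] * len(est)       # each estimate may match once
    matched = 0
    for r in ref:
        for j, e in enumerate(est):
            if not used[j] and abs(e - r) <= tolerance:
                used[j] = True
                matched += 1
                break
    precision = matched / len(est) if est else 0.0
    recall = matched / len(ref) if ref else 0.0
    denom = precision + recall
    f_measure = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f_measure
```

For soft onsets on string instruments, the choice of tolerance matters more than usual, since even expert annotators can disagree by tens of milliseconds about where a bowed note begins.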

Haydn Annotation Dataset

The Haydn Annotation Dataset contains note onset annotations from 24 participants with varied musical experience. The annotation experiments use recordings from the ARME Virtuoso Strings Dataset and support research into best practices for annotating soft onsets.

Virtuoso Strings

Virtuoso Strings is a dataset of string ensemble recordings and onset annotations. The ISMIR 2023 Late-Breaking/Demo listing describes it as a dataset for timing analysis and automatic music transcription, including repeated recordings of quartet, trio, duet, and solo ensemble performances with different temporal expressions and leadership roles.

Adaptive Metronome

The Adaptive Metronome is an audio plug-in for simulating cooperative timing in music ensembles. It allows users to tap along with a virtual quartet while manipulating timing fluctuation and correction strength, making it useful both for demonstrations and for studies of interpersonal synchronisation.
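The correction behaviour the plug-in exposes can be understood through the standard linear phase-correction model of ensemble timing: on each beat, a player shifts its next onset by a fraction (the correction strength) of its asynchrony with the other players, plus random timing noise. The sketch below simulates a virtual quartet under this model; it is a simplified illustration under assumed parameter names (`alpha`, `motor_noise`), not the plug-in's implementation.

```python
import random

def simulate_ensemble(n_players=4, n_beats=50, period=0.5,
                      alpha=0.25, motor_noise=0.01, seed=1):
    """Simulate ensemble timing with linear phase correction.

    Each player's next onset is its previous onset plus the nominal
    period, minus `alpha` times its asynchrony with the ensemble mean
    (phase correction), plus Gaussian timing noise (fluctuation).
    Returns a list of onset-time lists, one per player.
    """
    rng = random.Random(seed)
    onsets = [[0.0] for _ in range(n_players)]  # all start together
    for _ in range(1, n_beats):
        prev = [p[-1] for p in onsets]
        mean_prev = sum(prev) / n_players
        for i in range(n_players):
            asynchrony = prev[i] - mean_prev     # ahead (+) or behind (-)
            noise = rng.gauss(0.0, motor_noise)
            onsets[i].append(prev[i] + period - alpha * asynchrony + noise)
    return onsets
```

With `alpha = 0` the players drift apart as noise accumulates; raising the correction strength pulls asynchronies back toward zero, which is the trade-off the plug-in lets users explore interactively.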

Max Di Luca

Max Di Luca is the project's principal investigator. He is an Associate Professor at the University of Birmingham (UK) in the Centre for Computational Neuroscience and Cognitive Robotics.