Generative Modelling for Controllable Piano Performance Audio Synthesis
Audio Examples
This page contains a set of audio samples in support of the paper. None of the pieces used here were seen during training.
In our work, we generate realistic piano performances in the audio domain that closely follow temporal conditioning on two essential style features
of piano performance: articulation (legato or detached) and dynamics
(loud or soft). Below we demonstrate the model's ability to perform fine-grained style morphing over the course of the synthesized audio,
with conditions either sampled from the prior (Part 1) or inferred from other pieces (Part 2).
One envisioned use case is to inspire creative, brand-new interpretations of existing pieces of piano music.
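To make the idea of time-varying conditioning concrete, the sketch below builds frame-level articulation and dynamics curves that morph gradually over an excerpt. This is a minimal illustration only: the frame rate, value ranges, interpolation scheme, and the `model.generate` call mentioned in the comments are assumptions for exposition, not the paper's actual interface.

```python
# A minimal, hypothetical sketch of frame-level conditioning curves for
# style morphing. Frame rate, value ranges, and interpolation are
# illustrative assumptions, not the paper's API.
import numpy as np

FRAME_RATE = 50   # assumed conditioning frames per second
DURATION_S = 20   # length of the excerpt to condition

n_frames = FRAME_RATE * DURATION_S
t = np.linspace(0.0, 1.0, n_frames)

# Articulation: morph from detached (0.0) to legato (1.0) over the excerpt.
articulation = t

# Dynamics: morph from soft (0.0) to loud (1.0), then back to soft.
dynamics = np.concatenate([
    np.linspace(0.0, 1.0, n_frames // 2),
    np.linspace(1.0, 0.0, n_frames - n_frames // 2),
])

# Stack into an (n_frames, 2) conditioning matrix that a synthesis model
# could consume alongside the score/MIDI input, e.g.
#   audio = model.generate(midi, conditioning)   # hypothetical call
conditioning = np.stack([articulation, dynamics], axis=-1)
print(conditioning.shape)  # (1000, 2)
```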