A live duet between piano and image.

Visuals are performed in real time in dialogue with the pianist, held between control and the need to let go.

Mais où ?

2026 - Palais des Beaux Arts de Lille

Duration : 10 minutes
Type : Interactive Performance
Role : Creation
Tools : TouchDesigner

Credits

Creation : Ruby-Maude Rioux
Pianist : Frédéric Volanti

Production : Rencontres AudioVisuelles

Supported by :

  • Union européenne (FEDER) 

  • Région Hauts-de-France 

  • Communauté d’Agglomération de La Porte du Hainaut 

  • Arenberg Creative Mine

Residency Support Team - VideoMapping Festival

Production Managers : Benjamin Duran & Sabine Costa
Technical Supervisor : Stéphanie Léonard
Artistic Supervisor : Sylvain Pouillard

Technical Team - VideoMapping Festival

Stage management : Loïc Lefrileux
Video lead : Nicolas Camarty
Video assistant : Eliette Gibaud
Sound : Clément Dubois

With special thanks to :
Antoine Manier for initiating the musical collaboration with Frédéric Volanti
Morgan Rio Photographe for the great photos
Darius Rabbi for generously sharing his knowledge and expertise
Anne-Sophie Marquant for her help with audio conceptualisation

An immersive audiovisual performance where the pianist’s hands, movement, and sound generate a living visual narrative : unfolding a poetic inquiry into where we go when it all ends.

Presented at the Palais des Beaux-Arts de Lille as part of the Festival International du Vidéo Mapping de Lille.


A grand piano stands at the center of the atrium, facing the audience. Around it, the architecture becomes a projection surface, expanding the performance into a spatial form of synesthesia.

As the first notes of Mad Rush by Philip Glass emerge, a visual narrative unfolds. Generated in real time, the visuals emanate from the pianist’s hands, captured live and translated into a luminous presence at architectural scale. The touch of each note resonates simultaneously as sound in the space and as color across the surfaces, extending the gesture beyond the instrument.

The visuals are not fixed. They are performed live : continuously shaped, adjusted, and reinterpreted in response to the music and the energy of the moment. Each performance becomes a singular configuration, where the image evolves alongside the pianist’s playing.

Structured in three movements—exaltation, rupture, and recomposition—the piece moves through attachment, loss, and reemergence, later opening into improvisation and culminating with L’Isle Joyeuse by Claude Debussy.

The audience remains anchored to the physical presence of the performer while being immersed in a shifting visual field, where sound and movement continuously unfold into color, matter, and space.

The performance is designed for multiple iterations per evening, each unfolding uniquely.

Creative process


The work was conceived during a five-day residency, where the initial direction, structure, and collaborative approach were established in close dialogue with the pianist. This first encounter set the foundation for a process grounded in improvisation and experimentation. A three-part structure was defined early, setting a trajectory for the piece while leaving space for variation. The system was designed as a framework capable of holding both control and flow—where gesture and sound initiate events, but the visual matter continues, transforms, and evolves beyond direct intention.

The development relied on iterative prototyping: testing camera placement, gesture tracking, and audio analysis through recorded material and simulations. A significant part of the process involved reduction—removing excess visual complexity to retain only the behaviors that could sustain a precise and responsive relationship with the performance.

Final calibrations were completed on site, adapting the system to the acoustics of the piano and the scale of the architecture. Each performance remained open, with visuals performed live and continuously reshaped in dialogue with the music.

Technical approach

The performance combines real-time hand tracking and audio analysis to drive a generative visual system. A camera positioned above the piano captures the pianist’s hands, while the musical signal is analyzed for notes, intensity, and dynamic variations.
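The audio side of such an analysis can be sketched in a few lines. The following is a minimal, illustrative Python example (not the actual TouchDesigner patch): an RMS level meter paired with an attack/release envelope follower, the kind of smoothing that turns a raw musical signal into a slowly varying control value for intensity. All names and coefficients are assumptions for illustration.

```python
import math

def rms(block):
    """Root-mean-square level of one audio block (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in block) / len(block))

class EnvelopeFollower:
    """Smooths a raw level into a control signal for the visuals.

    A fast attack lets a struck note register immediately; a slow
    release lets the visual response decay gently, echoing the piano's
    resonance. Coefficients are illustrative, not production values.
    """
    def __init__(self, attack=0.5, release=0.05):
        self.attack = attack      # applied when the level rises
        self.release = release    # applied when the level falls
        self.value = 0.0

    def step(self, level):
        coeff = self.attack if level > self.value else self.release
        self.value += coeff * (level - self.value)
        return self.value
```

Fed once per frame with the RMS of the latest audio block, `step` yields a value that jumps on each attack and then subsides, ready to drive brightness, scale, or particle emission.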

These inputs are translated into visual behaviors at architectural scale. Gestures trigger, pace, and deform the image, while sound activates and feeds the generative flow. Rather than producing fixed reactions, both inputs set conditions from which the visual matter unfolds—stretching, dispersing, and reorganizing over time.

The system is built as a structured environment capable of sustaining transformation. Particle simulations, geometric fields, and custom visual processes allow transitions between precise and fluid states, maintaining continuity across the performance.
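The idea that inputs set conditions rather than fixed reactions can be illustrated with a toy particle field, again in hedged, illustrative Python rather than the real system: musical energy controls how many particles are born each frame, gesture velocity biases their drift, and the particles keep coasting after the inputs stop. Class and parameter names are invented for this sketch.

```python
import random

class Particle:
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

class ParticleField:
    """Inputs set conditions; the field keeps evolving on its own.

    `energy` (e.g. an audio envelope) controls how many particles are
    spawned each frame; `gesture` (e.g. hand velocity) nudges their
    drift. Constants are illustrative, not from the actual patch.
    """
    def __init__(self, drag=0.98, seed=0):
        self.particles = []
        self.drag = drag
        self.rng = random.Random(seed)

    def step(self, energy=0.0, gesture=(0.0, 0.0)):
        # Spawn new particles in proportion to the musical energy.
        for _ in range(int(energy * 10)):
            self.particles.append(
                Particle(self.rng.random(), self.rng.random(), 0.0, 0.0))
        # Particles inherit a nudge from the gesture, then coast with drag.
        gx, gy = gesture
        for p in self.particles:
            p.vx = p.vx * self.drag + 0.1 * gx
            p.vy = p.vy * self.drag + 0.1 * gy
            p.x += p.vx
            p.y += p.vy
```

Calling `step` once with energy and a gesture, then repeatedly with no input at all, shows the point: the matter continues to move and reorganize after the direct impulse has passed.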

Alongside this, the visuals are performed live. The system remains open to intervention, enabling continuous adjustment in response to the music and the energy of each moment. The result is a performative image shaped by the interplay between gesture, sound, and a generative current that exceeds direct control.
