Bruxist’s Mesh
Bruxist’s Mesh is a 5-dimensional networked improvisation piece that I developed for the Peabody Laptop Ensemble in late 2023. In early 2024, I had the opportunity to perform it in The Cube at Virginia Tech with a select group of Peabody improvisers as part of a satellite concert sponsored by SEAMUS.
The core of Bruxist’s Mesh is a patch built in VCV Rack, a modular synth emulator. It runs on a tangle of control and audio signals, a familiar feature of my other electronic works. Using a network router and a Max/MSP-based routing and diffusion patch, I developed it into a piece where a group of performers is both fighting against and conforming to a collective gesture. I’ve realized lately that it’s most like a glitchy Ouija board.
Within the VCV sound-generating patch there are 4 democratized controls and 8 global controls, for a total of 12 networked sound-manipulating parameters. The DJ commands the global controls for each of the performers, conducting the piece by opening and manipulating various feedback and modulation paths. Each performer is assigned one of the 4 democratized controls; their performance parameters are the XY movement of their mouse and their assigned slider. The XY controls give each performer individual expression, whereas the democratized controls shift in unison across the patches. The audio is routed out to the DJ, who diffuses and manipulates the sound throughout the performance space.
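The piece itself lives in VCV Rack and Max/MSP patches rather than in text, but to give a sense of the plumbing, here is a minimal sketch of the control flow, assuming OSC over UDP. The host, port, and address patterns are hypothetical stand-ins, not the actual message format of the patch:

```python
# A minimal sketch of the networked control flow, assuming OSC over UDP
# via python-osc. The host, port, and address patterns are hypothetical;
# the actual piece runs as VCV Rack and Max/MSP patches, not a script.
from pythonosc.udp_client import SimpleUDPClient

DJ_HOST = "192.168.1.10"   # hypothetical address of the DJ's routing patch
DJ_PORT = 9000             # hypothetical OSC port

client = SimpleUDPClient(DJ_HOST, DJ_PORT)

def send_performer_state(performer_id: int, mouse_x: float, mouse_y: float,
                         slider: float) -> None:
    """Send one performer's three parameters, normalized to 0.0-1.0.

    mouse_x / mouse_y carry the individual XY expression; the slider is
    that performer's assigned democratized control, mirrored in unison
    across every patch on the network.
    """
    client.send_message(f"/performer/{performer_id}/xy", [mouse_x, mouse_y])
    client.send_message(f"/performer/{performer_id}/slider", slider)

def send_global_state(control_id: int, value: float) -> None:
    """The DJ broadcasts one of the 8 global controls to all performers,
    e.g. to open a feedback or modulation path in the VCV patch."""
    client.send_message(f"/global/{control_id}", value)
```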
The benefit of performing this in the Cube, aside from its 134.6 audio channels, is the physical size of the space. It is rare to be able to manipulate the azimuth of sounds so transparently. The spatial controls let me push at the performers’ intuition and their ability to distinguish their own contributions. Instead of confusion, the result was something like a neural network for performance. The bruxist will grind their teeth until the teeth cease to be.
I created the video for Bruxist’s Mesh in July of 2024. I was able to get shots of the performance in the Cube from an overhead angle and from a 360 camera. The 360 camera’s exposure was too bright and clipped most of the colors under the stage lights; the overhead shots, however, looked very nice.
I went into the video-making process initially frustrated by my blunder of not recording the audio out from my laptop during the performance. That, combined with the overexposed footage, led me to make something more psychedelic in nature rather than a boring documentation video. There’s not much action to document when 5 people are performing on laptops.
I primarily used Adobe Premiere and After Effects to make the video. I animated multiple segments with morphing distortions that correspond to the audio. For the most interesting shots, I made a rudimentary Jitter patch in Max/MSP that shuffled frames based on sound-descriptor data I extracted with the FluCoMa toolkit. The idea was definitely inspired by Ted Moore’s Saccades, as he had presented the video for that piece at a Computer Music seminar.
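The actual patch was built in Jitter with FluCoMa, but the underlying idea can be sketched in Python. This is a hedged approximation, with librosa’s spectral centroid standing in for the FluCoMa descriptors and placeholder file names; it is not the patch itself:

```python
# A minimal sketch of descriptor-driven frame shuffling, in Python instead
# of Jitter. librosa's spectral centroid stands in for the FluCoMa
# descriptors used in the original; file names are placeholders.
import cv2
import librosa
import numpy as np

VIDEO_IN, VIDEO_OUT, AUDIO_IN = "cube.mp4", "shuffled.mp4", "cube.wav"

# Load every frame into memory (fine for a short excerpt).
cap = cv2.VideoCapture(VIDEO_IN)
fps = cap.get(cv2.CAP_PROP_FPS)
frames = []
ok, frame = cap.read()
while ok:
    frames.append(frame)
    ok, frame = cap.read()
cap.release()

# One descriptor value per video frame: hop length matched to the frame rate.
y, sr = librosa.load(AUDIO_IN, sr=None, mono=True)
hop = int(sr / fps)
centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]
centroid = centroid[: len(frames)]

# Map the descriptor onto frame indices: brighter sound pulls frames from
# later in the clip, so the picture "scrubs" along with the audio.
norm = (centroid - centroid.min()) / (np.ptp(centroid) + 1e-9)
order = (norm * (len(frames) - 1)).astype(int)

h, w = frames[0].shape[:2]
out = cv2.VideoWriter(VIDEO_OUT, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
for i in order:
    out.write(frames[i])
out.release()
```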
The title card for the video was made by rendering out the card in multiple font variations and then re-assembling the renders into an animation. Lastly, I made some basic motion-tracking shots to introduce each performer.
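A sketch of that title-card technique, assuming Pillow for the rendering; the font paths are placeholders for whatever variations were actually used, and the frames would be assembled into an animation in Premiere:

```python
# A hedged sketch of the title-card technique: render the same card in
# several typefaces, then treat the renders as animation frames.
# The font paths below are placeholders, not the fonts actually used.
from PIL import Image, ImageDraw, ImageFont

TITLE = "Bruxist's Mesh"
FONTS = ["fonts/variant_a.ttf", "fonts/variant_b.ttf", "fonts/variant_c.ttf"]
SIZE = (1920, 1080)

for i, path in enumerate(FONTS):
    card = Image.new("RGB", SIZE, "black")
    draw = ImageDraw.Draw(card)
    font = ImageFont.truetype(path, 160)
    # Center the title using its rendered bounding box.
    x0, y0, x1, y1 = draw.textbbox((0, 0), TITLE, font=font)
    pos = ((SIZE[0] - (x1 - x0)) // 2, (SIZE[1] - (y1 - y0)) // 2)
    draw.text(pos, TITLE, font=font, fill="white")
    card.save(f"title_frame_{i:03d}.png")  # sequence these as animation frames
```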
There are two versions of the video in this portfolio. The short version is black and white and is best for sampling this project; the long version is in color.