livecode, audio, browser, hybrids, janus, jitlib, laptop, ljublana, ljudmila-lab, presets, raspberry, remote-chaos, reset, sclang, sequencer, setups, tempo, the-supercollider, toplap-berlin-algorave, videos, webrtc, websockets, znaoakxhici

Hello everyone! So happy to find an inclusive community of livecoders and noise makers.
I'd like to introduce myself and my work by sharing two videos. The first one was recorded last week during an online performance for the Toplap Berlin Algorave, which happened as part of the rC3 (Remote Chaos Communication Congress).

I developed this environment over the past year. It started during the On-The-Fly residency I took part in at Ljudmila Lab in Ljubljana; the second video was recorded at the end of that residency.

I will try to write a bit about the technical aspects of my environment.

What you see in the first video runs in the browser and supports multiple people playing together. The editor state is shared via websockets in real time, so everyone sees the same code. All commands are sent via OSC to a SuperCollider server running a custom "livecodable" sequencer, and sound is streamed back from SuperCollider to the browser using WebRTC through Janus. Latency is pretty good with this setup, and it also enables multi-channel setups where different browsers/devices can each receive a different audio channel (interesting for multiple people playing in the same room, laptop orchestras, etc.).

The SuperCollider sequencer is composed of NodeProxies on the server side, so everything runs in scsynth. I opted for that approach because it allows me to experiment with extreme tempos (something that would be very hard or CPU-intensive with an sclang sequencer). It also gives me some nice possibilities with JITLib presets and so on.
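To give a feel for the OSC layer between the browser side and SuperCollider, here is a minimal sketch of how an OSC message is encoded on the wire (per the OSC 1.0 spec: padded address, padded type-tag string, big-endian arguments). The `/seq/tempo` address and its argument are hypothetical; the actual addresses my sequencer listens on are not shown here.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated, then padded to a 4-byte boundary
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message (address + typed arguments) as raw bytes."""
    typetags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            typetags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian int
        elif isinstance(a, float):
            typetags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        elif isinstance(a, str):
            typetags += "s"
            payload += osc_pad(a.encode())
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return osc_pad(address.encode()) + osc_pad(typetags.encode()) + payload

# A hypothetical command the editor could relay to the SuperCollider sequencer;
# the resulting bytes would be sent over UDP to sclang (default port 57120).
msg = osc_message("/seq/tempo", 480.0)
```

In practice you would use an existing OSC library, but seeing the layout makes it clear why OSC is so easy to bridge between a browser backend and scsynth/sclang.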

The idea was to create a live coding environment that pushes you toward new directions and experiments while performing. The text editor is non-linear: you create blocks of text anywhere, and they disappear over time, giving space for new commands. You can also remove a block, or reset the state to remove all blocks. Every new input comes with some random valid code (as a suggestion), but you can simply ignore it and write what you want. You can also edit past code, of course. There is a random code generator (press the blue R) and a randomize() function that uses JITLib presets to randomly change the sequencer's current global state by different amounts.
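The disappearing-blocks mechanic above can be sketched as a simple expiry sweep. This is an illustration only: the 60-second lifetime, the class names, and the `~drums.play` snippets are assumptions, not the environment's actual values.

```python
import time

# Assumed lifetime of a text block before it fades out (seconds)
BLOCK_LIFETIME = 60.0

class TextBlock:
    def __init__(self, code, created=None):
        self.code = code
        self.created = time.monotonic() if created is None else created

    def expired(self, now):
        # A block is gone once its lifetime has elapsed
        return now - self.created > BLOCK_LIFETIME

def sweep(blocks, now):
    """Drop expired blocks, freeing screen space for new commands."""
    return [b for b in blocks if not b.expired(now)]

blocks = [TextBlock("~drums.play", created=0.0),
          TextBlock("~bass.play", created=50.0)]
live = sweep(blocks, now=70.0)  # only the newer block survives
```

The point of the mechanic is exactly this pruning: old code cannot accumulate, so the performer is constantly pushed to write something new.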

I plan to release this very soon as a framework so that anyone can use it to create their own web-based collaborative livecode setups. I think it would be super nice to have livecode performances running directly in the browser instead of through streaming platforms / pure video. It would also enable more interactive performances, as well as hybrids between online and IRL performances or installations.

For a different setup/example of what is possible: in the second video at Ljudmila, the visuals run on three Raspberry Pis, each connected to a different projector. The state is controlled by SuperCollider, but each Raspberry Pi interprets the commands in its own way, enabling a multi-channel video setup.
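The "same command, different interpretation" idea can be shown in a few lines: each device applies its own mapping to one broadcast command. The mappings and the `amount` parameter below are made up for illustration; the real Raspberry Pi visuals obviously do something more interesting.

```python
# Hypothetical per-device mappings: every device receives the same
# broadcast command but renders it differently, keyed by its own id.
INTERPRETATIONS = {
    0: lambda cmd: f"rotate scene by {cmd['amount']} deg",
    1: lambda cmd: f"shift hue by {cmd['amount']}",
    2: lambda cmd: f"zoom by factor {cmd['amount']}",
}

def handle(device_id, cmd):
    """Apply this device's interpretation of a shared command."""
    return INTERPRETATIONS[device_id](cmd)

cmd = {"amount": 90}  # one command, broadcast to all three devices
outputs = [handle(i, cmd) for i in range(3)]
```

One control stream thus fans out into three distinct video channels, which is what makes the multi-projector setup feel like a single instrument rather than three separate screens.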

Sorry for such a long introduction; it is a bit hard to describe the whole system, but I feel like this is the right place to share :-) Thanks for reading and watching. Looking forward to sharing/exchanging ideas and noises here in this community.

January 4, 2022

1 Comment


Brilliant, thank you for coming by and sharing your work, Bruno. I have to say that the way your work is presented visually is really nice. That is, the code flashing by on top of you/everything behind is more appealing than simply scrolling by in the background. Looking forward to seeing more of your work!

January 6, 2022