Live coding in Mexico

Centro Multimedia is a space for arts research in new technologies. They have an audio workshop with a special interest in code and FLOSS (“software libre”).
The history of live coding there:

  • 3 concerts in 2006 by an experimental laptop band called mU.
  • Another concert in 2009.
  • A telematic concert in 2009.
  • A SuperCollider course in the audio workshop since 2007, and Fluxus since 2010 – this grew a coding community.
  • A collective live coding session in 2010, just after the first Fluxus course.

They had used Max. Later, SuperCollider changed everything because of its open-source philosophy: it was free and legal to share. They felt a sense of ownership, and a community grew.
Since 2011, they have organised 21 live coding events, collaborating with other institutions in Mexico City. This is a local scene.
At the National Autonomous University, they ran blank-slate coding sessions where everybody had nine minutes. This was especially beneficial for the participants’ coding practice.
The Vivo conference in 2012 had more participation from overseas, longer time slots, and some non-blank-slate code; it caused an explosion in the community.
Their audiences are very diverse, with a lot of new people coming in, and they are receptive to new ideas.
They are now doing a series of live coding concerts that also mix practices – with dance, circuit bending, sound art, poetry, etc. There is now a website, hackpact.mx, which has a philosophy of live coding. These projects grow community. Sharing builds personal relationships and knowledge. People from many backgrounds are involved.

Questions

What else goes on at the centre? Lots and lots of new media stuff. They have artistic residency programmes: one specific to Germany, one for Latin Americans. There is an electronic and video festival this year with an open call.
The centre is free, so anyone can come and learn without paying. This increases diversity.
Does anybody in the US or Canada pay attention to what’s going on in Mexico? Artists from Canada can come for residencies, so there is some collaboration there. There are some collaborations with the US through other institutions, but not this one.
Do they do any teaching of coding or live coding in schools? There is no official school of electronic music in Mexico, so teaching mostly happens through workshops; Mexicans who want to do electronic music degrees go abroad. There is no strong programme for children or teenagers during school time, though they do some workshops in summer. They may expand this, but need to do some work on pedagogy first. They have also been running workshops with indigenous people who have no background at all with computers. Sometimes they learn faster because they don’t know it’s supposed to be difficult.
What’s the future of live coding in Mexico? More people, more groups. The future is bright across Mexico for live coding.

Live blogging Live.Code.Festival: Yiorgos Diapoulis – Live Hardware coding

He has built a binary adding machine that plays sounds based on the current number, which is added to the total every clock cycle. It creates patterns based on the total, not including overflow. The user provides a 3-bit word to the counter; the counter outputs a serial transmission to a decoder. Both of these are connected to an Arduino, which is connected to SuperCollider. The counter outputs 3 bits to the Arduino; the decoder, one bit (I think).
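
As a sketch of how the SuperCollider end of something like this could work – this is my guess, not his actual setup, and the port name and pitch mapping are invented – sclang can poll the Arduino over USB serial:

    (
    // read one byte per clock tick from the Arduino and sonify the 3-bit word
    p = SerialPort("/dev/ttyUSB0", baudrate: 9600);  // assumed port name
    r = Routine {
        loop {
            var byte = p.read;  // blocks until a byte arrives
            // map the low 3 bits (0-7) onto pitches using the default synth
            Synth(\default, [\freq, 220 * (2 ** ((byte % 8) / 7))]);
        };
    }.play;
    )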

Battery dying!

Live blogging Live.Code.Festival: Benoit and the Mandelbrots by Mattias Schneiderbanger

Drop function – executed simultaneously for all 4 players.
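
I’d guess the mechanics are something like this hypothetical sclang sketch (the names and addresses are mine, not theirs): each machine registers the same handler, and whoever calls the drop broadcasts one OSC message so all four laptops execute it together:

    (
    // every player registers the shared drop behaviour
    ~drop = { /* e.g. cut the filters, unleash the bass */ };
    OSCdef(\drop, { ~drop.value }, '/band/drop');

    // whoever triggers the drop sends it to everyone at once
    ~players = [NetAddr("192.168.0.11", 57120), NetAddr("192.168.0.12", 57120)];
    ~callDrop = { ~players.do { |addr| addr.sendMsg('/band/drop') } };
    )
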
They have done blank-slate live coding in many environments. They also use live coding as a compositional method, so they do some shows where they just use a code interface developed in rehearsals.
Delbrots and the Man also develop code live during rehearsals and use that as an interface for performance. They sync with the drummer via click track and send their chat window to him via a text-to-speech synthesiser.
If they want the audience to dance, they start with prepared material. They also try to think about the arc of the whole evening. In rehearsals, they would pick a random genre from ID3 tags.

More General Thoughts on Live Coding

Live code does not represent a score. Code consists of algorithms, which are specific, but a score is interpretable in different ways. Also, the text document generated by live coding is not an adequate artefact for repeating a performance.
Code allows for a de-hierarchisation of all musical parameters. Traditional composition focusses on pitch and duration, but improvisation allows focus on other parameters. Live coding emphasises this further.
Composition creates a text – an artefact designed to enable people to create sound. It is prepared and worked out. Live coding does not necessarily generate a written composition. However, in the 21st century, improvisation and composition are not binary oppositions, something which also applies to live coding.

Questions

Did they publish the silent movie with their soundtrack? Not yet, because they’re not sure about copyright.
What’s next for the Mandelbrots? Will they make a ton of recordings? Recordings do not change their approach; they only record their rehearsals.
Do they program differently when they’re recording? No, they’ve gotten used to just recording all their rehearsals.
Will they edit their recordings? Unsure.
Will an audience expect them to sound like their records? They can’t know yet.
Do they put performances online? They’ve done that twice. Once to Mexico.

LiveBlogging: Modality – modal control in SuperCollider

by many people

Modality is a loose collaboration to make a toolkit for hooking controllers up to SC. It does mapping, including some complex stuff and some on-the-fly stuff.

Marije spoke a bit about how they began collaborating.

Concept – support many devices over many protocols. Make a common interface. Easily remap.

Devices

They currently support MIDI and HID; the common interface is MKtl. It provides a system to process the data, with templates for common ways of processing. The same interface covers MKtl and MDispatch. (They may move to FRP – I didn’t know what that was at the time; it’s explained in the events and signals section below.)
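
For a taste of the idea, here is a sketch using the MKtl interface as it later shipped in the Modality toolkit – the template name and element access are my assumptions about what was demoed:

    (
    // look up a device through its description template
    m = MKtl(\nk, "korg-nanokontrol2");  // assumed template name

    // same interface for any device: address an element, give it an action
    m.elAt(\sl, 0).action = { |el|
        ("slider 0 -> " ++ el.value).postln;
    };
    )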

The Ktl quark is out of date.

(I think I might be interested in contributing to this project – or at least providing templates for stuff.)

Different protocols have different transport mechanisms. Things vary by OS. Different controllers have different semantics.

A general solution is not trivial.

Scaling is different on different OSes. Names of devices may have variations, and MIDI has some device-name issues: real MIDI devices (non-USB) will not report their names, just their MIDI ports. Similar issues arise with OSC or SerialPort.

The device description index is an IdentityDictionary. It’s got some NanoKontrol stuff in it. I am definitely interested in this…

They’ve got some templates, but it’s still a bit vapourware.

For every button or input on your device, they define what it is, where it is, etc.  This is good stuff.  You can also set the I/O type.

Device descriptions have names, specifications, platform differences, and hierarchical naming (for use in pattern-matching). You can programmatically fill in the description.
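
I imagine an entry looks roughly like this (the real spec is in the Modality source; the keys here are purely illustrative):

    (
    // hypothetical shape of a device description entry
    ~descSketch = IdentityDictionary[
        \knob1 -> (type: \knob, ioType: \in, midiMsgType: \cc, midiNum: 14),
        \playBtn -> (type: \button, ioType: \in, midiMsgType: \noteOnOff, midiNum: 41)
    ];
    )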

nanoKontrol, Gamepad, DanceMat, a bunch of things.

Events and signals

Functional reactive processing: events, data flow, change propagation. FRP – functional reactive programming.

These are functions without side effects until you get to the output phase.

This is in the FP quark – the functional programming quark.

Events are encoded in an event stream. An EventSource with a do method adds a side effect: when something happens (is “fired”), do the do. Only event sources can be fired.

The network starts with an event source.

Signals are similar but have state: you can ask for the value and change it.

To create the network, use combinators.

inject has state internally.

Dynamic event switching limits an event stream depending on a selector. This is kind of like the gate object in Max.
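
A toy sketch of the flavour, going off the talk: EventSource, fire, and do are as described above; the combinator spellings (collect, inject) are my assumption about the FP quark’s interface:

    (
    e = EventSource();                              // root of the network
    ~doubled = e.collect { |x| x * 2 };             // pure, no side effects
    ~total = e.inject(0, { |sum, x| sum + x });     // inject keeps state internally
    ~doubled.do { |x| ("doubled: " ++ x).postln };  // side effect at the output phase
    e.fire(21);                                     // only event sources can be fired
    )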

With Modality, every control has elements; every element has a signal and a source. Controls have keys.

You can combine values, attach stuff to knob changes. Easy to attach event streams to functions.

This is complex to describe, but works intuitively in practice. You can do deltas, accumulators, etc.
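
For example – hedged as before, with the element key and the source name coming from my reading of the talk, and continuing from the MKtl sketch above – an accumulator on a knob:

    (
    // accumulate all values coming from one knob's event stream
    var knob = m.elAt(\kn, 0);  // assumed element key
    ~acc = knob.source.inject(0, { |sum, v| sum + v });
    ~acc.do { |sum| ("accumulated: " ++ sum).postln };
    )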

Closing remarks

This is on GitHub, but not yet released. It depends on the FP quark.

It needs GUI replacements, and a backend for OSC devices.

It also needs some hacking in the SC source.

Questions

  • Would you be interested in doing the descriptors in JSON, so it can be used by non-SC guys? Yeah, why not.  This is a good plan, even.

Liveblogging the SC symposium: Overtone Library

Collaborative programmable music. It runs in Clojure, a dialect of Lisp that runs on the JVM. It’s got concurrency stuff. It’s programmable.

It deals with the SC server. This sort of looks like it’s running in Emacs…

All SC UGens are available. He built a bunch of metadata for this, a lot like the SC classes for the UGens. There is in-line documentation, which is nice. The node tree shows all currently running UGens.

MIDI events are received as events and can be used by any function – wiggle your nano controller. This comes with the JVM, so all Java libraries are supported. OSC support. Serial support.

Synth code and musical expression code can be written in the same language. Specify phrases in a score, concatenate them. The language is relatively readable, as far as Lisp goes. Most things are immutable, which is good for concurrency; too many variables can confuse the programmer.

He’s using a monome. Every button has a callback function, which gets the x,y coordinates, whether the button was pressed or released, and a history of all other button presses.

Now he’s doing some monome-controlled dubstep.

CGens are reusable UGen trees, possibly a bit like SynthDefs. It can do groups also.

This can also use Processing.org stuff, because it’s got Java. OpenGL graphics are also supported. They can hook into any UGen.

Anything can be glued together.

This is kind of cool, but you need to deal with both Java and Lisp.

Questions

  • Collaboration?  It helps you deal with shared state, without blocking or locking.

LiveBlogging SC: Mx

by Chris Satinger (aka Felix Crucial)

Mx is a tool for connecting objects together: audio, control, MIDI, etc.

Anything that plays on a bus can go in, and the bus can be put on a mixer.

This mixer is a GUI thing. You can use it just to glue on things like fadeouts or amplitude control.

Just write a descriptor file.

The system is not the GUI; it’s the patching framework.

You can patch synthdefs together and edit them on the fly.

This patches things a wee bit like Pd.

It checks for bad values and prevents explosions.

There is no timeline system. It’s a hosting system that only manages connections, starts, and stops. You can put in other timelines.

It uses environment variables: ~this is the unit.

~this.sched(32, { … }, { … })

You can put documents in the Mx. Those can change the Mx as it runs, so it’s all very self-modifying. (When I was an undergrad, they told me this was naughty, but like many other naughty things, it can be very cool.)

Things have outlets and inlets that you can connect.   There is apparently a querying system which we will learn about.

He gets good music out of the system despite having no idea what’s going on a lot of the time.

Dragging cables is fun for a while, but then…

Questions

  • Adaptors? They describe what an object is and its inlets and outlets. There’s also a system for announcements. Cable strategies also define behaviours.

Liveblogging SC: live coding with assembler

Dave – 

Esoteric programming languages are an interesting thing we might care about.

CPUs built in Minecraft – you can see the processing.

Space Invaders assembler, with lines showing the order of execution.

Very slow execution can show what’s going on. This can be sonified.

 Till – 

BetaBlocker is a quark in sc3-plugins.

(Talk to him if you want to go work in Helsinki.)

BBlocker never crashes, but it might not do anything. It has a stack, a heap, and a program counter.

This is like Dave’s grid on the DS, where it runs in an infinite loop.

UGens

DetaBlockerBuf is a demand-rate UGen – so you can do weird computations in your UGen graph? It does a program step every time it gets triggered.

The programs are stored in buffers. You can do random ones.

There is also a visual thingy.

BBlockerBuf exposes the stack and the program counter.

BBlockerProgram holds a BetaBlocker program for the assembler.

You can create a program with the assembler code, and you can play the program.

BetaBlockerProgram([NOP, POP, ADD]) etc
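Presumably usage is along these lines – the constructor is straight from the slide, while .play is my spelling of “you can play the program”:

    p = BetaBlockerProgram([NOP, POP, ADD]);  // opcodes as shown in the talk
    p.play;  // assumed, per "you can play the program"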

Tom Hall – 

John Cage would be 100 this year.

A metaphorically digital, constrained sonic system. An invitation to listen.

Questions

  • Is the heap a wave table? No, the output of the program is the sound.
  • Is it a coincidence that it sounds like putting an induction coil on a laptop? Um, maybe. He says it sounds very 8-bit-y – maybe because it’s 8-bit.
  • Is it easy to write logical-seeming programs, or are they mostly random? It is possible to write things that make sense, but the fun of it is the weirdness and things getting trashed by accident. Dave is doing genetic programming with a system like this.
  • Is the output one byte at a time? No – each step does something, and the output is something I didn’t understand.
  • Graphics question? Not Till’s field.

I think this could be really useful for students or teenagers who are sort of interested in programming.

LiveBlogging the SC symposium: Keynote – Takeko Akamatsu

Using SC since 2000.

Her main project is Craftwife (all members are housewives, she says), going since 2008. There are 5 members now. They sit between pop and art culture.

She started out doing demos of Remkon, an iOS OSC app. How to make this popular?

  • Borrow the image of something already famous – Kraftwerk.
  • What is Originality? – SC patterns
  • Crash of the music industry – live to record, record to live. Craftwife should be live only.

Influenced by “The Work of Art in the Age of Mechanical Reproduction”.

She makes extensive use of PatternProxies.
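
For anyone who hasn’t met them: a PatternProxy is a pattern whose source can be swapped while it plays, which is what makes it handy live. A minimal sketch (my example, not hers):

    (
    a = PatternProxy(Pseq([0, 2, 4, 7], inf));  // stands in for a value pattern
    p = Pbind(\degree, a, \dur, 0.25).play;
    )

    // later: replace the pattern without stopping playback
    a.source = Pshuf([0, 3, 5, 7], inf);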

She also works with Craftwife + Kaseo+. Kaseo+ is a circuit bender. She controls strobe lights, an analogue synthesiser, etc.

SuperCollider.jp

SC in Japan: they have a meetup in Tokyo. She posts on Twitter. She does workshops.

During her show in The Hague in 2007, she got frustrated and smashed her computer. She then quit making computer music for a year and grew vegetables.

She held a workshop at a place called the WombLounge.  Not everyone was a musician. She covered interaction between many environments.

SuperColliderSpeedCodingShow

She will give people a theme and five minutes and they have to make a sound.

4 people are quickly coding something on the theme of spring.

SuperCollider.future

She wants the book as an eBook in Japanese.

SuperCollider.cycling

She has attached a sensor to her exercise bike and uses this during her workout routine.

She’s tired of loud sounds. And sound systems are annoying.

She played a video of JMC saying what he wants for SC4: it’s not client-server, and it’s a lot smaller.

Liveblogging the SuperCollider Symposium: SC AU UI

by Jan Trüzschler and Zlatko Brackski

This is the SuperCollider Audio Unit User Interface Library, which enables the creation of custom user interfaces for Audio Units built in SC.

You can use AU stuff in Live or Logic, and having a nice GUI can enhance the user experience. Mapping controls can increase the complexity possible with the AU library.

This is Mac-only, as it uses Objective-C.

The interface has some grey boxes and is editable. 

This has not yet been added to the main SCAU library, as it still needs to be merged in. The UI library needs some work, and there needs to be some documentation.

Examples

This would be cool, but the GUI is really obtuse. 

You can download this stuff from BCU via TEE DMT. Or this will be released in a more normal way.

Questions

  • Where is the lovely GUI coming from? Objective-C, so you can’t do your own version in SuperCollider.
  • Why is this a one-time library install rather than packaged in the component? Jan thought it would be easier to do an installer.  They’re not difficult to distribute.
  • Can the AUUI controller thing use sidechains? Not yet.

Live Blogging the SuperCollider Symposium: Freesound Quark

By Gerard Roma

It uses the Freesound website, www.freesound.org. The sounds are Creative Commons licensed. The website has more than 150,000 sounds from around 4,000 users; most users only download sounds. All sounds are moderated – listened to by a human.

I’m always charmed when a presenter shows a supercollider window rather than using a slide programme.  The syntax highlighting of their talk notes is especially good.

Google gave them a grant and they rewrote the site. They have a feature-extraction library to analyse the sounds.

There is a new Freesound quark based on their API. The quark will give you the sound, the sound’s preview, the tags, the spectrogram, and the signal descriptors from Freesound’s feature extraction.

You need to get an API key to use the quark. The quark will search for you according to filters – you can find a sound that’s glitchy with a particular duration. You can search by similarity as well.
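
From the demo, a query presumably looks something like this – I haven’t used the quark, so treat every class and argument name here as a guess and check the quark’s help files:

    // hypothetical sketch: filtered search, then load a preview into a buffer
    Freesound.apiKey = "your-key-here";  // assumed; keys come from freesound.org
    FSSound.search("glitch", "duration:[0.1 TO 1.0]", { |sounds|
        sounds.first.retrievePreview("/tmp/", { |path| Buffer.read(s, path) });
    });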

The analysis frames of the sound are kept in a separate file, but can be loaded into an IdentityDictionary.

This quark could be really interesting if you want to do stuff with Freesound: you don’t need to do your own MIR, and you might be able to make cool pieces in real time.

Questions

  • Are people doing cool things with this outside of SuperCollider?  He doesn’t know.
  • Will the API upload to freesound? No.  The API needs some more authentication stuff put in. Also the moderation creates a delay.
  • Zlatko wants to know how they tell whether sounds are copyrighted. The moderators try to figure it out, and they respond to complaints.
  • Can the same API key be used across multiple computers? Yes.
  • Does the metadata include the licence terms and the user who uploaded it? Yes.
  • Is there a GUI? No, this is a new quark, not the old one.