Dan Stowell and Alex Shaw: SuperCollider and Android

Still live blogging the SC symposium.
Their subtitle: “kickass sound, open platform.”
Android is an open platform for phones, curated by Google. It's Linux with Java, though it's not a normal Linux. It's NOT APPLE.
Phones are computers these days: they're well-connected and have a million sensors, with microphones and speakers. Android devices multitask, the platform is more open, and libraries and APKs are shareable.
The downsides are that it's less mature and has some audio performance issues.
scsynth on Android: the audio engine can be put in all kinds of places, so the server has been ported. The language has not yet been ported. So to use it, you write a Java app and use scsynth as an audio engine. You can control it remotely, or from another Android app. ScalaCollider, for example.
Alex is an Android developer. Every Android app has an “activity”, which is the app thingee you see on the desktop. There are also services, which are like daemons: deployed as part of an app and persisting in the background. An intent is a loosely-coupled message. AIDL is the Android Interface Definition Language, in which a service declares what kinds of messages it understands. The OS handles the binding.
Things you can do with SuperCollider on Android: write cool apps that do audio, making instruments, for example. He's playing a demo of an app that says “satan” and is apparently addictive. You can write reactive music players (yay). Since you can multitask, you can keep one running while you text people or whatever.
What languages to use? sclang to pre-prepare synthdefs, OSC and Java for the UI.
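A minimal sketch of the “pre-prepare synthdefs in sclang” step, run on a desktop machine; the synthdef name and parameters below are invented for illustration. writeDefFile saves a .scsyndef file (to the default synthdef directory), which could then be bundled with the Android app for scsynth to load.

(
// write a simple percussive synthdef to disk for later use on the phone
SynthDef(\droidbeep, { |out = 0, freq = 440, amp = 0.2|
	var env = EnvGen.kr(Env.perc(0.01, 0.5), doneAction: 2);
	Out.ar(out, SinOsc.ar(freq) * env * amp ! 2);
}).writeDefFile;
)
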
A quick demo! Create an activity in Eclipse!
Create a new project. Pick a target with a lower number for increased interoperability. You must create an activity to have a UI. SDK version 4. Associate the project with SuperCollider by telling it to use it as a library. There are some icon collisions, so we'll use the SC ones. Now open the automatically generated file, add an SCAudio object, and initialise it when the activity is created.

 
@Override
public void onCreate(Bundle savedInstanceState) {
    . . .
    // point SCAudio at the native libraries bundled with the app
    superCollider = new SCAudio("/data/data/com.hello.world/lib");
    superCollider.start();
    superCollider.sendMessage(OscMessage.createSynthMessage("default", 1000, 1, 0)); // default synth
    …
}

 . . .

@Override
public void onPause() {
    super.onPause();
    superCollider.sendQuit();
}

Send it to the phone and holy crap that worked.
Beware of audio latency, around 50 milliseconds, and of multitasking.
Ron Kuivila wants to know if there are provisions for other kinds of hardware I/O, kind of like the Arduino. Something called “bluesmurf” is a possible client.
Getting into the app store: just upload some stuff, fill out a form, and it's there. No curation.

Tim Blechmann: Parallelising SuperCollider

Still live blogging the SC symposium
Single processors are not getting faster, so most development is going toward multicore architectures. But most computer music systems are sequential.
How to parallelise? Pipelining! Split the algorithm into stages. This introduces delay as data goes from one processor to the other. It doesn't scale well, and each stage needs roughly the same computational cost.
You could split blocks into smaller chunks, but the pipeline must be filled and then emptied, which is a limit: not all processors can be working all the time.
SuperCollider has particular limitations: OSC commands arrive at control rate and the synth graph can change at that point, so there's no pipelining across control-rate blocks. Block sizes are also small.
For automatic parallelisation, you have to do dependency analysis. However, there are implicit dependencies via busses: the synth engine doesn't know which resources are accessed by a synth, and this can even depend on other synths. Resources can be accessed at audio rate, so it's very hard to determine dependencies ahead of time. Automatic parallelisation for SuperCollider might be impossible. You can do it with Csound because its instrument graphs are far more limited and the compiler knows what resources each instrument will access; they just duplicate things when they might be needed in both places. This results in almost no speedup.
The goals for SC are to not change the language and to stay real-time safe. Pipelining is not going to work and automatic parallelisation is not feasible, so the solution is to parallelise explicitly and let the user sort it out: parallel groups.
These are groups with no node-ordering constraint, so their nodes can be executed in parallel.
They're easy to use and understand and compatible with the existing group architecture: existing code doesn't break, and you can mix parallel groups with non-parallel ones.
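A tiny sketch of what this might look like from the language side, assuming a client-side class named ParGroup for these parallel groups (the synth and parameter choices are just for illustration):

(
// the 8 synths inside the parallel group have no ordering constraint,
// so the server is free to spread them across cores
var par = ParGroup.new(s);
8.do {
	Synth(\default, [\freq, exprand(200, 800), \amp, 0.05], target: par);
};
)
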
The problem is that the user needs to figure things out and make sure they're correct. Each node has two dependency relations: there is a node before every parallel group and a node afterwards.
This is not always optimal. Satellite nodes can be set to run before or after another node, giving two new add actions.
There is an example showing how this is useful. It could be optimised so that some nodes have higher precedence.

Semantics

Satellite nodes are ordered in relation to one other node.
Each node can have multiple satellite predecessors and satellite successors, which may have their own satellite nodes. They can be addressed by the parent group of their reference node, and their lifetime should relate to the lifetime of their reference node.
This is good because it increases parallelism and is easier to use, but it is more complicated.
supernova is a completely rewritten scsynth with a multiprocessor-aware synthesis engine. It has good support for parallel groups, and support for satellite nodes is in progress. It loads only slightly patched UGens. It's been tested on Linux, with more than 20 concerts. It compiles on OS X and might work; we'll see. (Linux is the future.)
supernova is designed for low-latency, real-time use. The dependency graph representation has higher overhead; there's a delay of a few microseconds.
For resource consistency, spinlocks have been added. Reading the same resource from parallel synths is safe; writing may be safe. Out.ar is safe; ReplaceOut.ar might not be. The infrastructure is already part of the svn trunk.
(I’m wondering if this makes writing UGens harder?)
A graph of benchmarks for supernova: it scales well. Now a graph of average-case speedup; with big synths the speedup is nearly 4.
Proposed extensions: parallel groups, satellite nodes. supernova is cool.
There is an article about this on the interwebs, part of his MA thesis.
Scott Wilson wants to know about dependencies in satellite nodes; all of them have dependencies. He also wants to know whether you still need parallel groups if you have satellite nodes. Answer: you need both.

Nick Collins: Acousmatic

Continuing live blogging the SC symposium.
He's written anti-aliasing oscillators: BlitB3Saw, a BLIT-derived sawtooth, twice as efficient as the current band-limited sawtooth. There's a bunch of UGens in the pack. The delay lines are good, apparently.

Auditory Modelling plugin pack – Meddis models cochlear implants. (!)
Try out something called Impromptu, which is a good programming environment for audio-visual programming. You can rewrite UGens on the fly. (!)

Kling Klang

(If Nick Collins ever decided to be an evil genius, the world would be in trouble)

{ SinOsc.ar * ClangUgen.ar(SoundIn.ar) }.play

The ClangUgen is undefined. He's got a thing that opens a C editor window; he can write the UGen and then run it. Maybe, I think. His demo has just crashed.
OK, so you can edit a C file and load it into SC without recompiling, etc. Useful for livecoding gigs, if you're scarily smart, or for debugging sorts of things.

Auto acousmatic

Automatic generation of electroacoustic works, integrating machine listening into the composition process. Algorithmic processes are already used by electroacoustic composers, so take that as far as possible. It also involves studying the design cycle of pieces.
The setup requires knowing the output number of channels, the duration, and some input samples.
In bottom-up construction, source files are analysed to find interesting bits; those parts are processed and then used again as input. The output files are scattered across the work. It uses onset detection, finding the dominant frequency, excluding silence, and other machine-listening UGens.
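Roughly the kind of analysis that stage involves, sketched with SC's stock machine-listening UGens (the file path and threshold are placeholders; his actual system runs this in non-real time):

(
// report each detected onset's time and dominant pitch back to the language
var buf = Buffer.read(s, "someSourceSample.wav");   // placeholder path
OSCFunc({ |msg| [\time, msg[3], \freq, msg[4]].postln }, '/interesting');
{
	var sig = PlayBuf.ar(1, buf, BufRateScale.kr(buf), doneAction: 2);
	var chain = FFT(LocalBuf(512), sig);
	var onset = Onsets.kr(chain, 0.3);
	SendReply.kr(onset, '/interesting', [Sweep.kr, Pitch.kr(sig)[0]]);
	0.0
}.play;
)
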
Generative effect processing, like granulation.
Top-down construction imposes musical form, with cross-synthesis options. This needs to run in non-real time, since it takes a lot of processing. There's a lot of server-to-language communication, done with Logger currently.
How to evaluate the output: don't tell people it's machine composed, play it for them, and ask how they like it. He's been entering the output in electroacoustic competitions; you need to know the normal probability of rejection. He normally gets rejected 36% of the time (he's doing better than me).
He's submitting things he hasn't listened to, to avoid cherry-picking.
Example work: fibbermegibbet20
A self-analysing critic is a hard problem for machine listening.
This is only a prototype. The real evil plan to put us all out of business is coming soon.
The example work is 55 seconds long, in ABA form. The program has rules for section overlap to create a sense of drama, and it has a database of gestures. The rules are contained in a bunch of SC classes, based on his personal preferences. Will there be presets, i.e., “sound like Birmingham”? Maybe.
Scott Wilson is hoping this forces people to stop writing electroacoustic works, phrased as “forces people to think about other things.” He sees it as intelligent batch processing.
The version he rendered during the talk is 60 seconds long, completely different from the other one, and certainly adequate as an acousmatic work.
Will this be the end of acousmatic composing? We can only hope.

Live blogging the SuperCollider Symposium: Hannes Hoelzl: Sounds, Spaces, Listening

Manifesta, the “European Nomad Art Biennale”, takes place in European non-capital cities every 2 years. The next is in Murcia, Spain, in 2010.
No. 7 was in 2008 in Italy, in 4 locations.
(This talk is having technical issues and it sounds like somebody is drilling the ceiling.)
The locations are along Hannibal's route with the elephants. Napoleon went through there? The area used to be part of the Austrian empire. The locals were not into Napoleon and launched a resistance against him. The “farmer's army” defeated the French 3 times.
(I think this presentation might also be an artwork. I don’t understand what is going on.)
Every year, the locals light a fire in the shape of a cross on the mountain, commemorating their victories.
The passages were narrow and steep and the locals dropped stones on the army, engaging in “site-specific” tactics. One of the narrowest spots was Fortezza, which was also a site for Manifesta. There is a fortress there, built afterwards, that blocks the entire passage. There is now a lake beside it, created by Mussolini for hydroelectric power. The fortress takes up 1 square kilometre.
There is a very long subterranean tunnel connecting the 3 parts of the fort.
(He has now switched something off and the noise has greatly decreased.)
The fortress was built after the shock of 1809, but nobody has ever attacked it. The military was there until 2002, using it to hold weapons. The border doesn't need to be guarded anymore.
During WW2, it held the gold reserves of the Bank of Rome.
The Manifesta was the first major civilian use; none of the nearby villages had previously been allowed to access the space.
The other 3 Manifesta locations were real cities. Each had its own curatorial team; they collaborated on the fortress.
The fortress exhibition's theme was imaginary scenarios, because that's basically the story of the never-attacked fort.
The fortress has a bunch of rooms around the perimeter, with cannons in them, designed to get the smoke out very quickly.
We live our lives in highly designed spaces, where architects have made up a bunch of scenarios about how the space will be used and then designed it to accommodate that purpose.
The exhibition was “immaterial”, using recordings, texts, and light.
There were 10 text contributors: poets, theatre writers, etc. A team did the readings and recordings.
The sound installations were for active listening and movement, and were site-specific.
He wanted to do small listening stations where a very few people could hear the text clearly, as there were unlikely to be crowds and the space was acoustically weird. The installations needed text intelligibility, and they needed to be in English, Italian, and German, thus 30 recordings.
The sound artist involved focusses on sound and space; the dramatic team focusses on the user-experience design.
(Now he's showing a video of setting up a megaphone in a cannon window. It is a “consonant cannon”: it filters the consonants of one of the texts and just plays the clicks. He was playing this behind him during the first part of the talk, which explains some of the strange noises. In one of the rooms, they buried the speakers in the dirt floor. In another room, they did a tin-can-telephone sort of thing with transducers attached to string. Another room had the speakers in the chairs. Another had transducers on hanging plexiglass. In the last one, the sound moved along a corridor, with a speaker in every office, so it travelled from one to the next.)

more performance stuff

Vincent Rioux is now talking about his work with SC.

He improvised with an avant-garde sort of theatre company. The video documentation was cool. I didn't know about events like this in Paris; I want to know more.

In another project, he made very simple controllers with an Arduino inside. He had 6 controllers, with one Arduino for all 6.

Tiny speakers. This is also nifty. He used it at the Pixelache festival.

The next project uses a light system with a hypercube thing, which is a huge structure that the dancer stands inside. SC controls it.

The next thing is a street performance asking folks to help clean the street, part of the festival Mal au Pixel. This is mental! Also, near where I used to live. Man, I miss Paris sometimes.

The next one is a crazy steampunk dinner jacket, with a Wiimote thing.

dan’s installation

Dan St. Clair is talking about his awesome installation, which involves speakers hanging from trees doing bird-like renditions of ‘Like a Virgin’, which is utterly tweaking out the local mockingbirds.

When he was an undergrad he did a nifty project with songs stuck in people's heads. It was conceptual and not musical.

When he lived in Chicago he did a map of muzak in stores on State Street, including genre and delivery method. He made a tourist brochure with muzak maps and put them in visitor centers.

He's interested in popular music in environmental settings.

Max Neuhaus did an unmarked, invisible sound installation in Times Square. Dan dug the sort of invisible, discovery aspect.

His bird emulator is solar powered, needs no cables, and has an 8-bit microcontroller. They're cheap as hell.

He's loaded frequency envelopes into memory. Fixed control rate. It uses a single wavetable oscillator: http://www.myplace.nu/avr/minidds/index.htm

He made recordings of birds and extracted the partials.
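
The resynthesis side presumably looks something like this in SC terms (the breakpoint values are invented, not his actual bird data):

(
// two invented partials, each with its own frequency and amplitude envelope
{
	var partials = [
		[Env([2200, 3100, 2500], [0.1, 0.15]), Env([0, 0.2, 0], [0.05, 0.2])],
		[Env([4400, 6200, 5000], [0.1, 0.15]), Env([0, 0.1, 0], [0.05, 0.2])]
	];
	Mix(partials.collect { |p|
		SinOsc.ar(EnvGen.kr(p[0])) * EnvGen.kr(p[1], doneAction: 2)
	}) ! 2
}.play;
)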

He throws these up into trees. However, neighbors got annoyed and called the cops or destroyed the speakers.

He's working on a new version placed in close proximity to houses. He's adding a calendar to shut it down sometimes, and amplitude controls.

He has an IFF class to deal with SDIF and MIDI files; the SDIFFrames class works with these files.

There are some cool classes for FFT, like FFTPeaks.

He's written some cool GUIs for finding partials.

His method of morphing between bird calls and pop songs is pretty brilliant.

dan is awesome

live video

Sam Pluta wrote some live video software, inspired by Glitchbot and MEAPsoft.

Glitchbot records sequences and loops and stutters them: it records a 16-bar phrase, then loops and tweaks it. I think I have seen this. It can add beats, do subloops, etc.

The sample does indeed sound glitchy.

Probability control can be clumsy in live performance; live control of beats is hard.

MEAPsoft does reordering.

His piece from the last symposium used a sample bank which he can interpret, record his interpreting, and then do stuff with that, so there are two layers of improvisation. It has a small initial parameter space and uses a little source material to make a lot of stuff.

I remember his piece from last time.

What he learned from that was that it was good, especially for noisy music. And he controlled it by hitting a lot of keys, which was awesome.

He wrote an acoustic piece using the sound-block idea. Live instruments can do looping differently; you can make the same note longer.

He read Michel Chion's book on film and was influenced by it. He started finding sound moments in films and decided to use them as source material.

Sci-fi films have the best sound, he says.

Playing a lot of video clips in fast succession is hard, because you need a format that renders single frames quickly. The Pixlet format is good for that.

Audio-video sync is hard with QuickTime, so he loaded audio into SC and did a bridge to video with Quartz Composer.

QC is efficient at rendering.

He wanted to make noisy loops and to change them. You can't buffer video loops in the same way as audio, so he needed to create metaloops of playback information: looped data.

A loop contains pointers to movie clips, but each clip starts from where he last stopped, which sounds right.
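
In SC terms, the bookkeeping might look roughly like this (field names are invented; the video itself is rendered by Quartz Composer, driven over OSC):

(
// each entry remembers which clip to play and where to resume it;
// every pass through the metaloop advances that resume point
var metaloop = [
	(clip: "carChase03.mov", pos: 0.0, dur: 0.4),
	(clip: "kiss12.mov", pos: 2.1, dur: 0.25)
];
Routine {
	loop {
		metaloop.do { |entry|
			("play" + entry[\clip] + "from" + entry[\pos]).postln;  // would be an OSC message to QC
			entry[\dur].wait;
			entry[\pos] = entry[\pos] + entry[\dur];
		};
	};
}.play;
)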

He organized the loops by category: kissing, car chases, drones, etc.

This is an interesting way of organizing and might help my floundering Blake piece.

He varies loop duration based on the section of the piece.

live blog: beast mulch

Scott is talking about beast mulch, which is still unreleased.
There are classes for controllers, i.e. hardware. There's a plugin framework to easily extend things: BMPluginSpec('name', { |this| etc. . . .

Multichannel stuff, swarm granulation, etc.

A kd-tree class finds the closest speaker neighbor.
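
The idea, shown here in brute-force form rather than via the library's actual kd-tree class (positions are invented [x, y] pairs in metres):

(
// find which speaker is closest to a virtual source position
var speakers = [[0, 0], [4, 0], [4, 4], [0, 4], [2, 6]];
var source = [3.2, 0.8];
var distances = speakers.collect { |pos|
	((pos[0] - source[0]).squared + (pos[1] - source[1]).squared).sqrt
};
var nearest = distances.indexOf(distances.minItem);
("nearest speaker index:" + nearest + "/ distance:" + distances[nearest].round(0.01)).postln;
)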

If you want beast mulch, get it from Scott's website.

There are speaker classes, like BMSpeaker.

BMInOutArray does associations.

Beast mulch is a big library for everyone. Everything must be named. There are time references, like a soundfile player.

It's trying to be adaptable: 100 channels or 8, make it work on both. It supports JIT and stems.

A usage example: it can be used live. Routing tables, control matrices. Pre- and post-processing use plugins.

I NEED to download this and use it.

http://scottwilson.ca

http://www.beast.bham.ac.uk/research/mulch.shtml

. . .

timbral analysis

Dan Stowell is talking about beatboxing and machine listening.

Live blogging the SuperCollider symposium.

Analyze one signal and use it to control another. Pitch and amplitude are done, so let's do timbre remapping.

Extract features from sound, decorrelate and reduce dimensions, and map it to a space. What features to use? MFCCs and spectral crest factors; the latter looks at peakiness vs. flatness.
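
A rough sketch of the feature-extraction step with the UGens he mentions (MFCC is built in; FFTCrest is in sc3-plugins; the mapping at the end is a toy example of mine, not his system):

(
// extract MFCCs and a spectral crest from the mic, then crudely remap
// a couple of them onto another synth's pitch and brightness
{
	var in = SoundIn.ar(0);
	var chain = FFT(LocalBuf(1024), in);
	var mfccs = MFCC.kr(chain, 13);
	var crest = FFTCrest.kr(chain);   // near 1 for flat spectra, large for peaky ones
	var sig = Saw.ar(mfccs[1].linexp(0, 1, 100, 1000)) * Amplitude.kr(in, 0.01, 0.1);
	LPF.ar(sig, crest.clip(1, 100).linexp(1, 100, 8000, 400)) * 0.3 ! 2
}.play;
)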

His experiments use simulated degradation to make sure it works in performance.

Voice works well with MFCCs, but they are not noise-robust. Spectral crests are complementary and are noise-robust; the two together give you a lot of info.

A lot of different analyses give you useful information about perceptual differences.

Now he's talking about an 8-bit chip and controlling it. Was this on Boing Boing or something recently?

Spectral centroid; the 95th percentile of energy from the left shows the rolloff frequency.
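
Both of these measures exist as stock UGens, so getting at them is a one-liner each:

(
// poll the spectral centroid and the 95% energy rolloff of the mic input
{
	var chain = FFT(LocalBuf(2048), SoundIn.ar(0));
	SpecCentroid.kr(chain).poll(10, "centroid");
	SpecPcile.kr(chain, 0.95).poll(10, "rolloff95");
	Silent.ar
}.play;
)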

he’s showing a video of the inside of his throat