pattern stuff

More supercollider symposium live blogging

i missed the first session, alas. Too much devil liquor.

now we will hear about patterns. The speaker's car's registration plate says 'RLPF . ar'

james harkins' talk can be found on the internets.

patterns are about abstractions. You can write very concise code that does things for you.

you can describe a stream’s behavior with a pattern. Some patterns can do values, others events.

Pseries counts by adding a step; Pgeom multiplies, for exponential increments.
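
a quick sketch of the two (from memory, so treat details as approximate):

```supercollider
// Pseries(start, step, length) adds the step each time;
// Pgeom(start, grow, length) multiplies by the grow factor.
Pseries(0, 2, 5).asStream.all.postln;  // [ 0, 2, 4, 6, 8 ]
Pgeom(1, 2, 5).asStream.all.postln;    // [ 1, 2, 4, 8, 16 ]
```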

there’s many random ones and some chaotic, etc

can pass these to each other, like using a pattern to control step size in Pseries
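
something like this, i think, where each increment is drawn from another pattern:

```supercollider
// the step argument is itself a pattern: each increment is a fresh choice
Pseries(0, Prand([1, 2, 3], inf), 8).asStream.all.postln;
// e.g. [ 0, 2, 3, 6, 7, 8, 10, 13 ] -- varies per run
```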

some list pattern things: Pseq, Prand, Pxrand, Pwrand, Pshuf.

list patterns send the message embedInStream to everything in their arrays, so you can have them nested.
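
a small nesting sketch (output will vary because of the Prand):

```supercollider
// the inner Prand embeds two random choices inline into the outer Pseq's stream
Pseq([0, Prand([10, 20], 2), 99], 1).asStream.all.postln;
// e.g. [ 0, 20, 10, 99 ]
```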

an event is something that understands the message play. You can write your own play function. Must check this out.

you can mess with the events innards

synthdefs must be stored, or go into the SynthDescLib.
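
something like this, i believe (the def name and args here are my own invention, not from the talk):

```supercollider
// patterns look up a synth's argument names via the SynthDescLib,
// so store (rather than just send) the def
SynthDef(\ping, { |out = 0, freq = 440, amp = 0.1, sustain = 0.5|
    var sig = SinOsc.ar(freq) * EnvGen.kr(Env.perc(0.01, sustain), doneAction: 2) * amp;
    Out.ar(out, sig ! 2);
}).store;
```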

event patterns are also list patterns. We've got Pbind, Pmono, etc.

p = Pbind(\foo, 1).asStream; p.next(()); // must pass in an empty event when you call next, and the empty paren set () is an empty event.
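
expanding that a little (sketched from memory):

```supercollider
// stepping an event stream by hand; next needs a (proto)event to fill in
p = Pbind(\foo, Pseq([1, 2], 1)).asStream;
p.next(()).postln;  // an event containing 'foo': 1
p.next(()).postln;  // an event containing 'foo': 2
p.next(()).postln;  // nil -- the stream has ended
```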

.x is an adverb thing. It does matrix-style operations on lists, element by element
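
for example:

```supercollider
// the .x adverb pairs every element of the left list with every element of the right
([1, 2] *.x [10, 100]).postln;  // [ 10, 100, 20, 200 ]
```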

Pkey is cool. It gives you the value of a previously computed key in the current event.
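
something like this, if i've got it right (the mapping is my own example):

```supercollider
// Pkey reads a key computed earlier in the same event,
// here reusing \degree to derive a pan position
Pbind(
    \degree, Pseq([0, 2, 4, 7], 2),
    \pan, Pkey(\degree).linlin(0, 7, -1.0, 1.0)
).play;
```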

PatternProxy is part of JITLib. It lets you change patterns on the fly

merging patterns, he calls pattern composition.
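if memory serves, Pchain is one way to do this; the details here are my guess, not from the talk:

```supercollider
// Pchain composes event patterns: events from the right-hand pattern
// pass through (and are modified by) the left-hand one
Pchain(
    Pbind(\legato, 0.1),
    Pbind(\degree, Pseq([0, 2, 4], inf), \dur, 0.25)
).play;
```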

you can put your own functions in patterns.

a magic square is like sudoku for notes

now we are being evangelized.

now ron is up to talk about some extra pattern classes.

he notes patterns are sometimes unattractive to programmers. The library is too big. Writing your own functions in it is hard.

ron has written a class Pspawner which lets you do stuff in a more programmer-y way. It takes a function which gets the spawner as an argument

this is cool, it's a way to schedule patterns. I will use this. This fixes a bunch of my problems. Yay ron.
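
a sketch of the idea as i understood it (degrees and durations are my own filler):

```supercollider
// the function receives the spawner; seq plays a pattern to completion
// before continuing, par forks one and returns immediately
Pspawner({ |sp|
    sp.seq(Pbind(\degree, Pseq([0, 1, 2], 1), \dur, 0.25));
    sp.wait(1);  // wait a beat
    sp.par(Pbind(\degree, Pseq([4, 5, 6], 1), \dur, 0.25));
    sp.par(Pbind(\degree, Pseq([7, 8, 9], 1), \dur, 0.25));
}).play;
```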

but now he’s in overly complicated example ron land.

ok, now we're on to CVs and conductors. CVs do constraints. They're cool. The conductor is a dictionary and a gui. It's got a bunch of CVs.

conductors are nifty. They can control a pattern. CVs can now take a windex message which treats the CV as a weight table.

there’s a cursor thing now.

there’s some array stuff going on in cvs

streams do next, reset, embedInStream. Patterns do asStream, embedInStream
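
a minimal illustration of that protocol:

```supercollider
// patterns respond to asStream; the resulting stream responds to next and reset
x = Pseq([1, 2, 3], 1).asStream;
x.next.postln;  // 1
x.next.postln;  // 2
x.reset;        // rewind the stream
x.next.postln;  // 1 again
```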

routines have state. There’s something important in yield vs embedInStream which went by quickly.

timbral analysis

dan stowell is talking about beatboxing and machine listening

live blogging the supercollider symposium

analyze one signal and use it to control another. Pitch and amplitude are done. So let’s do timbre remapping.

extract features from sound, decorrelate and reduce dimensions, map it to a space. What features to use? Mfccs, spectral crest factors. That’s looking for peaks vs flatness.

his experiments use simulated degradation to make sure it works in performance.

voice works well with mfccs, but they are not noise robust. Spectral crests are complementary and are noise robust. The two together give you a lot of info.

a lot of different analyses give you useful information about perceptual differences.

now he’s talking about an 8bit chip and controlling it. Was this on boing boing or something recently?

spectral centroid; and the 95th percentile of energy from the left shows the rolloff frequency.

he’s showing a video of the inside of his throat

timbral analysis

nick collins is talking about timbral analysis and phase vocoders, which is supercollider-ese for ffts.

i missed the first couple of minutes of this because there is an installation outside of solar-powered speakers in trees, doing bird-song-like sounds, which played madonna's 'like a virgin' when i walked by and i had to fall over laughing. Hahahah

ok, back to the present. AtsSynth does some cool stuff with pitch shifting.

scott wilson’s ugens do loris stuff. Which is noise modulated sine tones. Sinusoidal peak detection.

TPV ugen does pure sinusoidal stuff. Sines and phases. Takes an fft chain input and creates sine outputs with resynthesis. Finds n peaks and uses that number of sinusoids. This is cool. And is part of sc 3.3

SMS is spectral modelling synthesis. Sines plus noise. This is slightly expensive. But it preserves formants in repitching. So it sounds right with shifting speech.

good stuff!

theory continued

theory continued.

time point synthesis. Babbitt wanted to serialize parameters in addition to pitch. He used durational sets, which becomes dull. And doesn't transform well.

instead use integers to map to a table of durations. Your grid has 12 durations just cuz. Andrew Mead did some work on this.

there is a class TimePoints. Which is an array.

this is a rhythm lib. I should look into this.

we’re listening to ‘homily’ by babbitt, which uses these kinds of transformations.

and the code isn’t on the internets.

and now virtual gamelan graz

this is an attempt to model everything about gamelan.

tuning: well, don't model everything, just the metallophones. The tuning should be an ideal. This requires fieldwork and interviewing builders. Or you could just measure existing instruments.

pick one instrument. Measure root pitches. You’re good.

or do more recording like sethares. Measure more ensembles. Which partial is the root?

these guys sampled the local gamelan and went with that.

the tuning . . . Are we sure of the root pitches? Is it the instruments relative to each other, one in reference to itself, the partials in a single note?

there is an image on a grid, which is hard to see as a slide.

you can do a lot of retuning.

sumarsam is raising a point on pelog tuning. The musicologist in the group is absent so the presenters have to defer.

how to synthesize: samples or synthesis. They use sines and formlet filters.

performance modelling. Model human actors or do contextual knowledge.

they did not go with individuals.

They have an event model. Each note is an event, which holds what you need to know.

audio demo. It does tempo changes right. They use ListeningClocks to do time right. I need to look at this class. They follow each other. You can set empathy and confidence, to control how much they deviate.

listening to theory

Live blogging the sc symposium

panel: listening to theory

Sound in film makes film Real and anchors it to the real world. People infer sources of sound with visual cues.

causation – synchresis is synchronization and synthesis. Does sound exist in a vacuum? This is a philosophical question. A related question is where does sound come from?

is an echo one sound or two? Depending on what you think, your perception changes.

what about form and matter? Is it just a medium, or is it the very stuff of sound?

now we are watching a film of car traffic which looks like it might have been filmed in germany. It’s got sounds of cars and wind and birds.

but all the sounds were made in supercollider!

so what was before intentions or agency is now about algorithms and effects.

now renate wieser will speak. She did an installation called the phaedrus machine. This is related to a socratic dialog, which she is describing. Good people are reincarnated as philosophers, bad people as george bush. (These are my words, not hers.)

to practice good life and avoid a bad reincarnation, she has a video game you can play to practice looking for truth. There are sound cues if you reach truth or if you fall from it. The game is audio only and uses a vertical speaker arrangement. You do get feedback in the form of a spreadsheet at the end which describes your reincarnation level.

she has another installation called ‘survival of the cutest.’ It’s a play with voices coming out of different speakers. Sc sends them to whatever channel, semi-randomly.

the excel thing with the spreadsheet works because sc writes to a tab-delimited file and excel looks at it from time to time.

tom hall will speak now. He's talking about 20th century stuff. Legacy of musical modernism. What is a musical object? Instruments vs sounds.

20th century had more math stuff in music than any time since the renaissance. Schoenberg came up with twelve tone almost a hundred years ago. Stravinsky took it up after schoenberg died.

stravinsky said when he composed with intervals, he was aware of them as objects. Babbitt took up the 12tone. He was into the maximum diversity of permutations.

set class theory is an american thing. There’s some set class stuff in supercollider, though.

a set can be represented by an array. Tones are integers in equal temperament, much like midi.

he has a pitchcircle class to visualize sets.

powersets are all subsets of a set of elements. An n-size set has a powerset of size 2**n.
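
for instance (treating a chord as a set of pitch classes):

```supercollider
// a 3-element set has 2**3 = 8 subsets, from the empty set up to the full set
[0, 4, 7].powerset.postln;
```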

tom johnson wrote a piece called ‘chord catalog’ which sounds cool. Http://www.editions75 . . .

break for 5

platform independence

live blogging the sc symposium

Marije baalman is talking about cross platform implementation in supercollider.

sc runs on osx, linux, windows and freebsd. It has a language, the synth, the editor, the graphical server. There are 10 editors. SCapp is os x only. There's scel for emacs. Scvim is a vi plugin. Sced runs on gedit on linux only. Psychollider is in python, originally just for windows. Jsceclipse is in java. Textmate is osx only. Then there's scfront, qcollider, squeak, etc.

os x doesn't always mean SCapp. So we must be aware of editor issues. Scapp and psycollider have sclang inside, so documents run in the same application and don't rely on pipes.

insert snarky comment about sc on linux here: it’s too hard!

the gui abstraction layer solves most of the compatibility issues. These also help with accessibility issues.

you can stick in extra menu items.

HIDs are another compatibility issue. MouseX is now cross platform. Wacom tablets are handled in an os specific way on osx, but used as an HID by linux.

helpfiles are in html. Scel has an issue because emacs sucks and doesn't support css with w3m and let's be clear, this is a violation of the standard. The others handle html in different ways, including with a browser.

i am becoming grumpy from lack of food.

scapp uses webkit for html editing, but this creates crap html code. It’s wysiwyg, but not for other viewers. Helper and AutoHelper might be the answer to this. Or perhaps a helpfile template.

compilation issues – there are some preprocessor tags which are platform specific. Unix uses scons, which might be a good idea for os x.

audio drivers are obviously very different. Then there’s hid stuff and wii code. The wii code works on linux but not on os x, alas.

some stuff, like text to speech is os x only.

what about for end users? Audio is the same. Midi is spotty. Hids have different class interfaces, so there’s an abstraction layer. Wiis use the same class interface. I’ve been looking at fixing this btw. I’m sure somebody else will get it first though.

hid stuff has platform specificity in how hids work. Anyway. . .

there’s different installation locations for some default directories. There’s a Platform class which is good for abstraction.

if you need to check for an os, use thisProcess.platform.name; for the gui, use GUI.id; for emacs, use thisProcess.platform.hasFeature(emacs)
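
a sketch of branching on the platform name (i'm assuming the symbols are \osx, \linux and \windows):

```supercollider
// pick per-platform setup based on the platform name symbol
switch(thisProcess.platform.name,
    \osx, { "mac specific setup".postln },
    \linux, { "linux specific setup".postln },
    \windows, { "windows specific setup".postln }
);
```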

my batteries are running low. To check os specificity, look at the helpfile or source file and the location.

if you’re going to distribute, do it cross platform. Remember that key combos are different when writing helpfiles. Make your example code platform independent.

windows support for sc still sucks. But windows sucks.

machine learning panel

Panel discussion on neural stuff. Jan is speaking about self organizing maps, which is a talk he gave at brum last term.
He's making snapshots for presets. It can be used to find similar presets to ones he likes.
It creates a meta controller, which is more high level.
He can use it to make sound objects.
And it's between top down and bottom up approaches.
He's got a graph on how he uses it. He plays with it to make snapshots. The snapshots are fed into the som which generates similar material, which he can use for a meta controller. He can make a map of material and then make a path to traverse the map and then control where he is on the path with a slider.
Soms can be used to control anything, including each other.
A snapshot can be an array or an event. His examples use ron’s preset library.
it is an unsupervised neural network.
SynthDescLib lets you make a gui with presets. Or maybe this is jan's code lib. There is a button to generate a som from presets in the gui. And a matrix comes up. Some of them are green, which are the ones he picked. The others are related. As you click on them it saves your path. There is a slider at the top that moves through the path. You can save your state.

now dan stowell is recapping and he has made soms as a ugen. He is showing the thing he did at the london sc meetup. It runs on the server and gets trained in advance by analyzing samples.

it imposes the eq of one sample onto another sample. Which works and is impressive. The som has a visualizer. Pretty. It is not for download. I find his gui is set up kind of in reverse of how i'd think about it.

too much coffee for me. Pee break now. Ok back.

david has a flickr feed live from here

now nick collins is showing his work on the topic. He’s got an som implementation too. With a helpfile. He analyzes midi files. He breaks them up into little bits. He will release his files shortly.

now he's talking of reinforcement learning, which is a way of considering an agent acting in the world. (See david's photo of the slide) a state leads to an action, which in turn affects the world which changes the state. Reinforcement learning looks at how effective actions are. So the program must have an idea of the world. This must also have a way of grading the reward of how good the world is. So you need to decide if something sounds good.

he has sc code to deal with this. LGDsarsa is on his website.

because machine learning is computationally expensive, it's often farmed out to an external batch process. Or you can run in non rt mode. Dan has a nice ugen for this called Logger. There are code examples on the mailing list. It creates data files which can then be used for machine learning.

he’s got a self similarities table for a pixies track.

ok, on to the more panelly part

how do you get the reward state in sarsa? Physiological monitoring is one way. Or you can ask the audience, which has a delay, but propagate it backwards. Or you can do it in a model.

jan is doing a project with thom which is similar but will generate full pieces. Nick recommends tom mitchell's book on machine learning.

why is a reward better than a rule? Why is it more interesting to train a net vs creating rules? Answer is that they can be used for different applications. Ron notes that rules are implicitly present in selection of training material and assumptions. Nick is talking about flexibility and creative machines. Dan says that ron is correct, but the number of possibilities in even a small data set is huge. Ron says constraints are cool. The panel says that supercollider is cool

james, our leader, is talking about intent. What if we inverted rewards to make the audience unhappy? Nick points out it's still hard to gauge cultural preferences.

there’s a question about specificity vs building an overly large tool. Jan agrees this is a trap. Nick says that specificity is more musically effective. He talks about hard coding. There’s too much variation sometimes.

performances with live evolution. Using a human as a fitness function is slow. Nick talks about a computer as an improvisor. His phd does this, which you can download. He's switched to midi because feature extraction is hard. Jan is talking about having few sliders.

neural network and machine learning

Live blogging the sc symposium
i showed up late for the talk on neural networks, which sucks, but i needed my coffee.
The speaker is demonstrating using a neural network to process gestural input from a wiimote. It makes 64 vectors describing the motion of the wiimotes. He can train the neural net by making the same gesture over and over.
The auditorium speakers are making a high pitched squeal.
Now he's talking about continuous time recurrent neural networks. These are used in robotics. They evolve instead of being trained. (Trained ones are called feed forward)
Ollie Brown did some code for this. He suggests that the smoothing function be replaced with hyperbolic tan, becoming an excitation function, so it does not reach equilibrium. You can use this for interactive evolution.
Squeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeazle.
The 60 hz hum has just caused a problem with the demo. The power to the av thing has cut out. Ron is cursing. People in the audience are whistling difference tones to go with the squeal. Somebody is making multiphonics. Now somebody is playing a sine tone on their laptop. Somebody is sampling and granularizing the feedback. The talk has paused. I wish i'd been here for the start because it's awesome, but without coffee, i'd still have missed it.
A grad student has just come sprinting in with a cable. And the squeal has ceased! Applause!
And the source of the squeal was an uninterruptible power supply. Which is why the av input died. Ohhhhhh! We are nearly back online. I wish this disaster had been at the start so i could have seen the whole thing.
The presenter, by the way is Chris Kiefer. Who is now resuming.
He is using a ctrnn to control a synth. And it can be reinitialized and mutated. This sounds cool, but i don't understand how it differs from random numbers. Oh, you pick ones you like and evolve from there.
Http://bit.ly/SC-NNs
http://bit.ly/SC-CTRNNS
HTTP://www.olliebrown.com/files/papers/ . . .

Twitter Supercollider App

There are some people twittering supercollider code. They do sound generating apps in 140 characters or less! I've just created some code to fetch and play these. It uses a yahoo pipe which looks for tweets with a particular tag and which seem to contain a playable piece of code. It also does some sanitizing to ignore potentially evil content.
This is a first draft, so it requires a helper script, written in bash, which is called fetch.sh:

#!/bin/bash

curl "http://pipes.yahoo.com/pipes/pipe.run?_id=sqg4I0kl3hGkoIu9dPQQIA&_render=rss" > /tmp/rss.xml

The SC code is:


(
r = Routine.new({
    var code, new_code, syn, doc, elements;
    inf.do({
        "fetching".postln;
        "/path/to/fetch.sh".unixCmd;
        2.yield; // unixCmd is asynchronous, so give curl a moment to finish writing
        doc = DOMDocument.new("/tmp/rss.xml");
        doc.read(File.new("/tmp/rss.xml", "r"));
        elements = doc.getElementsByTagName("description");
        elements.notNil.if({
            new_code = elements.last.getText;
            (new_code != code).if({
                code = new_code;
                code.postln;
                syn.notNil.if({ syn.free; s.freeAll; });
                syn = code.interpret;
            });
        });
        60.yield;
    });
});
)

r.play;

Replace the path information with the correct one, start the server, select all the code and hit enter. If you find a bug or a way to be evil, please leave a comment.

sc3 keynote

Live blogging sc symposium

keynote

as an aside, david has a tiny sc logo on his badge. Ha ha.

Ron has given an intro and now scott wilson and nick collins are talking about the supercollider book, coming from mit press. It's like the c sound book, but for supercollider.

The book is cool. You should buy it.

James McCartney is giving a keynote about single sample code synthesis. He did a 1 sample at a time server in 2001. Synthdefs were c functions. The code is lying around on his website. It doesn’t work and there are missing pieces and is not the same version that he’s talking about.

The current version does block processing. It does a bunch of samples at once. ChucK does single sample, but most do block.

The single sample version of sc has lower performance. And compiling synths took too long. This might not be the case today. It had the same architecture as the current version. There was no distinction between audio rate and control rate, since everything gets evaluated on every sample. You could do one sample feedback.

The whole thing was written in sc. It made c++ code for synthdefs, which was then compiled. He's showing us what the code looks like. It looks like c++. Actually, it looks like source for ugens now.

The sc code for ugens returns strings for code generation. This doesn't need primitives, because the c code is in the sc class. But it's still c code. This would really not be easier to write, since you would need to know the structure of the generated code. It would have been better to have meta code to describe how the ugen should work.

He's showing us the source code for his project and now going to tell us why this is a bad idea. This is an unusual keynote. Now he's telling us about memory issues and registers. Now he's talking about vectorization and optimization. Now there's a slide called 'code pointer swapping.' Now 'instruction cache'

It might be ok to do single samples once in a while despite performance issues.

There's a question about faust. He says it's interesting and good.

Q: demand ugens can do single sample feedback as a hack, but it's inefficient. The solution is to write your own ugens in c because non block stuff is slow.

Another faust question. Functional programs parallelize better. But faust has no variable names. It makes me a bit dizzy, faust does.

Ron is asking a question about demand rate and jit code and other things that i don’t know how to use. Now ron is asking about synths that change rate on the fly. It’s hard.

Can you set the block size to 1 on sc now? Yes, but then you have chuck.