Renate Wiesser and Julian Rohrhuber: Meaning without Words

Last conference presentation to live blog from the sc symposium
A sonification project. Alberto de Campo is consulting on the project.
A project 7 years in the works, inspired by a test from the 70s. The claim: you can distinguish an educated from an uneducated background based on how people speak. Sociologists picked up on this. There was an essay about this, using Chomsky’s grammar ideas. Learning grammar as a kid may help with maths and programming. Evidence of how programmers speak would seem to contradict this . . .
But these guys had the idea of sonifying grammar and not the words.
Sapir-Whorf: how much does language influence what we think? This also has implications for programming languages. How does your medium influence your message?
(If this stuff came from the 70s and was used on little kids, I wonder if I got any of this.)
Get unstuck from hearing only the meaning of words.

Corpus Linguistics

Don’t use grammar as a general rule: no top down. Instead use bottom up! Every rule comes with an example. Ambiguous and interesting cases.

Elements
  • syntax categories – noun phrases, prepositional phrases, verb phrases. These make up a recursive tree.
  • word position: verbs, nouns, adverbs
  • morphology: plural/singular, word forms, etc.
  • function: subject, object, predicate. <– This is disputed

The linguistics professor in the audience says everything is disputed. “We don’t even know what a word is.”
They’re showing an XML file of “terminals”: the actual words, the leaves of the tree.
They’re showing an XML file of non-terminals.
Now a graph of a tree – which represents a sentence diagram. How to sonify a tree? There are several nodes in it. Should you hear the whole sentence the whole time? The first branch? Should the second noun phrase have the same sound as the first, or should it be different because it’s lower in the tree?
Now they have a timeline associated with the tree.
They’re using depth-first traversal.
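Roughly, I imagine the depth-first idea working something like this (a toy sketch of my own in SC, not their code, with a nested array standing in for the parse tree):

(
~tree = [\S, [\NP, \Det, \N], [\VP, \V, [\NP, \Det, \N]]];
~walk = { |node, depth = 0|
    if(node.isArray) {
        node.do { |child| ~walk.(child, depth + 1) };
    } {
        (degree: depth, dur: 0.2).play;  // deeper nodes sound higher
        0.25.wait;                       // the traversal unfolds in time
    };
};
{ ~walk.(~tree) }.fork;
)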
Now the audience members are being solicited for suggestions.
(My thought is that the tree is implicitly timed because sentences are spoken over time. So the tree sonification should reflect that, I think.)
Ron Kuivila is bringing up Indeterminacy by John Cage. He notes that the pauses have meaning when Cage speaks slowly. One graph could map to many many sentences.
Somebody else is recommending an XML-like approach with only tags sonified.
What they’re thinking is – chord structures by relative step. This is hard for users to understand. Chord structures by assigning notes to categories. They also thought maybe they could build a UGen graph directly from the tree. But programming is not language. Positions can be triggers, syntax as filters.
Ron Kuivila is suggesting substituting other words: noun for noun, etc, but with a small number of them, so they repeat often.
They’re not into this (but I think it’s a brilliant idea, sort of reminiscent of aphasia).
Now a demonstration!
Dan Stowell wants to know about the stacking of harmonics idea. Answer: it could lead to ambiguity.
Somebody else is pointing out that language is recursive, but music is repetitive.
Ron Kuivila points out that the rhythmic regularity is coming from the analysis rather than from the data. Maybe the duration should come from how long it takes to speak the sentence. The beat might be distracting for users, he says.
Sergio Luque felt an intuitive familiarity with the structure.

Martin Carlé / Thomas Noll: Fourier-Scratching

More live blogging
The legacy of Helmholtz.
They’re using slow Fourier transforms instead of FFTs. SFT!
They’re running something very sci-fi-ish, playing FM synthesis. (FM is really growing on me lately.) FM is simple and easy: with only two oscillators, you get a lot of possible sounds. They modulate the two modulators to form a sphere or something. You can select the spheres. They project the complex plane onto the sphere.
You can change one Fourier coefficient and it changes the whole sphere. (I think I missed an important step here of how the FM is mapped to the sphere and how changing the coefficients maps back to the FM.)
(Ok, I’m a bit lost.)
(I am still lost.)
Fourier scratching: “you have a rhythm that you like, and you let it travel.”
OK, the spheres are in Fourier-domain / time-domain pairs. Something about the cycle of 5ths. Now he’s changing the phase of the first coefficient. Now there are different timbres, but the rhythm is not changing.
(I am still lost. I should have had a second cup of coffee after lunch.)
(Actually, I frequently feel lost when people present on maths and the like associated with music. Science / tech composers are often smarter than I am.)
You can hear the coefficients, he says. There’s a lot of beeping and some discussion in German between the presenters. The example is starting to sound like you could dance to it, but a timbre is creeping up behind. All this needs is some bass drums.
If you try it out, he says, you’ll dig it.
Finite Fourier analysis with a time domain of 6 beats. Each coefficient is represented by a little ball and the signal is looping on the same beat. The loops move on a complex plane. The magnitude represents something with FM?
The extra dimension from Fourier is used to control any parameter. It is a sonification. This approach could be used to control anything. You could put a mixing board on the sphere.
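(As far as I can tell, the core trick is something like this. A toy reconstruction of my own, not their code: take a short rhythm loop, do a finite Fourier transform by hand, nudge one coefficient on the complex plane, and transform back.)

(
var signal = [1, 0, 0.5, 0, 0.8, 0];   // a 6-beat loop of accents
var n = signal.size;
var dft = { |x|
    n.collect { |k|
        var re = 0, im = 0;
        x.do { |v, t|
            re = re + (v * cos(2pi * k * t / n));
            im = im - (v * sin(2pi * k * t / n));
        };
        Complex(re, im)
    }
};
var idft = { |c|
    n.collect { |t|
        c.collect { |ck, k|
            (ck * Complex(cos(2pi * k * t / n), sin(2pi * k * t / n))).real
        }.sum / n
    }
};
var coeffs = dft.(signal);
coeffs[1] = coeffs[1] * Complex(cos(0.5), sin(0.5));  // rotate the first coefficient's phase
idft.(coeffs).round(0.001).postln;                    // the "scratched" rhythm
)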
JMC changed the definition of what it means to exponentiate.
Ron Kuivila is offering useful feedback.

Alo Allik: Audiovisual Composition with Three-Dimensional Continuous Cellular Automata

Still live blogging the supercollider symposium
f(x) – audio-visual performance environment, based on 3D cellular automata. Uses Objective-C, but the audio is in the SC server.
The continuous cellular automata have values between 0 and 1. The state at the next time step is determined by evaluating the neighbours plus a constant. Now, a demo of a 1-D world of 19 cells. All are 0 except for the middle, which is 1. Now it’s chugging along. 0.2 is added to all. The value is taken modulo 1, to keep just the fractional part. Changing the offset to 0.5 really changes the results. Can have very dramatic transitions, but with very gradual fades. The images he’s showing are quite lovely, and the 3D version is cool.
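If I’ve understood the rule, a toy version might look like this (my own sketch, not his code; the neighbour weights are made up):

(
~world = 0 ! 19;
~world[9] = 1.0;                  // all zero except the middle cell
~offset = 0.2;                    // the constant added each step
~weights = [0.25, 0.5, 0.25];     // left neighbour, self, right neighbour
~step = {
    ~world = ~world.collect { |v, i|
        var left = ~world.wrapAt(i - 1);
        var right = ~world.wrapAt(i + 1);
        (([left, v, right] * ~weights).sum + ~offset) % 1.0  // keep only the fractional part
    };
};
10.do { ~step.value; ~world.round(0.01).postln };
)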
Then he tried changing the weights of the neighbours. This causes the blobs to sort of scroll to the side. The whole effect is kind of like raindrops falling in a stream or in a moving bit of water in the road. Can also change the effect by changing the add over time.
Now he’s demoing his program and has allowed us to download his code off his computer. Somehow he’s gotten grids and stuff to dance around based on this. “The ‘World’ button resets the world.” Audience member: “Noooo!”
Now an audio example that’s very clearly tied in. Hopefully this is in the sample code we downloaded. It uses Warp1.ar eight times.
This is nifty. Now there’s a question I couldn’t hear. Alo’s favourite pastime is to invent new mappings. He uses ControlSpecs on data from the visual app. There are many, many cells in the automaton, so he polls it only when he wants data, and only certain cells.
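Something like this, I guess (entirely my own sketch with made-up names, not Alo’s code): poll a cell, map its 0..1 value through a ControlSpec, and set a couple of Warp1 controls.

(
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
x = { |pointer = 0, rate = 1|
    Warp1.ar(1, b, pointer.lag(0.2), rate, 0.1, -1, 8, 0.1, 2) * 0.3
}.play;
~rateSpec = ControlSpec(0.5, 2, \exp);  // maps 0..1 cell values into a playback-rate range
~cell = 0.37;                           // standing in for a value polled from the automaton
x.set(\pointer, ~cell, \rate, ~rateSpec.map(~cell));
)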
More examples!

Julian Rohrhuber: Introducing Sonification Variables

More sc symposium live blogging

sonification

Objectivity is considered important in the sciences. The notions of this have changed quite a bit over the last 50 years, however. The old style of imaging has as much data as possible crammed in, like atlas maps. Mechanical reproduction subsequently becomes important – photos are objective. However, perception is somewhat unreliable. So now we have structural objectivity which uses logic + measurements.
We are data-centric.
What’s the real source of a recording? The original recording? The performer? The score? The mind of the composer?
Sound can be just sound, or it can just be a way of conveying information or something in between. You need theory to understand collected data.
What do we notice when we listen that we wouldn’t have noticed by looking? There needs to be collaboration. Sonification needs to integrate the theory.
In sonification, time must be scaled. There is a sonification operator that does something with maths. Now there are some formulas on his slide, but no audio examples.
Waveshaping is applying one function to another (a transfer function applied to a signal).
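For example, the standard SC illustration of waveshaping (not from the talk): a sine wave read through a transfer function stored in a buffer.

(
b = Buffer.alloc(s, 512, 1);
b.cheby([1, 0, 0.5, 0.25]);  // fill the buffer with a Chebyshev transfer function
{ Shaper.ar(b, SinOsc.ar(220, 0, Line.kr(0, 0.8, 4))) * 0.2 }.play;
)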
Theoretical physics. (SuperCollider for supercolliders.) Particles accelerate and a few of them crash. Electrons and protons in this example. There’s a diagram with squiggly lines. Virtual photons are emitted backwards in time? And interact with a proton? And something changes colour. There’s a theory or something called BFKL.
He’s showing an application that shows an equation and has a slider, and does something with the theory, so you can hear how the function would be graphed. Quantum mechanics is now thinking about frequencies. Also, this is a very nice-sounding equation.
Did this enable them to discover anything? No, but it changed the conceptualisation of the theory, very slightly.
Apparently the scientists are also seeking beauty with sonification, so they involve artists to get that?
(I may be slightly misunderstanding this, I was at the club event until very late last night (this morning, actually).)
Ron Kuivila is saying something meaningful. Something about temporality, metaphilosophics, enumeration of state. Sound allows us to hear proportions with great precision, he says. There may be more interesting dynamical systems. Now about linguistics and mathematics and how linguistics helps you understand equations, and this is like Red Bird by Trevor Wishart.
Sound is therefore a formalisation.

Miguel Negrão: Real-Time Wave Field Synthesis

Live blogging the sc symposium. I showed up late for this one.
He’s given a summary of the issues of wave field synthesis (using two computers) and is working on a sample-accurate real-time version entirely in SuperCollider. He has a sample-accurate version of SC, provided by Blackrain.
The master computer and slave computer are started at unknown times, but synched via an impulse. The sample number can then be calculated, since you know how long it’s been since each computer started.
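My rough reading of the trick, as arithmetic (all numbers made up):

(
~sr = 44100;
~masterCountAtImpulse = 123456;  // samples elapsed on the master when the impulse arrives
~slaveCountAtImpulse = 98765;    // samples elapsed on the slave at the same moment
~offset = ~masterCountAtImpulse - ~slaveCountAtImpulse;
~toSlaveSamples = { |masterSamples| masterSamples - ~offset };
~toSlaveSamples.(~masterCountAtImpulse + ~sr).postln;  // one second after the impulse, in slave time
)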
All SynthDefs need to be the same on both computers. All must have the same random seed. All buffers must be on both. Etc. So he wrote a Cluster library that handles all of this, making two copies of everything in the background but looking just like one. It holds an array of stuff. It has no methods of its own, but sends stuff down to the stuff it’s holding.
Applications of real-time wave field synthesis: connecting the synthesis with the place where it is spatialized. He’s doing some sort of spectral-synthesis-ish thing. Putting sine waves close together gets nifty beating, which creates an even stronger sense of movement. The position of a sine wave in space gives it a frequency. He thus makes a frequency field of the room. When stuff moves, it changes pitch according to location.
This is an artificial restriction that he has imposed. It suggested relationships that were interesting.
The scalar field is selected randomly. Each sine wave oscillator has (x, y) coords. The system is defined by choosing a set of frequencies, a set of scalar fields and groups of closely tuned sine wave oscillators. He’s used this system in several performances, including in the symposium concert. That had a maximum of 60 sine waves at any time. It was about slow changes and slow movements.
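A much-simplified toy of the idea as I understood it (my own code, no actual WFS): each oscillator has an (x, y) position, a made-up scalar field maps position to frequency, and two of the oscillators sit close together so you get the beating he mentioned.

(
~field = { |x, y| 200 * (2 ** (x + (0.5 * y))) };     // a made-up scalar field in Hz
~positions = [[0.1, 0.2], [0.5, 0.8], [0.52, 0.81]];  // the last two are close, so they beat
x = {
    Mix(~positions.collect { |pos|
        SinOsc.ar(~field.(pos[0], pos[1]), 0, 0.05)
    }) ! 2
}.play;
)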
His code is available at http://github.com/miguel-negrao/Cluster
He prefers the Leiden WFS system to the Berlin one.

Julian Rohrhuber: <<> and <>> : Two Simple Operators for Composing Processes at Runtime

Still Live blogging the SC symposium
A Proposal for a new thing, which everybody else here seems to already know about.

NamedControl

a = { |freq = 700, t_trig = 1.0| Decay.kr(t_trig) * Blip.ar(freq) * 0.1}.play

becomes

a = { Decay.kr(\trig.tr) * Blip.ar(\freq.kr(400)) * 0.1 }.play;
a.set(\trig, 1);

JITLib

Proxy stuff. (Man, I learned SC 3.0 and now there’s just all this extra stuff from the last 7 years and I should probably learn it.)

ProxySpace.push(s);
~out.play;
~out = {Dust.ar(5000 ! 2, 0.01) };
~out.fadeTime = 4

a = NodeProxy(s);
a.source =  {Dust.ar(5000 ! 2, 0.01) };

Ndef(\x, . . .)

(there are too many fucking syntaxes to do exactly the same thing. Why do we need three different ones? Why?!!)

Ndef(\x, { BPF.ar(Dust.ar(5000 ! 2, 0.01)) }).play;

Ndef(\x, { BPF.ar(Ndef.ar(\y), 2000, 0.1) }).play;
Ndef(\y, { Dust.ar(500) });

. . .

Ndef(\out) <<> Ndef(\k) <<> Ndef(\x)

does routing
NdefMixer(s) opens a GUI.
Ron Kuivila asks: this is mapping input. Notationally, you could pass the Ndef a symbol array. Answer: you could write map(map(Ndef(\out), \in, Ndef(\x) . . .
Ron says this is beautiful and great.
Ndef(\comb) <<>.x nil // adverb action
The reverse syntax <>> just works from the other direction.
Ndefs can feed back, but everything is delayed by the block size.
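A little usage sketch of my own (not from the talk), with two Ndefs feeding each other through their \in controls:

Ndef(\a, { SinOsc.ar(220) * 0.1 + (\in.ar(0) * 0.4) }).play;
Ndef(\b, { CombN.ar(\in.ar(0), 0.2, 0.2, 3) });
Ndef(\a) <<> Ndef(\b) <<> Ndef(\a);  // \b feeds \a, \a feeds \b; the loop is delayed by one block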

Hanns Holger Rutz: ScalaCollider

Live blogging the sc symposium
What’s the difference between high-level and low-level?
Why should computer music have a specialised language?
In 2002, JMC rejected the GPL languages that he considered, because they didn’t have the features he needed. But OCaml, Dylan, GOO and Ruby seemed good candidates, which are OOP + FP. They have dynamic typing.
There are a lot of languages that talk to the SC server now. He has a table of several languages and the libraries which extend them to SuperCollider.
Are they dynamically typed or static? Object-oriented? Functional? Do the extension libraries handle UGen graphs? Musical scheduling? Do they have an interactive mode? (All do but Java and Processing.) A domain-specific GUI?
And now a slide of UGen graphs in a bunch of other languages. ScalaCollider is virtually identical to SuperCollider
What’s the Scala language? Invented in 2003 by Martin Odersky, a scientist at EPFL in Switzerland. Has a diverse community of users. It’s a pragmatic language; Scala = scalable language. It draws from Haskell and OCaml, but has Java-like syntax. Runs on top of the JVM (or .NET). Thus it is interoperable with Java and is cross-platform. It is both OOP and FP.
Type “scala” at the prompt and it opens an interpreter.

def isPrime(n: Int) = (2 until n) forall (n % _ != 0)
isPrime: (n: Int)Boolean

If you type “isPrime(3.4)” you get a type error.

def test(n: Float) = isPrime(n)

Also causes a type error
There are also lazy types. Has different names for stuff than sc, but many of the same concepts.
Scala does not easily allow you to add methods to existing classes. You use a wrapper class. You need to do explicit class conversions. However, there is a way to tell the compiler that there’s a method to do the conversion (implicit conversions).
You want to pick a language that will still have a user base in 10 years. How to predict that? Fun tricks with statistics. Scala is less popular than Fortran or Forth. It’s very well designed, though. You can also poll communities on what they think about the language. Users find it expressive, good at concurrency, people like using it, good for distributed computing, reusable code, etc. The downside is that there’s not a lot of stuff written in it.
http://github.com/Sciss/ScalaCollider . http://github.com/Sciss/ScalaColliderSwing
The Swing thing just opens a development environment, which doesn’t have a way to save documents. Really not yet ready for prime time.
Side-effect-free UGens are removed automatically from SynthGraphs.
ScalaDoc creates javadoc-like files describing APIs.
Now there’s some code with a lot of arrows.
The class Object in sclang has 278 methods, not even counting quarks. Subclasses get overwhelmed. Java’s base class java.lang.Object has only 9 methods.
Scala has a form of multiple inheritance (via traits).
(Ok, this talk is all about technical details. The gurus are starting to make decisions about SC4, which will probably include the SuperNova server and might switch to ScalaCollider, and this talk is important for that. However, ScalaCollider is not yet fixed and may or may not be the future of SC, so it’s not at all clear that it’s worthwhile for average users to start learning this, unless, of course, you want to give feedback on the lang, which would make you a very useful part of the SC community. So if you want to help shape the future of SC, go for it. Otherwise, wait and see.)
Latency may be an issue, plus there are no real-time guarantees. In practice, this is OK. The server handles a lot of timing issues. The JIT might also cause latency. You might want to pre-load all the classes.
In conclusion, this might be the future. Sclang is kind of fragmented, because classes can’t be made on the fly, some stuff is written in C, etc. In Scala, everything is written in Scala: no primitives, but still fast.

Thor Magnusson: ixi lang: A SuperCollider Parasite for Live Coding

Summer project: an Impromptu client for the SC server. Start the server, then fire up Impromptu, which is a live coding environment. Start its server and tell it to talk to the SC server. It’s a different way of making music. To stop a function, you re-define it to make errors.
Impromptu 2.5 is being released in a few days, as will Thor’s library, on the ixi website.
Now for the main presentation. He has a long-standing interest in making constrained systems, for example using ixi quarks. These are very cool. He has very elaborate GUIs, modelling predator/prey relationships to control step sequencers. His research shows that people enjoy constraints as a way to explore content.
He’s showing a video taking the piss out of laptop performances, which is funny. How to deal with laptop music: VJing provides visuals. NIME – physical interface controllers. Or live coding. Otherwise, it’s people sitting behind laptops.
ixi lang is an interpreted language that can rewrite its own code in real time and has the power to access SC.
It takes a maximum of 5 seconds of coding to make noise. Easy for non-programmers to use. Understandable for the audience. The system has constraints as well as easy features.
Affordances and constraints are two sides of the same coin. “Affordance” is how something is perceived as being usable.
Composing an instrument has both affordances and constraints.
ixi lang live coding window. There are 3 modes.


agent1   -> xylo[1  5  3  2]

Spaces are silences, numbers are notes. The instrument is a xylophone.

scale minor
agent1   -> xylo[1  5  3  2] + 12

In minor, an octave higher.
“xylo” is a SynthDef name.

SynthDef(\berlin, { . . . }).add;


scale minor
agent1   -> berlin[1  5  3  2] + 12/2

Can add any Pbind-ready SynthDef. Multiply and divide change the speed.


agent1   -> xylo[1  5  3  2]
agent1))

increases amplitude of agent1

percussive mode

ringo -> |t b w b |

Can do crazy pattern things.
Letters correspond to SynthDefs; there is a default library.

sos -> grill[2 3 5 3 ]

Using pitch shifted samples

Concrete mode

ss -> nully{ 1  3 4  6 6 7 8 0    }

0 is silence

tying it together

rit -> | t  t  t ttt  |
ss ->|ttt t t t     |
sso -> | t t t t    t   t|^482846

>shift ss 1

shake ss
up ss
yoyo ss 
doze ss

future 4:12 >> shake ss

group ringo -> rit ss sso

shake ringo

(um, wow. I think I will try to teach this, if I can get a handle on it fast enough.)

ss -> | o   x  o  x|
xxox -> | osdi f si b b i|!12

xxox >> reverb

mel -> wood[1 5 2 3 ]
xo -> glass[32 5 35 46 3] +12

xo >> distort >> techno

shake mel

snapshot -> sn1

snapshot sn1

future 3:4 >> snapshot

scalepush hungarianMinor


suicide 20:5

The suicide function gives it an a% chance of crashing every b units of time.
The user satisfaction survey results are very high. Some people found it too rigid and others thought it was too difficult. Survey feedback came from about 1% of users.
www.ixi-audio.net
This makes live coding faster and understandable. You can put regular SC code in ixi lang docs. A good educational tool; can be used by children. A successful experiment in a very high-level live coding project.
You can easily add audio plugins. The lang is very extensible.

Richard Hoadley: Implementation and Development of Interfaces for Music Generation and Performance through Analysis of Improvised Movement and Dance

Still liveblogging the sc symposium. This speaker is now my colleague at Anglia Ruskin. He also did a poster presentation on this at AES, IIRC.

Small devices, easily portable. Appearance and design affect how people interact. Dancers are not so different from regular people.
He makes little arduino-powered boxes with proximity detectors. This is not new tech, but is just gaining popularity due to low cost and ease of use.
He’s got a picture up called “gaggle”, which has a bunch of ultrasonic sensors. The day before the event at which it was demonstrated, the developers were asked if they wanted to collaborate with dancers. (It’s sort of theremin-esque. There was actually a theremin dance troupe back in the day, and I wonder if their movements looked similar?) The dancers in the video were improvising and not choreographed. They found the device easy to improvise with. Entirely wireless access for them lets them move freely.
How do sounds map to those movements? How nice are the sounds for the interactors (the dancers)?
Now a video of somebody trying the thing out. (I can say from experience that the device is fun to play with).
He’s showing a picture of a larger version that cannot be packed on an airplane, and plans to build even bigger versions. He’s also showing a version with knobs and buttons – and is uncertain whether those features are a good or bad idea.
He also has something where you touch wires, called “wired”. It measures human capacitance. You have to be grounded for it to work. (Is this connected electrically to a laptop?) (He says, “it’s very simple,” and then SuperCollider crashed at that instant.)
The ultrasound thing is called “gaggle” and he’s showing the SC code. The maximum range of the sensor is 3 metres. The GUI he wrote allows for calibration of the device: how far away is the user going to be? How dramatic will the response be to a given amount of movement?
You can use it to trigger a process when something is in range, so it doesn’t need to react dumbly. There is a calibration for “sudden”, which responds to fast, dramatic movements. (This is a really great example of how very much data you can get from a single sensor, using deltas and the like.)
Once you get the delta, average that.
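I imagine the “sudden” detection working roughly like this (purely my own sketch; the threshold is made up):

(
~prev = 0;
~smooth = 0;
~process = { |reading|
    var delta = (reading - ~prev).abs;          // raw change since the last reading
    ~prev = reading;
    ~smooth = (~smooth * 0.9) + (delta * 0.1);  // running average of the delta
    if(~smooth > 0.1) { "sudden movement".postln };
    ~smooth
};
[0.1, 0.12, 0.11, 0.8, 0.2, 0.15].do { |r| ~process.(r).postln };
)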
Showing a video of dancers waving around podium things like you see in art museums.
Now a video of contact dancing with the podiums. There’s a guy with a laptop in the corner of the stage. It does seem to work well, although not as musically dramatically when the dancers do normal dancey stuff without waving their arms over the devices, which actually looks oddly worshipful in a worrying way.
Question: do dancers become players, like bassoonists or whatever? He thinks not because the interactivity is somewhat opaque. Also, violinists practice for years to control only a very few parameters, so it would take the dancers a long time to become players. He sees this as empowering dancers to further express themselves.
Dan Stowell wants to know what the presenter was doing on stage behind the dancers? He was altering the parameters with the GUI to calibrate to what the dancers are doing. A later version uses proximity sensors to control the calibration of other proximity sensors, instead of using the mouse.
Question: could calibration be automated? Probably, but it’s hard.

Daniel Mayer: miSCellaneous lib

still liveblogging the SC symposium
His libs: VarGui, a multi-slider GUI; HS (HelpSynth), HSPar and related.
LFO-like control of synths, generated by Pbinds.
Can be discrete or continuous – a perceptual thing in the interval size.
Discrete control can be moved towards continuous by shortening the control interval.

Overview

Can do direct LFO control. Pbind-generated synths that read from or write to control busses.
Or you can do new values per event, which is language-only, or put synth values in a Pbind.

Pbind generated synths

Write a synthdef that reads from a bus. Write a synth that writes to a bus. Make a bus. Make a Pbind:

Pbind(
 \instrument, \A1,
 \dur, 0.5,
 \pitchBus, c
)
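My guess at the other pieces that Pbind assumes (\A1 and c are the names from the slide; the SynthDef body is entirely mine):

(
SynthDef(\A1, { |out = 0, pitchBus|
    var freq = In.kr(pitchBus).lag(0.05);                     // read the pitch from a control bus
    var env = EnvGen.kr(Env.perc(0.01, 0.4), doneAction: 2);
    Out.ar(out, SinOsc.ar(freq) * env * 0.1);
}).add;
c = Bus.control(s, 1);  // the bus passed to the Pbind as \pitchBus
c.set(440);             // in the real setup, an HS help synth would be writing values here
)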

OK, with his lib: make a sequence of durations. Start the synths. Get the values at the intervals with defined latency. The values are sent back to the language, which adds more latency. Then you have a bunch of values that you can use. If you play audio with it, there is yet another layer of latency.

h = HS(s, { /* ugen graph */ });

p = PHS(h, [], 0.15, [ /* usual Pbind def */ ]).play;

. . .
p.stop; // just stops the PHS
p.stop(true); // also stops the HS

or

// normal synth
// ...

(
p = PHS(h, [], 0.2, [ /* pbind list */ ]).play(c, q);
)

PHS is a PHelpSynth. *new(helpSynth, helpSynthArgs, dur1, pbindData1 . . . durN, pbindDataN)
PHSuse has a clock
PHSpar switches between two patterns.
(I do not understand why you would do this instead of just using a Pbind? Apparently this is widely used, so I assume there exists a compelling reason.)
download it from http://www.daniel-mayer.at
Ah, apparently the advantage is that you can easily connect UGens to patterns, as input sources, with s.getSharedControl(0).