I read too much BBCut documentation and got a handle on basic functionality well enough to teach it. That ate a lot of time. Then I got a reasonable draft of a new piece, which is, of course, not finished because everything could be better.
The new term started, so I had to trek up to Brum for the first meeting, which, actually, I thought was going to be more formal, or I might have skipped it. Meanwhile, I was quickly trying to tear through a 200-page book of critical theory about noise music, so I could give a good lecture.
Then, right away, we had a BEAST weekend. I came to Brum on Thursday evening to help rig speakers, but I showed up late and they got chucked out early, so I just went to the pub. Then I slept on Eric’s laminate floor and was up bright and early the next morning to help finish off the rigging. Then I went to do fun things like pay my fees and talk to somebody at student records about having “Ms” as my title in the computer system. That last one caused some giggles from the person behind the desk.
Then, an afternoon concert at the Barber Centre, on campus. Immediately afterwards, we de-rigged and packed up all the speakers and put them onto a truck, along with about a hundred other speakers, and took them over to the CBSO Centre. Somebody got the idea that we could do large, multi-channel systems at two different venues.
My bike has a flat tyre (AGAIN), so I rode the train into town. Or tried to; I waited more than 45 minutes in the rain just to buy tickets. And then more rigging! James now works for the CBSO and has keys to the building, so they didn’t throw us out at closing time. So we put up 90 channels of speakers and ran cables and the like late into the night. And then went to the pub. I spent the next night in a spare room at Shelly’s house, which had a bed in it! Yay!
And then back the next morning to tape down all the wires. There were 3 concerts Saturday night. And then we went to the pub.
Sunday just had an afternoon concert, though it was possibly long enough to have been two concerts. And then we packed up all the speakers and all the cable and put it back onto trucks. This went shockingly quickly. Then we went to the pub. And then to curry. And then back to the pub. I got drunk enough that I kept asking Eric if he wanted to see my scars. The scars that are just rings around my nipples. He refused. And then, I thought it would be a good idea to break out my hip flask while walking back to where I was staying. (I think I might stop carrying it around, as I’d had the same idea after Sam’s birthday party and probably drank as much alcohol on the way home as I’d drunk at the party. Not that I needed more.)
So the next morning, Monday, I showed up rather late to unload the trucks and put everything back into storage. But it still got done really quickly and then we all went for coffee in the Senior Common Room. This is an area with sofas that sells caffeinated beverages and pre-made sandwiches. I think the drip coffee there could be used as diesel fuel, in a pinch.
There’s a sort of amazing moment I noticed last time, when we go from being a team with a shared experience to just back to normal life. Like, this moment of togetherness that dissipates as people go to sleep it off or have meetings or whatever. I wish I could make a piece of music that does that somehow. This time, though, I missed that moment, as I had to go meet Scott, my supervisor.
I played him the piece that I declared done, and he had some good suggestions for how to change it. Bah. And then I played him my newer piece and he had many more suggestions for that. Since it’s just at a stable draft (good enough to try out at a gig, sort of stable draft), I expected those. Then, huzzah, he told me I could put some improv in my portfolio, so I might throw in some stuff from my last Noise=Noise gig. I really miss improvising and if I could get into a duo or something, that would be ace.
On the train home, I read many more pages of the Noise book and then logged into Facebook and saw Mitch had posted his UK phone number. And I saw it was the 18th and thus his birthday! So I texted him and made arrangements to meet after I got Xena back from Sam. Xena was very happy and has been more energetic and spry since I’ve had her back. Clearly exposure to playful puppies is good for her.
Mitch and I went for curry on Brick Lane and had plans to go on to an improv show, but the curry went too late. It’s funny, because I tend to ask inappropriate questions and, for whatever reason, people tend to answer them. But Mitch, who I’ve known for 17 years now, can seamlessly dodge such questions and change the subject through subtle sleight of hand. Which is wise of him, and also funny.
On Tuesday, I collected audio samples for my lecture and got through most of the rest of the book. Then, I went to go to a SuperCollider meeting, but failed to find the meeting and so went home and worked more. That makes it the only day in the last week when I did not drink any pints.
Wednesday, I woke up at 7-something to get out the door by 8:20 to get the train to Cambridge. I read more on the train and then in the few minutes before class. Last minute cramming, ahoy.
I talked a lot about transgender musicians, specifically Genesis P-Orridge. I could have done a much better job, I think. I was way short of sleep and some of the materials I read had wrong-pronouned him/her, and so I started off by calling him/her “he” instead of “s/he.” Meh, what’s wrong with me? Then I talked about Terre Thaemlitz, who I’m pretty sure goes by “he,” and kept the digression on the crappiness of his “anti-essentialist” identity to a minimum. And then I talked about Venison Whirled, the band of Lisa Cameron, who is a transsexual woman from Austin who does noise music. I don’t think she’s really known outside of the Austin scene, but I figure binary-IDed trans people have a place in noise too. And I did all of this without disclosing, which, I dunno, I probably should have, since it was definitely a sub-theme for the day.
Then, I got on the train to Brum and wrote a slide presentation about TuningLib, my SuperCollider quark, got to uni and then presented it. Scott noticed an error in one of my synthdefs in my sound example and then suggested I fix it in the piece. Which I had counted as done. (It’s not just changing a line of code, it’s re-recording the output and then re-mixing, etc etc etc). *sob* It was doooone. So I guess I have even less finished than when I started the day. And then we went to the pub and I drank a couple of pints without having eaten properly. Wheeee.
Today I woke up at noon and was able to resist feeling guilty about not working for about 3 hours. Not that I started doing work after that. I’ve been dedicatedly faffing (mostly), but feeling bad about it.
In other news, I think I’ve fixed the problem with my phone that was draining the battery away. I ran top and noticed that the RSS reader was eating a ton of CPU. It’s, apparently, part of the OS, so attempts to kill it didn’t help. I finally blew away the preferences folder and it seems to be sorted out. My calendar, however, is still screwed up. I’ve discovered that it just never deletes anything. So if I schedule something to be every tuesday for the next 3 years and then move it to a wednesday, it keeps both versions. I don’t know yet if this is a problem with the phone or the free service I’m using to link it to Google Calendars. I so don’t have time to debug my sodding phone.
Anyway, today Mitch is done with his work in town, so we’re going to hang out. And do something, but I don’t know what.
And that’s most of what’s happened in my life except the stuff that I can’t mention on the public internet. Alas, none of the unmentionable stuff includes nudity.
Tag: celesteh
Stupid BBCut Tricks
I’ve been messing about with the BBCut Library and will shortly be generating some documentation for my students. In the meantime, I give you some commented source code and the output which it creates. In order to play at home, you need a particular sample.
(
var bus, sf, buf, clock, synthgroup, bbgroup, loop, group, cut1, cut2, cut3, stream, pb, cut4, out;

// This first synth is just to play notes.
SynthDef(\squared, { |out, freq, amp, pan, dur|
	var tri, env, panner;
	env = EnvGen.kr(Env.triangle(dur, amp), doneAction: 2);
	tri = MantissaMask.ar(Saw.ar(freq, env), 8);
	panner = Pan2.ar(tri, pan);
	Out.ar(out, panner)
}).add;

// A looping buffer player.
SynthDef(\loop, { |out = 0, bufnum = 0, amp = 0.2, loop = 1|
	var player;
	player = PlayBuf.ar(2, bufnum, 2 * BufRateScale.kr(bufnum), loop: loop, doneAction: 2);
	Out.ar(out, player * amp);
}).add;

// Groups
synthgroup = Group.head(Node.basicNew(s, 1)); // one at the head
bbgroup = Group.after(synthgroup); // this one comes after, so it can do stuff with audio
                                   // from the synthgroup

bus = Bus.audio(s, 1); // a bus to route audio around

// A buffer holding a breakbeat. The first argument is the filename, the second is the
// number of beats in the file.
sf = BBCutBuffer("sounds/drums/breaks/hiphop/22127__nikolat__oldskoolish_90bpm.wav", 16);

// A buffer used by BBCut to hold analysis.
buf = BBCutBuffer.alloc(s, 44100, 1);

// The default clock. 180 is the BPM / 60 for the number of seconds in a minute.
TempoClock.default.tempo_(180/60);

// BBCut uses its own clock class. We're using the default clock as a base.
clock = ExternalClock(TempoClock.default);
clock.play;

// Where stuff actually happens
Routine.run({

	s.sync; // wait for buffers to load

	// start playing the breakbeat
	loop = (instrument: \loop, out: 0, bufnum: sf.bufnum, amp: 0.5, loop: 1,
		group: synthgroup.nodeID).play(clock.tempoclock);
	/* That's an Event, which you can create by using parens like this. We're using an
	   event because of the timing built in to that class. Passing the clock argument
	   to play means that the loop will always start on a beat and thus be synced with
	   other BBCut stuff. */

	// let it play for 5 seconds
	5.wait;

	// start a process to cut things coming in on the bus
	cut1 = BBCut2(CutGroup(CutStream1(bus.index, buf), bbgroup),
		BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);
	/* We use a CutGroup to make sure that the BBCut synths get added to the bbgroup.
	   This is to make sure that all the audio happens in the right order.

	   CutStream1 cuts up an audio stream, in this case from our bus. It uses a buffer
	   to hold analysis data.

	   BBCutProc11 is a cut procedure. The arguments are: sdiv, barlength, phrasebars,
	   numrepeats, stutterchance, stutterspeed, stutterarea
	   * sdiv - subdivision. 8 subdivisions gives quaver (eighth note) resolution.
	   * barlength - normally set to 4 for 4/4 bars. If you give it 3, you get 3/4.
	   * phrasebars - the length of the current phrase is barlength * phrasebars.
	   * numrepeats - total number of repeats for normal cuts. So 2 corresponds to a
	     particular size cut at one offset plus one exact repetition.
	   * stutterchance - the tail of a phrase has this chance of becoming a repeating
	     one unit cell stutter (0.0 to 1.0).
	   For more on this, see the helpfile. And we play it with the clock to line
	   everything up. */

	// wait a bit, so the BBCut2 stuff has time to start
	2.wait;

	// Change the output of the looping synth from 0 to the bus, so the BBCut buffer
	// can start working on it.
	loop.set(\out, bus.index);

	// let it play for 5 seconds
	5.wait;

	// Start another BBCut process, this one just using the sound file.
	cut2 = BBCut2(CutBuf3(sf, 0.3), BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);
	// We use CutBuf instead of CutStream, because we're just cutting a buffer.

	// stop looping the first synth we started
	loop.set(\loop, 0);
	cut1.stop;

	10.wait;

	// To add in some extra effects, we can use a CutGroup.
	group = CutGroup(CutBuf3(sf, 0.5));
	cut3 = BBCut2(group, BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);

	// play it straight for 5 seconds
	5.wait;

	// add a couple of filters to our CutGroup
	group.add(CutMod1.new);
	group.add(CutBRF1({rrand(1000, 5000)}, {rrand(0.1, 0.9)}, {rrand(1.01, 1.05)}));

	10.wait;

	// we can take the filters back off
	group.removeAt(2);
	group.removeAt(2);

	// We can use BBCut cut procedures to control Pbinds.
	stream = CutProcStream(BBCutProc11.new);
	pb = Pbindf(stream,
		\instrument, \squared,
		\scale, Scale.gong,
		\degree, Pwhite(0, 7, inf),
		\octave, Prand([2, 3], inf),
		\amp, 0.2,
		\sustain, 0.01,
		\out, 0,
		\group, synthgroup.nodeID
	).play(clock.tempoclock); // the stream provides durations

	10.wait;

	// We can also process this like we did the loop at the start.
	pb.stop;
	pb = Pbindf(stream,
		\instrument, \squared,
		\scale, Scale.gong,
		\degree, Pwrand([Pwhite(0, 7, inf), \rest], [0.8, 0.2], inf),
		\octave, Prand([3, 4], inf),
		\amp, 0.2,
		\sustain, 0.01,
		\out, bus.index,
		\group, synthgroup.nodeID
	).play(clock.tempoclock);
	cut4 = BBCut2(CutGroup(CutStream1(bus.index, buf), bbgroup),
		SQPusher2.new).play(clock); // SQPusher2 is another cut proc

	30.wait;

	cut3.stop;
	5.wait;
	cut2.stop;
	1.wait;
	cut4.stop;
	pb.stop;
})
)
Writing my Legislators
Dear Honourable Hancock and Skinner,
I’m writing about anti-LGBT bullying in California schools. It was with great distress that I read about the suicide of Seth Walsh in Tehachapi. As you probably know, he was a victim of daily bullying at school by homophobic classmates.
Suicides rates of LGBT youth in California are unacceptably high and bullying is often a factor in these deaths. I hope that the State of California will take action to ensure that schools take bullying seriously and take steps to stop it. I hope also that schools will give the message to LGBT students and their peers that LGBT people are valuable members of society.
LGBT kids need to be safe in small towns as much as they are in big cities. It is not good enough to have a few safe areas, like the Bay Area. Kids in small towns are more isolated and need as much or more protection from harassment and bullying.
Thank you for your time.
Sincerely,
Charles Hutchins
I think national action should be taken on this, but also local and state. You can find out who your CA state legislators are at http://www.leginfo.ca.gov/yourleg.html. When you write, include your home address, so they know you live in their district.
Life and stuff
I’ve been kind of busy as of late. So this is just a list of things going on.
I had the follow up with my surgeon a few weeks ago. She was very pleased. And I had my second appointment at CHX for my next referral. I suspect the wait on getting an appointment may stretch into years, but we’ll see.
Paula’s father died, after a long illness. I went with her to the funeral, which was in a Saxon church in a picturesque village in Sussex. It was a bit dramatic.
I went with her also, in what seemed like the last day of summer, to the beach in Brighton. It’s been so long since I’ve been swimming that I found the elastic in my trunks has died of old age. (I’ll fix them in my copious free time). I applied sunblock and then lay shirtless in the sun, then went swimming in the channel. It was lovely.
Fossbox put on an open day for Software Freedom Day. We got a grant for a training suite, so we got some laptops, put Ubuntu on all of them and set them up to demo stuff. People came and we got good feedback.
I got a job as an hourly paid lecturer at Anglia Ruskin University in Cambridge. I’m teaching a course on electronica. It is eating a lot of time. The target is 10 hours prep time per week, but I went way over that for last week, which was the first class.
Then, the day after my teaching debut, I got on a plane and went to Berlin for the SuperCollider symposium. First use of my new passport. OMG, the privilege! I breeze through border checks. Nobody looks askance. It’s amazing.
I brought my bicycle with me. Every morning, I chanted to myself “bike on the RIGHT” but I still went the wrong way once. Cycling in Berlin is way better than London. Not just because of the wide cycle lanes and large numbers of segregated cycle paths, but because cars are actually looking out. There are bikes everywhere in Berlin. It’s great. Rent is still cheap. The arts scene is still thriving. I still am thinking about moving there.
At one of the concerts in the symposium, a familiar looking woman sat down next to me. It was Thea, Ellen’s friend who moved from Berkeley to Berlin last April! We met up later. She has an artist visa, which is apparently easy to get. She told me people only need to work part time and can spend the rest of their time composing.
I also saw Jörg, who is building mad-scientist – like devices in his amazing studio. He introduced me to his friend, who was also at the symposium. In Berlin, there are weekly sc users group meetings, which is amazing.
The concerts were mostly pretty good. The installations were incredible. There was a club night, where the sc-using DJs got to do their thing. I left at 3am. It went until 5. I think it was the loudest thing I have ever been to in my life. The subwoofers were making my nipples hurt.
Between my bike, my laptop and the book I’m plowing through, my bags were pretty heavy. The first day was a challenge, but I seem to have regained my strength. I kept telling people, “I can lift things!”
Some acquaintances failed to recognise me. This gives me mixed feelings. Very mixed. I know I’ve changed since I was in The Hague, thank goodness. But it’s weird when somebody doesn’t know who I am at all.
The sc symposium is always fantastic. Next time, I am going to present something. I have a research idea.
My composing time has been somewhat eaten by everything else, but I have energy and ideas and will get caught up shortly.
Renate Wiesser and Julian Rohrhuber: Meaning without Words
Last conference presentation to live blog from the sc symposium
A sonification project. Alberto de Campo is consulting on the project.
A project 7 years in the making, inspired by a test from the ’70s. You can distinguish educated from uneducated backgrounds based on how people speak. Sociologists picked up on this. There was an essay about it, using Chomsky’s grammar ideas. Learning grammar as a kid may help with maths and programming. Evidence of how programmers speak would seem to contradict this . . .
But these guys had the idea of sonifying grammar and not the words.
Sapir-Whorf: how much does language influence what we think? This also has implications for programming languages. How does your medium influence your message?
(If this stuff came from the ’70s and was used on little kids, I wonder if I got any of this.)
Get unstuck from hearing only the meaning of words.
Corpus Linguistics
Don’t use grammar as a general rule: no top-down. Instead use bottom-up! Every rule comes with an example. Ambiguous and interesting cases.
Elements
- syntax categories – noun phrases, prepositional phrases, verb phrases. These make up a recursive tree.
- word position: verbs, nouns, adverbs
- morphology: plural/singular, word forms, etc.
- function: subject, object, predicate. <– This is disputed
The linguistics professor in the audience says everything is disputed. “We don’t even know what a word is.”
They’re showing an XML file of “terminals,” words where the sentence ends.
They’re showing an XML file of non-terminals.
Now a graph of a tree – which represents a sentence diagram. How to sonify a tree? There are several nodes in it. Should you hear the whole sentence the whole time? The first branch? Should the second noun phrase have the same sound as the first, or should it be different because it’s lower in the tree?
Now they have a timeline associated with the tree.
They’re using depth-first traversal.
Now the audience members are being solicited for suggestions.
(My thought is that the tree is implicitly timed because sentences are spoken over time. So the tree problem should reflect that, I think.)
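The depth-first idea is easy to sketch. Here is a toy Python version (my own illustration, not their code): each syntax category gets a hypothetical pitch, and a depth-first walk turns the tree into an ordered stream of note events, with depth available as an extra parameter (say, for amplitude or octave).

```python
# A toy sentence tree: each node is (category, children).
# "The dog chased the cat" as S -> NP VP, etc.
tree = ("S", [
    ("NP", [("Det", []), ("N", [])]),
    ("VP", [("V", []), ("NP", [("Det", []), ("N", [])])]),
])

# Hypothetical mapping from syntax category to a MIDI pitch.
PITCH = {"S": 60, "NP": 64, "VP": 67, "Det": 72, "N": 74, "V": 76}

def sonify(node, depth=0):
    """Depth-first traversal: yield (pitch, depth) events in document order."""
    category, children = node
    yield (PITCH[category], depth)
    for child in children:
        yield from sonify(child, depth + 1)

events = list(sonify(tree))
# The root sounds first; deeper nodes arrive later and carry a larger
# depth value, which could map to e.g. a quieter or lower sound.
```

This keeps the sentence's left-to-right order, which fits the point about sentences being spoken over time.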
Ron Kuivila is bringing up Indeterminacy by John Cage. He notes that the pauses have meaning when Cage speaks slowly. One graph could map to many many sentences.
Somebody else is recommending an XML-like approach with only tags sonified.
What they’re thinking is – chord structures by relative step. This is hard for users to understand. Chord structures by assigning notes to categories. They also thought maybe they could build a UGen graph directly from the tree. But programming is not language. Positions can be triggers, syntax as filters.
Ron Kuivila is suggesting substituting other words: noun for noun, etc, but with a small number of them, so they repeat often.
They’re not into this, (but I think it’s a brilliant idea. Sort of reminiscent of aphasia).
Now a demonstration!
Dan Stowell wants to know about the stacking of harmonics idea. Answer: it could lead to ambiguity.
Somebody else is pointing out that language is recursive, but music is repetitive.
Ron Kuivila points out that the rhythmic regularity is coming from the analysis rather than from the data. Maybe the duration should come from how long it takes to speak the sentence. The beat might be distracting for users, he says.
Sergio Luque felt an intuitive familiarity with the structure.
Martin Carlé / Thomas Noll: Fourier-Scratching
More live blogging
The legacy of Helmholtz.
They’re using slow Fourier transforms instead of FFT. SFT!
They’re running something very sci-fi-ish, playing FM synthesis. (FM is really growing on me lately.) FM is simple and easy; with only two oscillators, you get a lot of possible sounds. They modulate the two modulators to form a sphere or something. You can select the spheres. They project the complex plane onto the sphere.
You can change one Fourier thing and it changes the whole sphere. (I think I missed an important step here of how the FM is mapped to the sphere and how changing the coefficients maps back to the FM.)
(Ok, I’m a bit lost.)
(I am still lost.)
Fourier scratching: “you have a rhythm that you like, and you let it travel.”
OK, the spheres are in Fourier-domain / time-domain pairs. Something about the cycle of 5ths. Now he’s changing the phase of the first coefficient. Now there are different timbres, but the rhythm is not changing.
(I am still lost. I should have had a second cup of coffee after lunch.)
(Actually, I frequently feel lost when people present on maths and the like associated with music. Science / tech composers are often smarter than I am.)
You can hear the coefficients, he says. There’s a lot of beeping and some discussion in German between the presenters. The example is starting to sound like you could dance to it, but a timbre is creeping up behind. All this needs is some bass drums.
If you try it out, he says, you’ll dig it.
Finite Fourier analysis with a time domain of 6 beats. Each coefficient is represented by a little ball and the signal is looping on the same beat. The loops move on a complex plane. The magnitude represents something with FM?
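As far as I understood it, the core move is: take a short rhythm, compute its discrete Fourier coefficients (slowly, no FFT), nudge one coefficient, and transform back. A rough Python sketch of just that part (the rhythm values and the phase nudge are made up; the FM mapping is not included):

```python
import cmath

def dft(x):
    """Slow discrete Fourier transform, normalised by 1/N."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)) / N
            for k in range(N)]

def idft(X):
    """Inverse of the dft above (no extra normalisation needed)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N))
            for n in range(N)]

# A 6-beat rhythm: 1.0 = accent, 0.5 = softer hit, 0.0 = rest.
rhythm = [1.0, 0.0, 0.5, 0.0, 1.0, 0.0]
coeffs = dft(rhythm)

# "Scratch" one coefficient: rotate its phase, leaving its magnitude alone.
scratched = coeffs[:]
scratched[1] *= cmath.exp(1j * cmath.pi / 3)

# Transform back: a new 6-beat loop related to, but not identical to, the old one.
new_rhythm = [z.real for z in idft(scratched)]
```

Each coefficient is one of the "little balls": a point on the complex plane whose magnitude and phase you can drag around while the loop keeps playing.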
The extra dimension from Fourier is used to control any parameter. It is a sonification. This approach could be used to control anything. You could put a mixing board on the sphere.
JMC changed the definition of what it means to exponentiate.
Ron Kuivila is offering useful feedback.
Alo Allik: Audiovisual Composition with Three-Dimensional Continuous Cellular Automata
Still live blogging the supercollider symposium
f(x) – audio visual performance environment, based on 3D cellular automata. Uses Objective C, but the audio is in the SC server.
The continuous cellular automata are values between 0 and 1. The state at the next time step is determined by evaluating the neighbours + a constant. Now, a demo of a 1-D world of 19 cells. All are 0 except for the middle, which is 1. Now it’s chugging along. 0.2 added to all. The value is modulus 1, to just get the fractional part. Changing the offset to 0.5 really changes the results. Can have very dramatic transitions, but with very gradual fades. The images he’s showing are quite lovely. And the 3D version is cool.
Then he tried changing the weight of the neighbours. This causes the blobs to sort of scroll to the side. The whole effect is kind of like rain drops falling in a stream or in a moving bit of water in the road. Can also change the effect by changing the add over time.
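The update rule, as I understood it, is simple enough to sketch in Python (the exact neighbour weights and the wrap-around behaviour are my guesses, not his code):

```python
def step(cells, offset=0.2, weights=(1.0, 1.0, 1.0)):
    """One update of a continuous 1-D CA: weighted mean of each cell's
    neighbourhood plus a constant offset, keeping only the fractional part."""
    n = len(cells)
    out = []
    for i in range(n):
        # left neighbour, self, right neighbour (wrapping at the edges)
        neighbourhood = (cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
        v = sum(w * c for w, c in zip(weights, neighbourhood)) / sum(weights)
        out.append((v + offset) % 1.0)  # modulus 1: just the fractional part
    return out

# The 19-cell demo world: all zeros except a single 1 in the middle.
world = [0.0] * 19
world[9] = 1.0
for _ in range(10):
    world = step(world)
```

Uneven weights (e.g. `(1.5, 1.0, 0.5)`) bias the averaging to one side, which would give the sideways-scrolling blobs he showed.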
Now he’s demoing his program and has allowed us to download his code off his computer. Somehow he’s gotten grids and stuff to dance around based on this. “The ‘World’ button resets the world.” Audience member: “Noooo!”
Now an audio example, one that’s very clearly tied in. Hopefully this is in the sample code we downloaded. It uses Warp1.ar 8 times.
This is nifty. Now there’s a question I couldn’t hear. Alo’s favourite pastime is to invent new mappings. He uses control specs on data from the visual app. There are many many cells in the automata, thus he polls the automata when he wants data, and only certain ones.
More examples!
Julian Rohrhuber: Introducing Sonification Variables
More sc symposium live blogging
sonification
Objectivity is considered important in the sciences. The notions of this have changed quite a bit over the last 50 years, however. The old style of imaging has as much data as possible crammed in, like atlas maps. Mechanical reproduction subsequently becomes important – photos are objective. However, perception is somewhat unreliable. So now we have structural objectivity which uses logic + measurements.
We are data-centric.
What’s the real source of a recording? The original recording? The performer? The score? The mind of the composer?
Sound can be just sound, or it can just be a way of conveying information or something in between. You need theory to understand collected data.
What do we notice when we listen that we wouldn’t have noticed by looking? There needs to be collaboration. Sonification needs to integrate the theory.
In sonification, time must be scaled. There is a sonification operator that does something with maths. Now there are some formulas on his slide, but no audio examples.
Waveshaping is applying one function to another.
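In the simplest case that’s just passing each sample of one signal through another function. A tiny illustration of my own (not from the talk): soft-clipping a sine with tanh, which squashes the peaks and adds harmonics:

```python
import math

def waveshape(signal, shaper=math.tanh):
    """Waveshaping: apply one function (the shaper) to another (the signal)."""
    return [shaper(s) for s in signal]

# A sine that overshoots [-1, 1], soft-clipped back into range by tanh.
sine = [2.0 * math.sin(2 * math.pi * n / 64) for n in range(64)]
shaped = waveshape(sine)
```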
Theoretical physics. (SuperCollider for SuperColliders.) Particles accelerate and a few of them crash. Electrons and protons in this example. There’s a diagram with squiggly lines. Virtual photons are emitted backwards in time? And interacts with a proton? And something changes colour. There’s a theory or something called BFKL.
He’s showing an application that’s showing an equation and has a slider, and does something with the theory, so you can hear how the function would be graphed. Quantum Mechanics is now thinking about frequencies. Also, this is a very nice sounding equation.
Did this enable them to discover anything? No, but it changed the conceptualisation of the theory, very slightly.
Apparently, the scientists are also seeking beauty with sonification, so they involve artists to get that?
(I may be slightly misunderstanding this, I was at the club event until very late last night (this morning, actually).)
Ron Kuivila is saying something meaningful. Something about temporality, metaphilosophics, enumeration of state. Sound allows us to hear proportions with great precision, he says. There may be more interesting dynamical systems. Now about linguistics and mathematics and how linguistics help you understand equations, and this is like Red Bird by Trevor Wishart.
Sound is therefore a formalisation.
Miguel Negrão: Real time wave field synthesis
Live blogging the sc symposium. I showed up late for this one.
He’s given a summary of the issues of wave field synthesis (using two computers) and is working on a sample accurate real time version entirely in supercollider. He has a sample-accurate version of SC, provided by Blackrain.
The master computer and slave computer are started at unknown times, but synched via an impulse. The sample number can then be calculated, since you know how long it’s been since each computer started.
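If I followed correctly, the arithmetic is just: when the shared impulse arrives, each machine knows how long it has been running, so both elapsed times can be converted to sample counts, giving a constant offset between the two sample clocks. A toy Python sketch (the sample rate and elapsed times are invented for illustration):

```python
SAMPLE_RATE = 44100  # samples per second (an assumption for this sketch)

def sample_number(seconds_since_start):
    """Sample index on a machine that has been running this long."""
    return round(seconds_since_start * SAMPLE_RATE)

# The impulse reaches both machines at the same moment, but each machine
# started at a different, unknown time.
master_elapsed = 3.50  # seconds the master had been running at the impulse
slave_elapsed = 1.25   # seconds the slave had been running at the impulse

# The constant offset between the two sample clocks, valid from then on:
offset = sample_number(master_elapsed) - sample_number(slave_elapsed)
```

From then on, "master sample n" corresponds to "slave sample n - offset", which is what makes sample-accurate scheduling across the two machines possible.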
All SynthDefs need to be the same on both computers. All must have the same random seed. All buffers must be on both. Etc. So he wrote a Cluster library that handles all of this, making two copies of everything in the background, but looking just like one. It holds an array of stuff. Has no methods of its own, but sends stuff down to the stuff it’s holding.
Applications of real time Wave Field Synthesis: connecting the synthesis with the place where it is spatialised. He’s doing some sort of spectral synthesis-ish thing. Putting sine waves close together, you get nifty beating, which creates even more ideas of movement. The position of the sine wave in space gives it a frequency. He thus makes a frequency field of the room. When stuff moves, it changes pitch according to location.
This is an artificial restriction that he has imposed. It suggested relationships that were interesting.
The scalar field is selected randomly. Each sine wave oscillator has (x, y) coords. The system is defined by choosing a set of frequencies, a set of scalar fields and groups of closely tuned sine wave oscillators. He’s used this system in several performances, including in the symposium concert. That had a maximum of 60 sine waves at any time. It was about slow changes and slow movements.
His code is available at http://github.com/miguel-negrao/Cluster
He prefers the Leiden WFS system to the Berlin one.
Julian Rohrhuber: <<> and <>> : Two Simple Operators for Composing Processes at Runtime
Still Live blogging the SC symposium
A Proposal for a new thing, which everybody else here seems to already know about.
NamedControl
a = { |freq = 700, t_trig = 1.0| Decay.kr(t_trig) * Blip.ar(freq) * 0.1}.play
becomes
a = { Decay.kr(\trig.tr) * Blip.ar(\freq.kr(400)) * 0.1 }.play; a.set(\trig . . .
JITLib
Proxy stuff. (Man, I learned SC 3.0 and then now there’s just all this extra stuff in the last 7 years and I should probably learn it.)
ProxySpace.push(s); ~out.play; ~out = { Dust.ar(5000 ! 2, 0.01) }; ~out.fadeTime = 4;
a = NodeProxy(s); a.source = { Dust.ar(5000 ! 2, 0.01) };
Ndef(\x, . . .)
(there are too many fucking syntaxes to do exactly the same thing. Why do we need three different ones? Why?!!)
Ndef(\x, { BPF.ar(Dust.ar(5000 ! 2, 0.01)) }).play; Ndef(\x, { BPF.ar(Ndef.ar(\y), 2000, 0.1) }).play; Ndef(\y, { Dust.ar(500) })
. . .
Ndef(\out) <<> Ndef(\k) <<> Ndef(\x)
does routing
NdefMixer(s) opens a GUI.
Ron Kuivila asks: this is mapping input. Notationally, you could pass the Ndef a symbol array. Answer: you could write map(map(Ndef(\out), \in, Ndef(\x) . . .
Ron says this is beautiful and great.
Ndef(\comb) <<>.x nil // adverb action
The reverse syntax just works from the other direction.
Ndefs can feed back, but everything is delayed by the block size.