Gig Report: The Royal Vauxhall Tavern

I’ve recently joined a rock band, Helen’s Evil Twin, and last night was my first gig with them. It was their highest profile show yet and one of the biggest pop music gigs that I’ve ever played. We were at the Wotever Extravaganza, part of the Royal Vauxhall Tavern’s Hot August Fringe Festival.
The RVT used to be a music hall. This was a form of mass entertainment that predated things like radio and TV. The working classes would cram into cabaret theatres and watch people sing and play piano and dance, etc. So the club has a long bar on one wall and then the seating is arranged in a sort of semi-circular pattern, facing a small stage. Upstairs, there is a kitchen, an office, a large room holding many stage props and upstairs from that there are dressing rooms and a flat that somebody seems to live in.
Helen, the guitarist (every third British woman is named Helen), and I met early in the afternoon and loaded up her drum kit and two very tiny amplifiers into a hired car and took them to the RVT, but they ended up not being used, as the other band decided an hour beforehand to hire much better gear. In the UK, it seems to be very common that rock acts will share drum kits and amps at shows, whereas, in the States, every band seems to bring their own gear.
The show started at six with some poets, then DJ + dancing, then some cabaret acts, then more DJ + dancing. Some of the acts were quite compelling. Two highlights were Jet Moon doing her bit about “Femme Packing,” which is fun, and Michael Twaits doing part of his show, Icons, plus a piece about the Stonewall Riots, which I’ve seen him do about three times now and I get a lump in my throat every time, because it is so very excellent.
I went up to our dressing room to get changed. Taylor, the violinist, came up and reported that there was a naked man on stage, speaking like the characters from The Sims and smearing himself with paint. We tuned our instruments and, when we heard echoes of Ingo’s voice booming from below, we walked down to have a very brief tech check while another DJ was spinning tunes and a few people danced.

We hadn’t had a sound check, so the tech was asking questions and running cables. It was very suboptimal, but half the band hadn’t been able to show up until after the event started and it probably didn’t make sense to check with only bass and guitar. So we waited around backstage and I tried not to be overly nervous. Hoops, the drummer, said, “This is so exciting! It’s like being backstage before a gig!”

The club was packed, but Wotever audiences are very friendly. I was just concerned about making mistakes, as I’m new and all the songs had completely left my head. I had written out the chords next to song titles on my set list, so if the basslines completely escaped me, at least I could play the right roots.
Ingo came out and announced us and mentioned that I was the new bassist and people cheered. The stage lights were very bright, so I couldn’t really see people, except for one guy close to the front who seemed to fancy me. We started playing through our set and I wasn’t screwing up as much as I feared, so that was ok. And people were cheering and dancing in the front. When Helen said we were nearly done, a large number of people yelled “more” at us. We were like fucking rock stars!
I went upstairs and changed back into my street clothes and then watched the next band, The Blow Waves, who describe themselves as “the campest band in the world.” They were very fun.

Post Mortem

I need to be less nervous and have more stage presence. Also, I need to wear earplugs, as my ears were ringing like mad afterwards. Sound checks are almost always a good thing. I think, in general, we should stand farther forward, take up more space, and own the stage more, because we could totally be rock stars. Or at least, I totally want to be a rock star, which is almost the same thing. With screaming fans, dancing people and dressing rooms!
I’m thinking I might want to write a few songs about pop music topics: love, sex and death. And by sex, I mean gender, of course.
Our next gig is in the West Midlands at Worcester Pride, on 22 August at the Mars Bar.

I promised more blogging

I haven’t written about gender stuff for a while. I finally had my appointment with the Charing Cross Gender Clinic, after months of waiting. Fortunately, the shrink had actually read the amusingly stupid report from the previous shrink, so I was not forced to recount my childhood yet again, just a few details of it. I don’t know why they care about it. Some trans people aren’t dysphoric at all before puberty. Heck, some aren’t really dysphoric until well after puberty. And I hate that my unwillingness to skip rope is considered a sign of being trans. It was mostly a sign of being a huge nerd, something that was not tied to gender at all. I was awkward and unathletic. I also was unable to protect my face during dodgeball and hated it too. Does that mean I’m really a girl after all?
They need two appointments before they will give me a referral and they’re understaffed, so appointment number 2 is in February. I might be able to call occasionally and see if something sooner has become available, but I don’t want to feel guilty about queue jumping, so I might not. The UK economy is kind of fucked, so maybe I should just pay privately, especially if I can get a part-time job.
All the gender stuff is still really vital to me, but I just don’t want to talk about it. Somebody on a website had a go at me a few weeks ago about my gender issues and history and it really sucked. So I quit posting anything of import there and I’ve quit posting here and I quit seeing my shrink when T died, but the not-talking-about-it school of dealing with life seems to work as well as the talking-endlessly-about-it approach. After a while, it all gets boring. My cousin had a book called “After Enlightenment, the Laundry.” Like, no matter how fascinating your current thing is, after a while, the mundanity of real life reclaims the center stage.

Speaking of which

In my real life, shortly after I gave my concert in May, my dad came to the UK for a month. He stayed down the street from my flat for a bit and traveled for a bit and then we went to Ireland together and then he went home. In July was gay pride and a bunch of other stuff that seemed to suck up all my energy and now I can’t even remember what it was. Helen and I cycled in a big loop around the Isle of Wight, which was nifty and very hilly. I love biking. August is going to slip quickly past.
I joined a band called Helen’s Evil Twin. I’m the bassist, so I’m in the non-acoustic line up. My first gig with them is on August 13th. As it happens, this is a high profile gig and a large percentage of people I know in London will be there.
In other news, I’m trying to get caught up with where I should be in my PhD, but this is making the writer’s block thing worse instead of better. It seems like everything I write takes a long time and then comes out boring. I should write a whole huge amount of stupid crappy pieces, just to get going, and then pick the good parts from all of them and combine them into one good piece. Or something. I’m worrying too much and I think I need to do a masterpiece or something. I keep reading about symphony composers from a hundred years ago, and they’re all geniuses who write masterpieces and spend years on them and say something really meaningful. Intellectually, I’m against that, but intellectually, I’m against a lot of things that I can’t actually seem to shake free.
And now, here’s a boring blog post to go with my boring attempts at music lately. I had a conversation with a guy a couple of years ago about how he would rather be crazy and write good music than happy and boring. I’m happier than I was when I had that conversation, but I think I would have ended up musically boring either way.

Blog recommendation

Lonely Gender is a genderqueer blog about gender issues related to questioning and transition. It’s really well written and everybody should go read it.
I kind of quit posting on my own blog because I felt like I was saying some problematic sort of internalized badness things. I wrote several months ago that coming out as trans was equivalent to talking about my genitals, which is just wrong. It’s like saying that being gay is all about genitals. It’s just not. And I thought that I didn’t need to be inflicting that crap on the world, so I shut up.
I kind of miss blogging and might start it up again in earnest.

A learning experience

This weekend is full of BEAST concerts at the CBSO centre in Birmingham. These events always involve copious consumption of alcohol, so I’m writing this while not entirely sober, but I think drinking was called for.
Friday night was the student pieces and I actually played something: my first gig with a huge speaker array. There were more than 80 speakers around the room. There’s some special software which people can use to diffuse their tape pieces. It works well with stereo or 8-channel pieces, but can handle more inputs. I’ve never used this software and I’m inexperienced with how to diffuse stuff. I have no idea how one would map a stereo file to 80 speakers, so I thought it would be easier if I just used the hardware interfaces and not the software. I wrote a piece that sounded ok in stereo, then went in to uni last week to figure it out in 8 channels, got a rough map of what I wanted, went home and implemented it. When I got the list of which speakers were going where for this weekend, I changed my code to use the ones that seemed reasonable.
I had half an hour allocated for the dress rehearsal. I was pretty sure my code was going to work with only minor changes needed for amplitudes. It wouldn’t run at all. I made a change. The first minute of it ran. I made another change, I got 2 minutes out. I was out of time and the rest was not working at all. Only a few sections even made noise. The rest were silent or missing crucial bits. And I had no more time on the computer.
I went to hammer at it on my laptop. Scott, my supervisor, gave me some advice and showed me how to get a bunch of level meters, so I could see if channels had audio, even if I couldn’t hear it from my laptop. I made changes, just trying arbitrary approaches to see what helped. And then just trying to hold it together, like it was made of gaffer tape and Blu-Tack. Scott said I could have a few minutes to check it during the interval between concerts, so it would be a last-second test.
I got on the system during the interval and ran my piece and it sounded like hell. The first few bits were ok, but then it just degraded. It was playing the wrong samples and the sound quality was crap, as if the frequency for the filter had been set to zero. It would have been hypothetically ok that it wasn’t what I wrote, if it had worked musically at all. I turned to Scott and said it sounded like the Synth library wasn’t loading correctly. I had 5 minutes before the concert was scheduled to start, and they wanted to start on time. I was trying to decide if it was worth just playing the first 3 minutes and then fading out. He squinted at my code and said, “oh, it’s not loading correctly, because you can’t do it that way” and told me what to change. I did the change and then ran about 30 seconds of that part and it worked.
They started letting people in just then. I realized that his fix meant that a lot of the other missing stuff was also going to come back, along with all the crap I added trying desperately to get the thing to work. I didn’t know what it was going to sound like, just that end might be really really loud.
So, I had a piece that I had cobbled together from wreckage over the course of the afternoon, that I had never heard before that I was going to play in front of a live audience. I was ready to fade it out whenever it started wheezing towards death.
So I started playing it and had control only over the master volume, so when volumes turned out to be wildly wrong, I could only turn the whole thing up or down. And some stuff came out of speakers that weren’t the ones I should have picked. And the very last section was extraordinarily loud, because the desperate repairs I had tried to make all suddenly started working at once. Every time something started playing, I breathed a sigh of relief. A few bits were missing (some glitches also strangely vanished), but it was 90% of what I wrote. Again: with 3 minutes to spare, we found the bug, were barely able to test, and it played ok through to the end.
It was not the most stressful performance experience I’ve ever had, but it was close. It’s the most stressful one I’ve ever been able to pull off.
I learned some stuff from this. Incidentally, my horoscope last week said I shouldn’t just write how-to documents, I should share stories of how I came to want or need such a howto and I wondered how this astrological advice might apply to me. I am not asking the stars for a demonstration again!

learned

SuperCollider, by default, can only deal with a certain number of busses and UGen interconnections. You have to set a ServerOption when you have a huge speaker array: s.options.numWireBufs. I set it to 1024 and that fixed that problem. Incidentally, this shortage of wire buffers did not give me error messages until I tried moving the master fader up and down. Then I got a bunch of “node not found” errors and another one that seemed more topical; that one explained a large portion of why my piece wasn’t working.
You can get a bunch of graphical meters (one for every defined output channel) by clicking on the server window to make it active and then typing the letter ‘l’. This is apparently undocumented, but it is really very helpful.
If you want to change amplitudes with a fader or whatever in a live situation, the best way to do it is with busses. Normally, I would change it in a pbind, so that the next note would be louder or softer, but if you’re playing longer grains, the feedback isn’t fast enough to respond to issues that come up in real time, and slow changes are hard to hear, so you can’t tell whether your change is even working. Live control busses are good.
Don’t write any amplitude changes that cause peaking. I happen to think that digital peaking sounds good with floating-point numbers. However, the BEAST system is cranked and so are many other systems. If you want to peak, you’ve got to either turn all the speakers down (yeah, right) or fake it. Without the peaking, what was supposed to be a modest crescendo became stupidly huge.

memStore

When I was working on the piece, I started to suspect that I was having issues with writing my SynthDefs to disk. I thought my older versions were persisting, so I decided not to write them at all, but just load them into memory and send them to the server. So I did SynthDef(\blahblah, { ... }).send(s); You cannot do that. You must memStore it. If you send it to the server, your pbinds don’t know what the arguments to the synthdef are, and instead of just sending the ones you provide, it sends crap.
This is a bug in SuperCollider. Yeah, it’s documented, but just because a bug appears in the help files doesn’t mean it’s not a bug. There are too many different ways to save synthdefs. There should be two. One should write it to the disk and do all the right things. One should keep it in memory and do all the right things. I doubt there is a compelling argument as to why you would want to be able to send a synthdef to the server but not be able to use it with a pbind. Any argument in favor of that has to be pretty fucking compelling, because it really makes no sense. It’s confusing and nonsensical. Frankly, sending a synthdef that you can’t use in every case should be an exception and not the norm, so perhaps it could be set as an obscure option, not the method shown most often in helpfiles. The send/pbind combination is broken and needs to be fixed.
And while we’re at it, pbinds are not innocent in this. If I set a bunch of environment variables as if they were arguments to a synth, what good reason is there for not sending them as arguments?
I’m fine with there being more than one way to do things and every language has its obscurities, but SuperCollider is overdoing it a bit. These common things that you do all the time trip up students, not just because they’re inexperienced, but because these idiosyncrasies are errors in the language. This isn’t French! It doesn’t help noobs or old timers to have weird crap like this floating around. Flush it! My whole piece was almost sunk by this and I am having trouble believing that whatever advantage it might provide is worth the exposure to accidental error it causes. If the need is obscure, make the invocation obscure. It’s like setting traps otherwise.
But hey, it all kind of worked. I might do a 5.1 mix next, especially if I can find such a system to work with. Otherwise, look out for stereo.

More Tuning

While continuing to ponder tuning, I realized that it would be possible to create a dissonance curve for just intonation. Instead of judging how close the frequencies are to each other to look for roughness, you would look at what tuning ratio they described. If one frequency was 1.5 times the other, then that’s a ratio of 3 / 2. Then add the numerator and denominator to get 5. Then scale by amplitude.
In Sethares’ dissonance curves, you get scale degrees by searching for minima in the curve, but that approach is not a meaningful way to sort just ratios. Instead, they can be sorted by their relative dissonance.
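To make that measure concrete, here is a sketch in Python. This is my own illustration, not the code in DissonanceCurve.sc: find the just ratio a pair of frequencies describes, add numerator and denominator, and scale by amplitude (the function name and the `limit_denominator` tolerance are my choices).

```python
from fractions import Fraction

def just_dissonance(f1, a1, f2, a2, max_denominator=100):
    """Lower is more consonant: 2/1 scores 3, 3/2 scores 5, 16/15 scores 31."""
    ratio = Fraction(max(f1, f2) / min(f1, f2)).limit_denominator(max_denominator)
    # add numerator and denominator, then scale by the quieter amplitude
    return (ratio.numerator + ratio.denominator) * min(a1, a2)

# 450 Hz against 300 Hz describes the ratio 3/2, so the score is 3 + 2 = 5:
print(just_dissonance(300.0, 1.0, 450.0, 1.0))  # 5.0
```

Sorting a set of intervals by this score gives the “relative dissonance” ordering described above.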
I’ve updated my class, DissonanceCurve.sc (and fixed the url. sorry) so it can do just curves also. I ran it with the following (rough draft-ish) code:

b = Buffer.alloc(s, 1024); // use global buffers for plotting the data
c = BufferTool.open(s, "sounds/a11wlk01.wav");

// when that's loaded, evaluate the following
(
Task({

	d = SynthDef(\foo, {
		FFT(b, PlayBuf.ar(1, c.bufnum, BufRateScale.kr(c.bufnum))); 0.0
	}).play;

	0.2.rrand(3.7).wait;

	e = DissonanceCurve.newFromFFT(b, 1024, highInterval: 2, action: { arg dis;

		var degr;

		d.free;

		dis.scale.do({ |deg|
			postf(" % / %,", deg.numerator, deg.denominator);
		});
		"\n---just---".postln;
		dis.scale.size.max(25).do({ |index|
			degr = dis.just_scale[index];
			postf(" % / %,", degr.numerator, degr.denominator);
		});
	});
}).play
)

And, after seriously heating up my computer, and waiting a bit, I got the following output:

 1 / 1, 29 / 28, 6 / 5 , 5 / 4 , 33 / 26, 4 / 3, 15 / 11, 29 / 21, 7 / 5, 10 / 7, 
3 / 2 , 8 / 5, 5 / 3, 27 / 16, 49 / 29, 12 / 7, 67 / 39, 7 / 4, 17 / 9, 2 / 1, 
---just---
 1 / 1, 3 / 2, 2 / 1, 4 / 3, 5 / 4, 6 / 5, 7 / 6, 9 / 8,  8 / 7, 10 / 9, 5 / 3, 
12 / 11, 15 / 14, 13 / 12, 11 / 10, 16 / 15, 9 / 7, 7 / 5, 14 / 13, 22 / 21, 7 / 4, 
10 / 7, 21 / 20, 11 / 8, 11 / 9,

The top section is the Sethares algorithm dissonance curve. I made a minor adjustment so that it looks at fractions one cent on either side of the minima and grabs the simpler one if it exists. (This is optional; add “simplify: false” to the method invocation to turn it off.)
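That simplification step can be sketched in Python (my own paraphrase, not the code in DissonanceCurve.sc): approximate the interval and its neighbors one cent to either side as fractions, then keep the simplest candidate that still falls inside the one-cent window. The `max_denominator` bound is an assumption of this sketch.

```python
from fractions import Fraction

CENT = 2 ** (1 / 1200)  # the frequency ratio of one cent

def simplify_interval(interval, max_denominator=100):
    """Return the simplest fraction within one cent of `interval`."""
    best = None
    for x in (interval / CENT, interval, interval * CENT):
        f = Fraction(x).limit_denominator(max_denominator)
        # only keep candidates that really fall inside the one-cent window
        if interval / CENT <= f <= interval * CENT:
            if best is None or f.numerator + f.denominator < best.numerator + best.denominator:
                best = f
    return best

print(simplify_interval(118 / 85))  # 25/18
```

This reproduces the 118/85 → 25/18 swap discussed further down the page.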
The bottom section is the 25 least dissonant just ratios. Looking first at those, note that 1/1 is the least dissonant, as one would expect. Usually, 2/1 would be next, but note that in this case, it’s 3/2 instead. The algorithm favors ratios of small numbers, which is logical. There are a lot of (d+1)/d fractions: 4/3, 5/4, 6/5, 7/6, 9/8. It hugely favors these. The top half of the octave is underrepresented. I do not know why this is so.
But Sethares’ algorithm, because it uses the critical band, tends to favor higher pitches as more consonant. However, since we search for minima rather than order the intervals by dissonance, this tendency’s effect on the results is reduced.
Both of these computations of dissonance seem to give meaningful data, and the results do seem to correlate somewhat: on both lists we find 6/5, 5/4, 4/3, etc. However, the length of the list of just ratios is arbitrary. If we take only the Sethares intervals that are also in the top 5% most consonant (least dissonant) just intervals, we are left with:

 1/1, 29/28, 6/5, 5/4, 4/3, 15/11, 7/5, 10/7, 3/2, 8/5, 5/3, 12/7, 7/4,
2/1

Of those, 29/28 is the most dissonant by both measures, so it may not be the best scale degree. If that’s the case, then the top 5% is not the best cutoff. So what is? How do we choose it?
On the other hand, one way that just intonation is corralled is through factorization limits. For example, 7-limit tuning means that none of the numbers in the ratios may have a prime factor greater than 7. So 14 is ok (7 × 2), but 11 and 13 are not, as they’re primes greater than 7. If we were to apply a 7-limit to the Sethares curve, the scale we would have is

1/1, 6/5, 5/4, 4/3, 7/5, 10/7, 3/2, 8/5, 5/3, 27/16, 12/7, 7/4, 2/1

Is that better? Does the 27/16 (aka (3 · 3 · 3)/(2 · 2 · 2 · 2)) impact that?
Alas, we can’t use our ears because we don’t know what moment of the source was measured. But we can use our ears with a synthetic sound whose frequency content is known.

f = [50] ++ ( [50/27, 18/7, 54/25, 25/27, 9/7, 27/25, 25/54, 9/14, 27/50] * 300);
a = [0.055, 0.1, 0.1, 0.1, 0.105, 0.105, 0.105, 0.11, 0.11, 0.11];
e = DissonanceCurve.new(f, a, 2);

With some print statements, abbreviated for the sake of not being too boring, we get a Sethares scale of 1, 7/6, 25/21, 25/18, 36/25, 42/25, 12/7, 2, which, note, falls within a 7-limit. For the top 8 just results, we get 1, 3/2, 6/5, 5/4, 7/5, 5/3, 34/27, 10/9: a list which does not include 2! And if we do the top-5% thing described above, we get 1, 7/6, 25/21, 25/18, 36/25, 2. And we can compare these aurally:

(
 SynthDef("space", {|out = 0, freq = 440, amp = 0.2, dur = 1, pan = 0|
  var cluster, env, panner;
 
  // detune
  freq = freq + 2.0.rand2;
 
  cluster = 
  SinOsc.ar(50, 1.0.rand, 0.055 * amp) + 
  SinOsc.ar((freq * 50/27) + 1.0.rand2, 1.0.rand, 0.1 * amp) + 
  SinOsc.ar((freq * 18/7) + 1.0.rand2, 1.0.rand, 0.1 * amp) + 
  SinOsc.ar((freq * 54/25) + 1.0.rand2, 1.0.rand, 0.1 * amp) + 
  SinOsc.ar((freq * 25/27) + 1.0.rand2, 1.0.rand, 0.105 * amp) + 
  SinOsc.ar((freq * 9/7) + 1.0.rand2, 1.0.rand, 0.105 * amp) + 
  SinOsc.ar((freq * 27/25) + 1.0.rand2, 1.0.rand, 0.105 * amp) + 
  SinOsc.ar((freq * 25/54) + 1.0.rand2, 1.0.rand, 0.11 * amp) + 
  SinOsc.ar((freq * 9/14) + 1.0.rand2, 1.0.rand, 0.11 * amp) + 
  SinOsc.ar((freq * 27/50) + 1.0.rand2, 1.0.rand, amp * 0.11);
 
  env = EnvGen.kr(Env.perc(0.05, dur + 1.0.rand, 1, -4), doneAction: 2);
  panner = Pan2.ar(cluster, pan, env);
  Out.ar(out, panner);
 }).send(s);
)
(
Pbind(
	// Sethares
	\freq, Prand([1, 7/6, 25/21, 25/18, 36/25, 42/25, 12/7, 2] * 300, 27),
	\dur, 0.3,
	\instrument, \space,
	\amp, 0.2,
	\pan, 0
).play
)
(
Pbind(
	// Just
	\freq, Prand([1, 3/2, 6/5, 5/4, 7/5, 5/3, 34/27, 10/9] * 300, 27),
	\dur, 0.3,
	\instrument, \space,
	\amp, 0.2,
	\pan, 0
).play
)
(
Pbind(
	// Top 5%
	\freq, Prand([1, 7/6, 25/21, 25/18, 36/25, 2] * 300, 27),
	\dur, 0.3,
	\instrument, \space,
	\amp, 0.2,
	\pan, 0
).play
)

Which of those pbinds do you think sounds best? Leave a comment.

Questions about Differing Approaches to Tuning

Let’s start by all admitting that Equal Temperament is a compromise and that computers can do better. They’re fast at math and nothing physical needs to move, so we can do better and be more in tune. (The next admission is that I haven’t had the attention span to actually read all the way through the Just Intonation Primer, although it is a very informative book and everyone should buy a copy and actually read it. Nor have I read Tuning, Timbre, Spectrum, Scale, alas.)
When we say “in tune,” what does that actually mean? On the one hand, we are talking about beating. You know that when you’re trying to tune two sound-generating objects playing the same note, there’s weird phasing and stuff that happens until you get it right: the beating sound you get when tuning a guitar. There’s also a sort of roughness you hear when you play two notes that are really close to each other, like a C and a C# together. Both of these things seem to have something to do with being in tune and both suggest possible approaches.
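Beating is easy to make concrete: by the sum-to-product identity, two close sine tones add up to a tone at their average frequency whose loudness wobbles at their difference frequency. A quick numeric check (plain Python, no audio; the function names are mine):

```python
import math

def beat_frequency(f1, f2):
    """Rate of the loudness wobble heard when f1 and f2 sound together."""
    return abs(f1 - f2)

def two_sines(f1, f2, t):
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

# Tuning against a 440 Hz reference: a string at 442 Hz beats twice a second.
print(beat_frequency(440.0, 442.0))  # 2.0

# Verify  sin(2*pi*f1*t) + sin(2*pi*f2*t)
#       = 2 * cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t)  at an arbitrary instant:
t = 0.123
lhs = two_sines(440.0, 442.0, t)
rhs = 2 * math.cos(math.pi * (440.0 - 442.0) * t) * math.sin(math.pi * (440.0 + 442.0) * t)
print(abs(lhs - rhs) < 1e-9)  # True
```

As the two frequencies converge, the cosine envelope slows to a stop, which is why the beating vanishes when you’re in tune.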
Just Intonation seems to be all about beating and zero crossings. Notes whose relationship can be described with a ratio of small whole numbers have less beating. This is because when the waveforms cross, it’s at the low-energy position, so they don’t interfere. 3/2 is thus very in tune. You can compute the amount of dissonance by adding the numerator to the denominator: lower numbers are more in tune.
Bill Sethares, though, likes ten-tone equal temperament (and writing songs in Klingon) and came up with some timbres that sound good in such a strange tuning. He’s got some math about dissonance curves. The roughness mentioned above has to do with how our ears work and with critical bandwidth. If we hear two tones that are close to each other in pitch, the ear hairs they stimulate overlap, so they interfere with each other and create roughness. We can take a timbre, see how much internal roughness it has, then transpose it a bit and measure the roughness of the original and the transposed version played at the same time. Do this a bunch of times and you get a curve. The minima on the curve are good scale degrees.
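That curve-building procedure can be sketched in Python. The constants below are the ones commonly quoted from Sethares’ published dissonance-measure code, and taking the quieter partial’s amplitude is one common weighting choice; treat this as an illustration under those assumptions, not his exact implementation (or mine in DissonanceCurve.sc).

```python
import math

def pair_roughness(f1, a1, f2, a2):
    """Roughness contributed by one pair of partials."""
    if f2 < f1:
        f1, a1, f2, a2 = f2, a2, f1, a1
    # critical bands are wider (in Hz) at high frequencies, so scale by the lower partial
    s = 0.24 / (0.021 * f1 + 19.0)
    d = f2 - f1
    return min(a1, a2) * (math.exp(-3.5 * s * d) - math.exp(-5.75 * s * d))

def total_dissonance(freqs, amps):
    """Sum the roughness over every pair of partials in a spectrum."""
    total = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            total += pair_roughness(freqs[i], amps[i], freqs[j], amps[j])
    return total

# A 6-partial harmonic tone against itself, transposed by a fifth and by
# (roughly) a tritone. Coinciding partials at the fifth contribute no roughness.
base = [440.0 * n for n in range(1, 7)]
amps = [0.9 ** n for n in range(6)]
fifth = total_dissonance(base + [f * 1.5 for f in base], amps + amps)
tritone = total_dissonance(base + [f * 1.406 for f in base], amps + amps)
print(fifth < tritone)  # True: the fifth sits in a dip of the curve
```

Sweeping the transposition over many intervals and recording `total_dissonance` at each one traces out the dissonance curve whose minima become scale degrees.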
Both of these approaches are perceptual, yet they seem to be in conflict. They use different parts of our perception: one is more about the critical band, the other more about amplitude and phase. So I wonder how to get them to work together. I can compute a dissonance curve that goes from 1 to 1200 cents, but if I do it from an FFT’s spectrogram, the data I’m putting in is inexact. It only knows 512 frequencies, each of them slightly blurred, and I’m using it with 1200 transpositions. Also, the transpositions are appropriately logarithmic, but the bins of the FFT are not; they’re linear. Should I do a similarly linear comparison and save myself a lot of unnecessary computation, or does it make sense to do it by cents? Since I know there are artifacts in the spectrogram, should I find the minima and then search for “good” tuning ratios near them? Should the internal dissonance in the sample change the approach that I use?
I ported Sethares’ code to SuperCollider. You can download a working draft of DissonanceCurve.sc, if you desire. It’s quite math-intensive for FFTs, but I have a SynthDef made up of SinOscs, which is easy to analyze, since all the frequencies and amplitudes are known and there aren’t many of them. The freqs are in f and the amps are in a:

f = [50] ++ ( [50/27, 18/7, 54/25, 25/27, 9/7, 27/25, 25/54, 9/14, 27/50] * 300);
a = [0.055, 0.1, 0.1, 0.1, 0.105, 0.105, 0.105, 0.11, 0.11, 0.11];

e = DissonanceCurve.new(f, a, 2);
e.scale.do({ |deg|
	postf("Interval % - Dissonance %\tRatio % / %\n",
		deg.interval, deg.dissonance, deg.numerator, deg.denominator);
});

Which prints

Interval 1 - Dissonance 0.93194694734913 Ratio 1 / 1 
Interval 1.1667536657322 - Dissonance 1.1967977845161 Ratio 7 / 6 
Interval 1.1905817347928 - Dissonance 1.1395899373121 Ratio 25 / 21 
Interval 1.3883134504798 - Dissonance 0.92933737113208 Ratio 118 / 85 
Interval 1.4405968618317 - Dissonance 0.95473900132736 Ratio 85 / 59 
Interval 1.6798510690642 - Dissonance 0.79165734602377 Ratio 42 / 25 
Interval 1.7141578884562 - Dissonance 0.80288033573481 Ratio 12 / 7 
Interval 2 - Dissonance 0.49046268655094 Ratio 2 / 1 

118/85 is not a ratio of small whole numbers, but it’s apparently less dissonant than 7/6, or even 85/59, or even the internal dissonance of the source sound! But if we look in the curve, we can find the ratios 1 cent distant on either side of 118/85:

Interval 1.3875117607442 - Dissonance 0.9386025721761 Ratio 111 / 80 
Interval 1.3883134504798 - Dissonance 0.929337371132 Ratio 118 / 85 
Interval 1.3891156034233 - Dissonance 0.92966297781753 Ratio 25 / 18 

25/18 is a much smaller ratio and a distance of 1 cent is not perceivable, so it’s probably a better number. But I am still slightly confused / unconvinced. Note also that sounds closer to 2/1 are, in general, less dissonant than sounds closer to 1/1, because of the nature of the algorithm and critical bandwidth. But in just intonation, an inversion is barely more or less dissonant than its non-inverted form.
Also, an issue: the width of the critical band changes in different frequency ranges and I think it might help to use the Bark scale or something in the Dissonance Curve code, but the math is, as yet, a bit beyond me.
For the purposes of showing off, here’s a silly example with FFTs, which is not at all real time. (Warning: this is slow!)

 b = Buffer.alloc(s,1024); // use global buffers for plotting the data
 c = BufferTool.open(s, "sounds/a11wlk01.wav"); 
 d = { FFT(b, PlayBuf.ar(1, c.bufnum, BufRateScale.kr(c.bufnum))); 0.0 }.play;

// when that's playing, evaluate the following

e = DissonanceCurve.newFromFFT(b, 1024, highInterval: 2, action: { arg dis;

	dis.scale.do({ |deg|
		postf("Interval % - Dissonance %\tRatio % / %\n",
			deg.interval, deg.dissonance, deg.numerator, deg.denominator);
	});
});

Go and get a snack while that’s going. Make a cup of tea. You won’t be able to do anything else with SuperCollider until it finishes, so leave some comments about tuning. How should I be trying to combine dissonances curves and Just Intonation?
(My result for the code above (timing matters) was:

Interval 1 - Dissonance 2.4284846123288 Ratio 1 / 1 
Interval 1.0346671040459 - Dissonance 2.9055490440413 Ratio 30 / 29 
Interval 1.0557976305092 - Dissonance 2.9396229209406 Ratio 19 / 18 
Interval 1.0588513011885 - Dissonance 2.9394283497832 Ratio 18 / 17 
Interval 1.0625273666152 - Dissonance 2.9404120579786 Ratio 17 / 16 
Interval 1.0767375682475 - Dissonance 2.9248076874065 Ratio 14 / 13 
Interval 1.1114938763335 - Dissonance 2.8528563216285 Ratio 10 / 9 
Interval 1.1250584846888 - Dissonance 2.8384180012931 Ratio 9 / 8 
Interval 1.1302693892732 - Dissonance 2.8422250578475 Ratio 26 / 23 
Interval 1.1335384537169 - Dissonance 2.8404168678269 Ratio 17 / 15 
Interval 1.1667536657322 - Dissonance 2.773742908553 Ratio 7 / 6 
Interval 1.2002486666653 - Dissonance 2.6741210623142 Ratio 6 / 5 
Interval 1.2497735102289 - Dissonance 2.5747254321313 Ratio 5 / 4 
Interval 1.2628354511916 - Dissonance 2.5907859768328 Ratio 24 / 19 
Interval 1.2664879348481 - Dissonance 2.5910058160679 Ratio 19 / 15 
Interval 1.2856518332381 - Dissonance 2.5784271202225 Ratio 9 / 7 
Interval 1.3332986770912 - Dissonance 2.4554262314412 Ratio 4 / 3 
Interval 1.3503499461682 - Dissonance 2.4863589672953 Ratio 27 / 20 
Interval 1.3573881591926 - Dissonance 2.4874055968135 Ratio 19 / 14 
Interval 1.3755418181397 - Dissonance 2.469278592016 Ratio 11 / 8 
Interval 1.3811148862791 - Dissonance 2.4674194148261 Ratio 29 / 21 
Interval 1.3843096285337 - Dissonance 2.4676720796587 Ratio 18 / 13 
Interval 1.3891156034233 - Dissonance 2.4680185402198 Ratio 25 / 18 
Interval 1.4003945316219 - Dissonance 2.4587789993728 Ratio 7 / 5 
Interval 1.4289941397411 - Dissonance 2.432946313225 Ratio 10 / 7 
Interval 1.5000389892858 - Dissonance 2.2587031579717 Ratio 3 / 2 
Interval 1.5262592089606 - Dissonance 2.3003029027958 Ratio 29 / 19 
Interval 1.529789693524 - Dissonance 2.299895874529 Ratio 26 / 17 
Interval 1.5333283446696 - Dissonance 2.2993307022943 Ratio 23 / 15 
Interval 1.555631119012 - Dissonance 2.2871143779032 Ratio 14 / 9 
Interval 1.5619338268699 - Dissonance 2.2878440907054 Ratio 25 / 16 
Interval 1.6002899594453 - Dissonance 2.2385589006945 Ratio 8 / 5 
Interval 1.6114208563635 - Dissonance 2.2381572659306 Ratio 29 / 18 
Interval 1.6188844330948 - Dissonance 2.236239217168 Ratio 34 / 21 
Interval 1.625443414535 - Dissonance 2.2331690259349 Ratio 13 / 8 
Interval 1.6663213678518 - Dissonance 2.1624083601251 Ratio 5 / 3 
Interval 1.68763159226 - Dissonance 2.1765673941027 Ratio 27 / 16 
Interval 1.7141578884562 - Dissonance 2.1648475763908 Ratio 12 / 7 
Interval 1.7501759894904 - Dissonance 2.1359651045669 Ratio 7 / 4 
Interval 1.8004197968362 - Dissonance 2.0659411117752 Ratio 9 / 5 
Interval 1.8340080864093 - Dissonance 2.0362153732133 Ratio 11 / 6 
Interval 2 - Dissonance 1.6432830079835 Ratio 2 / 1 

Yikes)
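The table above has the shape of a sensory-dissonance curve: roughness is high near the unison, and dips appear at simple ratios like 3/2 and 2/1. As a hedged sketch (this is not the code that produced those numbers, which was presumably SuperCollider, and the exact parameters there are unknown to me), here is Sethares' roughness model in Python, which yields curves with the same minima:

```python
import math

def pair_dissonance(f1, f2, a1, a2):
    """Roughness contribution of one pair of pure partials (Sethares 1993 parameters)."""
    if f2 < f1:
        f1, f2, a1, a2 = f2, f1, a2, a1
    s = 0.24 / (0.021 * f1 + 19.0)  # scales the point of maximum roughness
    df = f2 - f1
    return min(a1, a2) * (math.exp(-3.5 * s * df) - math.exp(-5.75 * s * df))

def dissonance(interval, base=261.63, n_partials=6):
    """Total roughness of two harmonic tones `interval` apart.
    Each tone has n_partials harmonics with 1/k amplitudes (an assumption)."""
    partials = [(base * k, 1.0 / k) for k in range(1, n_partials + 1)]
    partials += [(base * interval * k, 1.0 / k) for k in range(1, n_partials + 1)]
    total = 0.0
    for i in range(len(partials)):
        f1, a1 = partials[i]
        for j in range(i + 1, len(partials)):
            f2, a2 = partials[j]
            total += pair_dissonance(f1, f2, a1, a2)
    return total
```

With these assumed parameters the curve dips at the same simple ratios the table singles out, though the absolute numbers will not match the output above.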

Some Source Code

Yesterday, I posted some links to a SuperCollider class, BufferTool, and its helpfile. I thought maybe I should also post an example of using the class.
I wrote one piece in 2004, “Rush to Excuse,” that uses most of the features of the class. Program notes are posted at my podcast, and the code is below. It requires two audio files, geneva-rush.wav and limbaugh-dog.aiff; you will need to modify the source code to point at your local copies of those files.
The piece chops the dog file into evenly sized pieces and finds the average pitch of each. It also finds phrases in Rush Limbaugh’s speech and intersperses those phrases with the shorter grains. The piece is several years old, but I think it’s a good example to post because the code was all cleaned up for my MA thesis. Also, listening to it for the first time in a few years makes me feel really happy. Americans finally seem to agree that torture is bad! Yay! (Oh alas, that it was ever a conversation.)

(

 // first run this section

 var sdef, buf;
 
 
 // a callback function for loading pitches
  
 c = {

  var callback, array, count;

    array = g.grains;
    count = g.grains.size - 1;
     
    callback = { 
   
     var next, failed;
      
     failed = true;
      
     {failed == true}. while ({
    
      (count > 0 ). if ({
    
       count = count -1;
       next = array.at(count);
    
       (next.notNil).if ({
        next.findPitch(action: callback);
        failed = false;
       }, {
        // this is bad. 
        "failed".postln;   
        failed = true;
       });
      }, { 
       // we've run out of grains, so we must have succeeded
       failed = false;
       "pitch finding finished".postln;
      });
     });
    };
   
   };

 
 // buffers can take a callback function for when they finish loading
 // so when the buffer loads, we create the BufferTools and then
 // analyze them
  
   buf = Buffer.read(s, "sounds/pundits/limbaugh-dog.aiff", action: {
  
    g = BufferTool.grain(s, buf);
    h = BufferTool.grain(s, buf);
    "buffers read!".postln;
   
  g.calc_grains_num(600, g.dur);
    g.grains.last.findPitch(action: c.value);
    h.prepareWords(0.35, 8000, true, 4000);
 
   });
 
   i = BufferTool.open(s, "sounds/pundits/geneva-rush.wav");

  
   sdef = SynthDef(\marimba, {arg out=0, freq, dur, amp = 1, pan = 0;
  
   var ring, noiseEnv, noise, panner, totalEnv;
   noise = WhiteNoise.ar(1);
   noiseEnv = EnvGen.kr(Env.triangle(0.001, 1));
     ring = Ringz.ar(noise * noiseEnv, freq, dur*5, amp);
     totalEnv = EnvGen.kr(Env.linen(0.01, dur*5, 2, 1), doneAction:2);
     panner = Pan2.ar(ring * totalEnv * amp, pan, 1);
     Out.ar(out, panner);
    }).writeDefFile;
   sdef.load(s);
   sdef.send(s);

   SynthDescLib.global.read;  // pbinds and buffers act strangely if this line is omitted

)

// wait for: pitch finding finished

(

 // this section runs the piece

 var end_grains, doOwnCopy;
 
 end_grains = g.grains.copyRange(g.grains.size - 20, g.grains.size);
 

 // for some reason, Array.copyRange blows up
 // this is better anyway because it creates copies of
 // the array elements

 // also:  why not stress test the garbage collector?
 
 doOwnCopy = { arg arr, start = 0, end = 10, inc = 1;
 
  var new_arr, index;
  
  new_arr = [];
  index = start.ceil;
  end = end.floor;
  
  {(index < end) && (index < arr.size)}. while ({
  
   new_arr = new_arr.add(arr.at(index).copy);
   index = index + inc;
  });
  
  new_arr;
 };

 
 
 Pseq( [


  // The introduction just plays the pitches of the last 20 grains
  
  Pbind(
 
   \instrument, \marimba,
   \amp,   0.4,
   \pan,   0,
   
   \grain,   Pseq(end_grains, 1),
   
   [\freq, \dur],
      Pfunc({ arg event;
     
       var grain;
      
       grain = event.at(\grain);
       [ grain.pitch, grain.dur];
      })
  ),
  Pbind(
  
   \grain, Prout({
   
      var length, loop, num_words, loop_size, max, grains, filler, size,
       grain;
      
      length = 600;
      loop = 5;
      num_words = h.grains.size;
      loop_size = num_words / loop;
      filler = 0.6;
      size = (g.grains.size * filler).floor;
      
      
      grains = g.grains.reverse.copy;

      // then play it straight through with buffer and pitches

      {grains.size > 0} . while ({
        
       //"pop".postln;
       grain = grains.pop;
       (grain.notNil).if({
        grain.yield;
       });
      });
      
      
      loop.do ({ arg index;
      
       "looping".postln;
      

       // mix up some pitched even sizes grains with phrases

       max = ((index +2) * loop_size).floor;
       (max > num_words). if ({ max = num_words});
       
       grains = 
         //g.grains.scramble.copyRange(0, size) ++
         doOwnCopy.value(g.grains.scramble, 0, size) ++
         //h.grains.copyRange((index * loop_size).ceil, max);
         doOwnCopy.value(h.grains, (index * loop_size).ceil, max);
         
       

       // start calculating for the next pass through the loop
       
       length = (length / 1.5).floor;
       g.calc_grains_num(length, g.dur);
       g.grains.last.findPitch(action: c.value);
       
       grains = grains.scramble;
       

       // ok, play them
       
       {grains.size > 0} . while ({
        
        //"pop".postln;
        grain = grains.pop;
        (grain.notNil).if({
         grain.yield;
        });
       });
      });
      
      i.yield;
      "end".postln;
     }),
   [\bufnum, \dur, \grainDur, \startFrame, \freq, \instrument], 
    Pfunc({arg event;
    
     // oddly, i find it easier to figure out the grain in one step
     // and extract data from it in another step
     
     // this gets all the data you might need
    
     var grain, dur, pitch;
     
     grain = event.at(\grain);
     dur = grain.dur - 0.002;
     
     pitch = grain.pitch;
     
     (pitch == nil).if ({
      pitch = 0;
     });
     
     [
      grain.bufnum,
      dur,
      grain.dur,
      grain.startFrame,
      pitch,
      grain.synthDefName
     ];
     
    }),
      
      
   \amp,   0.6,
   \pan,   0,
   \xPan,   0,
   \yPan,   0,
   \rate,   1,
   
   
   \twoinsts, Pfunc({ arg event;
       
     // so how DO you play two different synths in a Pbind
     // step 1: figure out all the data you need for both
     // step 2: give that a synthDef that will get invoked no matter what
     // step 3: duplicate the event generated by the Pbind and tell it to play
       
       var evt, pitch;
       
       pitch = event.at(\freq);
       
       (pitch.notNil). if ({

        // the pitches below 20 Hz do cool things to the 
        // speakers, but they're not really pitches,
        // so screw 'em
        
        (pitch > 20). if ({
         evt = event.copy;
         evt.put(\instrument, \marimba);
         evt.put(\amp, 0.4);
         evt.play;
         true;
        }, {
         false;
        })
       }, {
        // don't let a nil pitch cause the Pbind to halt
        event.put(\freq, \rest);
        false;
       });
      })
        
  )      
      
       
 ], 1).play
)
 

This code is under a Creative Commons Share Music License

BufferTool

A while back, I wrote some code and put it in a class called BufferTool. It’s useful for granulation. Any number of BufferTools may point at a single Buffer. Each of them knows its own startFrame, endFrame and duration. Each one can also hold an array of other BufferTools which are subdivisions of itself. Each one may also know its own SynthDef for playback and its own amplitude. You can mix and match arrays of them.
You can give them rules for how to subdivide, such as a fixed duration for each grain, a range of allowable durations, or even an array of allowed durations. Or a BufferTool can detect pauses in itself and subdivide according to them. It can also calculate its own fundamental pitch.
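The class itself is SuperCollider, but the core idea, many lightweight regions sharing one buffer and subdividing themselves, is easy to sketch. Here is a hedged illustration in Python; `GrainRegion` and its method names are invented for illustration, not BufferTool's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class GrainRegion:
    """A BufferTool-like view: many regions may reference one shared buffer,
    each knowing its own start frame, end frame, and duration, and each able
    to hold child regions that subdivide it."""
    sample_rate: int
    start_frame: int
    end_frame: int
    children: list = field(default_factory=list)

    @property
    def dur(self) -> float:
        """Duration in seconds of just this region."""
        return (self.end_frame - self.start_frame) / self.sample_rate

    def split_even(self, n: int) -> list:
        """Subdivide into n evenly sized child regions, analogous to one of
        the subdivision rules described above."""
        step = (self.end_frame - self.start_frame) // n
        self.children = [
            GrainRegion(self.sample_rate,
                        self.start_frame + i * step,
                        self.start_frame + (i + 1) * step)
            for i in range(n)
        ]
        return self.children

# One second of audio at 44.1 kHz, split into four quarter-second grains:
second = GrainRegion(44100, 0, 44100)
grains = second.split_even(4)
```

Pitch detection and pause detection would hang off the same structure, which is what makes the real class convenient: every grain carries its own location and metadata, so arrays of them can be mixed and matched freely.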
I want to release this as a quark, but first I’d like it if some other people used it a bit. The class file is BufferTool.sc, and there’s a helpfile and a quark file.
Leave comments with feedback, if you’d like.

Manners

British people keep telling me I’m very polite. In fact, my girlfriend complained that I’m too polite. I keep thinking this would be a shocking revelation to people at home, who seem to have rather the opposite idea about me. I’ve come up with a few possible reasons for this change:
Dry humor. The British sense of humor involves a lot of sarcasm. Maybe they’re all saying this because it’s not true. Alas.
I’ve changed. Maybe with age I’ve gained a bit of tact and whatnot?
Cultural differences. Maybe Americans just have much higher standards than Brits. Also, I’m not exactly hanging out with the royal family. And, to be fair, it seems they would have even lower standards.
Gendered expectations. It’s possible that people expect a lot less from men than from women. Or perhaps what they expect is just different: I never conformed to the standard female model, but my laxish manners are good enough for blokes? I find this explanation both likely and annoying. Casual sexism is bad, people!
Anyway, none of this pondering matters to those of you who remember the good old days when I used to run around in my underwear while belching as loudly as possible. Ah, good times. I miss them.

The Swine Flu / The Economy

What’s your take on the media? Pick one.
The swine flu:

  1. is a distraction from the failures of capitalism, which are solvable with collective action.
  2. is proof that we’re all jonesing for the apocalypse.
  3. is the inevitable consequence of farming and/or slaughtering animals, which also accounts for diseases like bird flu and HIV.
  4. I caught the sniffles at the last tea-bagging rally I went to. Should I be worried?

Speaking of tea-bagging rallies: a commenter on my previous post suggested that it would be a waste of effort to try to connect with the people at these things because of massive disagreement on the issues. I think that the masses on the left and the right actually have quite a bit of populist rage in common: why are my taxes going to bankers?! What differs is largely our answers to that question and our ideas about how to fix it. There are those who will be swayed by a fascist argument. Many of those people, though, are not stupid, just misinformed. The fascist argument is the only one they’ve heard.
Some of the people at the tea rallies have a hard time giving coherent answers to journalists’ questions. Similarly, many people at the G20 rallies had trouble formulating coherent answers. Part of the reason is that the frames and assumptions behind the questions are meant to obscure rather than enlighten. They’re asking the wrong questions. Here’s the answer to the right question: we on the left and the right are all similarly angry that people with too much power and no accountability used all of our resources and wealth as play money in a giant game, and now we’re facing artificial shortages while the folks who caused it all get to keep the money they stole.
When the economy gets fucked, we’re in a precarious situation not just economically but also because of the danger of fascism. If all we want is to feel superior to the people being recruited to fascism, we’re doomed; that’s not a strategy. People will be radicalized by the economy. There will be a surge of power on the very far right. Lefty smugness enables fascists. If we want to limit this, we need to be talking to people about why our solutions are going to be better for them.
Anti-capitalists are actually correct, which ought to be a serious advantage. But our smugness is dressed-up classism, and we need to get over it, or we’ve got no answers, just more status quo.