Live blogging Live.Code.Festival: Benoit and the Mandelbrots by Mattias Schneiderbanger

Drop function – executed simultaneously for all 4 players.
They have done blank slate live coding in many environments. They also use live coding as a compositional method, so do some shows where they just use a code interface developed in rehearsals.
Delbrots and the Man also develop code live during rehearsals and use that as an interface for performance. They sync with the drummer via click track and send their chat window to him via a text-to-speech synthesiser.
If they want the audience to dance, they start with prepared stuff. They also try to think of the arc of the whole evening. In rehearsals, they would pick a random genre from ID3 tags.

More General Thoughts on Live Coding

Live code does not represent a score. Code consists of algorithms, which are specific, but a score is interpretable in different ways. Also, the text document generated by live coding is not an adequate artefact for repeating a performance.
Code allows for the de-hierarchicalisation of all musical parameters. Traditional composition focusses on pitch and duration, but improv allows focus on other parameters. Live coding emphasises this further.
Composition creates a text – an artefact designed to enable people to create sound. It is prepared and worked out. Live coding does not necessarily generate a written composition. However, in the 21st century, improv and composition are not binary oppositions, something which also applies to live coding.

Questions

Did they publish the silent movie with their sound track? Not yet, because they’re not sure about copyright.
What’s next for the Mandelbrots? Will they make a ton of recordings? Recordings do not change their approach. They only record their rehearsals.
Do they program differently when they’re recording? No, they’ve gotten used to just recording all their rehearsals.
Will they edit their recordings? Unsure.
Will an audience expect them to sound like their records? They can’t know yet.
Do they put performances online? They’ve done that twice. Once to Mexico.

Why aren’t there More Women Around Here?

Every so often, the topic of diversity comes up in electronic music. Women definitely make up less than 50% of participants – including in the forums where this topic is discussed. Since I’ve moved to the UK, I’ve seen a few email flurries where men argue about whether or not it’s a problem that there are so few women participating and, if so, what they can do about it. These arguments themselves are probably somewhat off-putting, as there are always at least a few vocal men who like being in a boys’ club and will argue that things are fine. Even if everybody started from a pro-diversity standpoint, I doubt it would be a particularly fun conversation for the few women who were on the list, lurking. This is why I think efforts like MzTech, Flossie, G-Hack and ETC are a good idea, despite all the places where they’re problematic (which is beyond the scope of this post).
Women-only events do seem to be how the UK is best able to cope with the massive gender gap in tech. This gap, by the way, gets more pronounced as the level of techiness rises. As far as I’ve been able to determine, there are fewer than five women who are regular SuperCollider users in the UK. This is absolutely a social problem. It seems to be the case that in Japan, women users are roughly equal in number to, or possibly greater than, men. Thus there is nothing inherently woman-unfriendly in the programme.
Meanwhile, in America, there is still a gap, but it seems less bad. I don’t have solid numbers, but I’ve seen women at American conferences and they make up a fair percentage of presenters. However, sexism is also very clearly apparent. How is it that women are participating in greater numbers in what seems like a more sexist environment?
Well, it might not be more sexist in the States. It might just be a more open form of sexism. Scientific American just ran an article about benevolent sexism. When sexism seems ‘friendly’, women are more likely to accept it. They gave a hypothetical example:

How might this play out in a day-to-day context? Imagine that there’s an anti-female policy being brought to a vote, like a regulation that would make it easier for local businesses to fire pregnant women once they find out that they are expecting. If you are collecting signatures for a petition or trying to gather women to protest this policy and those women were recently exposed to a group of men making comments about the policy in question, it would be significantly easier to gain their support and vote down the policy if the men were commenting that pregnant women should be fired because they were dumb for getting pregnant in the first place. However, if they instead happened to mention that women are much more compassionate than men and make better stay-at-home parents as a result, these remarks might actually lead these women to be less likely to fight an objectively sexist policy.

So it might not be that British culture (and British people) are less sexist than Americans. They’re just more polite. And the result of this politeness is not that women feel more empowered. Quite the contrary, in fact. Because the sexism is less in-your-face, it’s more effective and participation by women is thus lowered.
Indeed, if men who mean well are making a big deal about how rare it is for women to get involved in something, this can accidentally slide into benevolent sexism. Which leaves us in something of a bind. For those of us who are men and do want to increase participation by women, what can we do about it? I would argue that one step is vigilant moderation, where all sexism, benevolent or openly hostile, is banished from online discussion. And we can refuse to participate in all-male events or panels. Some effort should probably also be extended in this direction for collaborations, projects and musical groups . . . there is probably some size at which it becomes problematic if everyone involved is a man. The growing pool of G-Hack alumnae will hopefully become part of the larger scene. And hopefully more women on stage will empower the women in the audience to start producing. And hopefully those of us men who want to make a big deal about it (‘and they’re pretty too!’), will get the message that this is not the way forward.

Kronos Quartet at the Proms

I’ll start with the lows

I’ve been really grumpy about music lately and, at the start of this concert, my heart sank and I thought my grumpiness would continue. My friends and I got the promenade tickets for the arena area of the Royal Albert Hall (which is laid out somewhat like the Globe theatre, such that people stand around the stage). I had reasoned that string quartets were intimate, so it was better to be close. In fact, the acoustics of the hall are such that even standing not that far from the stage, the only sound I could hear was from the speakers. I might as well have been up way above, at least then freed of the burdensome expectations of non-amplified sounds.
The sound seemed slightly off the whole evening. At first, I thought the group lacked intensity, but they certainly looked intense. Somehow, it just wasn’t getting off the stage, lost somewhere in the compression of the audio signal. Lost in the tape backing they had for nearly every piece? Which (can we talk about this?) seemed to be really naff most of the time. There also seemed to be subtle timing issues throughout a lot of the concert and sometimes it just sort of felt like the seams were showing.
Kronos was my favourite string quartet for a long time, largely due to their distinctive bowing, but also due to their willingness to take risks, defy genre, etc. Unfortunately, this has become more and more gimmicky of late. One of their pieces, a BBC commission (so it’s not entirely their fault), had a Simon toy in it. The cellist would do a round of it and then play back the pitches in time, along with the other string players, who also copied it. Along with tape backing, of course. Some of which seemed to be samples of Radiophonic sounds. I thought I recognised a single bass twang of the Doctor Who theme and I hoped they would just play that rather than the piece they were actually slogging through.

The best bit

However, they also played Ben Johnston’s String Quartet No 4: Amazing Grace, which was the piece I was most looking forward to. I didn’t know the piece, but I know the composer. The piece’s setting is lush Americana – Copland-esque but in a twenty-first-century context. The piece has a lot of busy-ness in it. It’s Americana glimpsed through the windows of speeding trains and moving cars. America between Facebook posts. Constant distraction, the theme fragmented and subsumed in the texture of life. At one point, the violins and viola are busily creating their densely fragmented texture, while, barely audibly, the cellist plays the notes of Amazing Grace on the overtones of the highest parts of his strings. The notes of the melody become a metaphor for Grace itself. Something transcendental and beautiful is always going on, giving meaning to a jumbled whole, sometimes so subtly that it’s difficult to perceive. The occasional moments of thematic clarity thus reminded me of tragedy, as that’s when grace becomes most apparent.
It was really really beautiful and I teared up a bit.

The Good

Sofia Gubaidulina’s String Quartet No 4 was well played and my friend Irene especially considered it to be a highlight. It’s a very good piece, but I’m sure I’ve heard the work before and I think it came off a bit better in those previous performances.
I thought the Swedish folk song Tusen tankar was also a high point. The piece was short, unpretentious and well played.
In general, they seemed to warm up and get going over the course of the concert and if they had ended with the last piece on the program, I would have gone home and felt pretty happy about them, but then they played an encore.

The tape part

I like tape (by which I mean any fixed media, like CDs or whatever). I write tape music. I like it when ensembles play along with tape. Tape is great.
Tape music is also sound that doesn’t immediately come from an instrument. So if it’s playing really processed or artificial sounds, that’s perfect, because those sounds couldn’t easily come from an instrument. But when it’s just filling in for a backing band that nobody wanted to pay to hire…. it’s naff. It’s inexcusably naff.
If Kronos wanted to play an encore with a metal band or whatever, I would have thought it surprising and maybe slightly gimmicky. But they played an encore with a tape of a rock band. A tape that at one point got really loud with synchronised lights, while the quartet kept sawing away at an unchanging string accompaniment. At that point, they played backup to a tape and tried to make it seem ok with lighting tricks. A tape of a rock band, not any kind of acousmatic tape. A let’s-just-play-a-tape-it’s-cheaper.
The high point of the concert was fantastic, but the low point . . . I give them a mixed review overall.

Composer Control

I am writing this on my phone, so please pardon any typos.

I’ve just been to see a piece of music, which I won’t mention the name of here. It was an interesting idea, technically competent and well-rehearsed, but it fell a bit flat in performance. The best moment of it was a long pause in the middle. The conductor and performers froze and the audience held its breath, waiting. What would happen next? Was the piece over? Was it still going? I once had a composer tell me that pauses add drama and this was the first time I would agree with that pronouncement.

I had a look at the score afterwards and it had a bar of rest with a fermata over it (that means ‘hold this’) and an asterisk to a footnote that said to hold it much longer than seemed reasonable or necessary. Interestingly, and I would say not coincidentally, this did seem to be the only thing not precisely notated in the work. Everything else about the sound production had been pre-decided by the composer and the ensemble was carrying out his exacting instructions.

This does seem to be the dominant theme of 21st-century music composition. Composers seem to want complete control over musical output. Some, like Ferneyhough with his total complexity, approach this at an ironic distance. They intentionally overnotate in a way they know is unplayable, to produce a specific kind of stress in the performer. But more recently, the trend is to overnotate but remain playable, with the sincere intention of getting exact performances every time. Or, at least, to control what elements are exactly repeatable and treat the freer parts as one might treat a random generator or a Markov chain in a computer program.

I played very briefly in the Royal Improvising Orchestra in the Hague and I have very positive things to say about that experience and the other members of the group. However, the control thing was still evident and creeping in. They had borrowed from another group a very large set of hand signs, designed so the conductor could tell the supposedly improvising players what to play. Indeed, with those hand signals in use, it was no longer accurate to say that the players were improvising. Instead, the conductor was, and we were mechanisms for carrying out his musical will. Fortunately, that was only a small aspect of our performance practice. When we were doing this, we all took turns conducting, so we got a trade-off and still were improvisers, at least some of the time.

I mentioned above being treated as an aspect of a computer program and, indeed, I think that is the source of the current state of affairs. Many younger composers (I’m including myself in this group, so read “younger” as “under 50”) have become reliant on score notation programs and write music without being able to read it very well. With MIDI playback, it is possible to know what notes will sound like together even if you can’t read the chord or find the keys on the piano.

The major drawback of relying on MIDI renditions of our pieces is that they sound like MIDI – they are precise, robotic and unchanging. Pieces that are written to sound good for that kind of playback often don’t work very well with live ensembles. One solution to this dilemma seems to be to treat ensembles more like MIDI playback engines, rather than adapt our style of writing for real conditions. This is a failure of imagination.

Those who are pushing notation and musical ideas in new directions are not so naive as the above paragraph suggests, but we still have become accustomed to being able to control things very precisely. When I write a musical structure into a program, I know it will be followed exactly. When I want randomness, I have to specify it and parametrise it precisely as well. In the world of computer composition, adding randomness and flexibility is extra work.
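To make that concrete, here is a tiny, made-up SuperCollider example (not from any actual piece): even the ‘free’ element has to be spelled out as bounded random choices, while everything else repeats exactly by default.

 (
 // A hypothetical pattern: the randomness is only as free as I parametrise it.
 Pbind(
  \degree, Prand([0, 2, 4, 5, 7], inf),  // pitches drawn from an explicit set
  \dur, Pwhite(0.125, 0.5, inf),         // durations from a specified range
  \amp, Pseq([0.2, 0.4, 0.3], inf)       // dynamics repeat exactly
 ).play;
 )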

For humans, it’s the exactness that’s extra work, and it has faint rewards for audiences and for performers. It sucks the life out of pieces. It makes performing dull and overly controlled. It is an unconscious adoption of totalitarian work practices, informed and normalised by the methods of working required for human computer interaction. The fact that most professional ensembles barely schedule any rehearsal time does not help with this phenomenon, as they do not tend to spend the time required to successfully interpret a piece, so we seek to spell it out for them exactly.

Composers would do well to step back and imagine liberating their performers, rather than constraining them. We would also do well by learning to read scores. Computers are fine tools for writing, but could you imagine a playwright using text-to-speech tools in order to create a play? Imagine what that would do to theatre! I think that’s happening now to music.

But, as in today’s performance, the most magical moments in performance are the ones where performers are empowered. If you don’t think you can trust them, then you’ve picked the wrong performers or written the wrong piece. In the best musical performances, the emotional state of the performer is followed by the emotional state of the audience. Give them something worth following.

Engaging and Adjusting

The thing about negative feedback is that it’s extremely useful for knowing how to improve. (Mostly, not counting the guy who wondered if our mothers were proud (I’d like to think mine would be.)) And the topic that stands out most glaringly is audience engagement.
This is a long-standing problem for many groups, dating back to the start of the genre. Somebody left an anonymous comment on my last post comparing us to “geography teachers.” Scot Gresham-Lancaster wrote that The Hub was compared to air traffic controllers. Their solution was to project their chat window, something we’ve talked about, but never actually implemented. There are papers written about how the use of gestural controllers can bridge this gap, something we have implemented. But what projected chat, gestural control, and synthesised voice all have in common is hiding behind technology.
Thus far, we usually physically hide behind technology as well, sat behind tables, behind laptops, and do not tend to talk to the audience. However, not all of our gigs have been this way. When we played at the Sonic Picnic, we were standing and we had a better connection to the audience, I think because we were behind plinths, which are smaller and thus left us more exposed. At other concerts, we’ve talked to the audience and have even given them some control of our interface at certain events. This also helps.
Performers who have good posture and good engagement are not like that naturally; they practice it like all their other skills. A cellist in a conservatory practices in front of a mirror so ze can see how ze looks while ze plays and adjust accordingly.
Also, it turns out that it wasn’t just me that ‘crashed’ due to user error rather than technical failure. There are two solutions for this – one is to have a to-do list reminding the player what they need to do for every piece and to automate as much of that process as possible. The other is to go on stage more calm and focussed. When we were getting increasingly nervous waiting to be called on to perform, we could have been taking deep breaths, reassuring each other and finding a point of focus, which is what happens when gigs go really well. Alas, this is not what we did at all.
So, starting next week, we are practising in front of a ‘mirror’ (actually a video projection of ourselves, which we can also watch afterwards to talk about what went right and wrong). We are going to source tall, plinth-like portable tables to stand behind or next to. The composer of every piece will write a short two-sentence summary explaining it and, at future gigs, we’ll have microphones, such that whoever has the fastest changeover will announce the piece, say a bit about it and tell a few bad jokes like rock bands do between songs. We’re also going to take deep breaths before going on and have checklists to make sure we’re ready for stuff.
On the technical side, I’m going to change the networking code to broadcast to multiple ports, so if SuperCollider does crash and refuse to release the port, the user will not have to restart the computer, just the programme. Also, I’m hoping that 3.5.1 will have some increased stability on networking. My networked interactions tend to crash if left running for long periods of time, which is probably a memory management issue that I’ll attempt to find and fix, but in the meantime, we get everything but the networking running ahead of going on stage and then start the networking just before the piece and recompile it between pieces. To make the changeover faster, we’ve changed our practice such that whoever is ready to go first just starts and the other people catch up, which is something we also need to practise.
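For the curious, the multi-port idea is roughly this (a sketch only, not our actual networking code; the broadcast address and message path are made up): every message goes to a few successive ports, so a relaunched copy of sclang that couldn’t re-bind 57120 still hears the network.

 (
 // Sketch: send each message to several successive sclang ports.
 NetAddr.broadcastFlag = true;
 ~destinations = [57120, 57121, 57122].collect { |port|
  NetAddr("192.168.0.255", port)  // hypothetical LAN broadcast address
 };
 ~netSend = { |path ... args|
  ~destinations.do { |addr| addr.sendMsg(path, *args) };
 };
 ~netSend.value('/bile/chat', "hello");  // made-up message path
 )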
A pile of negative feedback, even if uncomfortable, is a tremendous opportunity for improvement. So our last gig was amazingly useful even if not amazingly fun.

Press Release

Download PDF

Birmingham’s first Network Music Festival, 27-29th January 2012.

For immediate release: 24th January 2012

Birmingham’s first Network Music Festival presents hi-tech music performances from local and international artists.

On 27-29th January 2012 the first Network Music Festival will showcase some of the most innovative UK and international artists using networking technology. Presenting a broad spectrum of work from laptop bands, to live coding, to online collaborative improvisation, to modified radio networks, audio-visual opera and iPhone battles, Network Music Festival will be a weekend of exciting performances, installations, talks and workshops showcasing over 70 artists!

Network Music Festival are working alongside local organisations Friction Arts, SOUNDkitchen, BEAST, Ort Cafe and The Old Print Works and PST/Kismet in order to bring this new and innovative festival to Birmingham.

With 20 performances, 5 installations, 5 talks and a 2-day workshop, Network Music Festival will be a vibrant and diverse festival presenting musical work where networking is central to the aesthetic, creation or performance practice. Acts include: live-coding laptop quartet Benoit and the Mandelbrots (Germany); algorithmic music duo Wrongheaded (UK); transatlantic network band Glitch Lich (UK/USA); and home-grown laptop bands BiLE (Birmingham Laptop Ensemble) and BEER (Birmingham Ensemble for Electroacoustic Research), as well as many more local, UK, European and international acts programmed from our OPEN CALL for performances, installations and talks.

If that’s not enough, we’ll be kicking off the festival early on Thursday 26th January with a pre-festival party programmed in collaboration with local sound-art collective SOUNDkitchen, showcasing some of Birmingham’s best electronic acts – Freecode, Juneau Brothers and Lash Frenzy – as well as one of SOUNDkitchen’s own sound installations.

There’s also an opportunity for you to get involved as we’re running a 2 day workshop on ‘Collaborative Live Coding Performance’ led by members of the first live coding band [PB_UP] (Powerbooks Unplugged).

“Birmingham has a reputation for being the birth place of new genres of music,” said festival organiser, Shelly Knotts. “We’re excited to be a part of this and to be bringing the relatively new genre of computer network based music to Brum. Some of these concerts are going to be epic!”

Tickets are available from www.brownpapertickets.com. Day and weekend passes available £5-£25. Workshop £20.

For more information visit our website: networkmusicfestival.org and follow us on twitter: @NetMusicFest. To tweet about the festival use the hashtag #NMF2012. We also have a facebook page: www.facebook.com/networkmusicfestival

Network Music Festival // 27-29th January 2012 // The Edge, 79-81 Cheapside, Birmingham, B12 0QH

Web:networkmusicfestival.org

Twitter: @NetMusicFest Hashtag: #NMF2012

Facebook: www.facebook.com/networkmusicfestival

Email: networkmusicfestival@gmail.com

On Friday there will be a sneak preview of an excerpt from Act 2 of the Death of Stockhausen, the world’s first ‘laptopera.’

Some Ideas

The music of 40 years ago is more innovative, challenging and interesting than almost anything produced in the last decade. Like all of life, we have forgotten ideas and become focussed on technology. The future, as we see it, is an indefinite sameness differing only by having shinier new gadgets.
Increasingly, the trend in electronic music performance is to see the player as an extension of the machine. Our tools are lifeless, sterile and largely pre-determined, and thus so are we. We are becoming automatons in music and in life. Young composers, instead of challenging this narrowing of horizons, are conforming to it. We are hopelessly square.
In order to look forwards, we must first look backwards, to a time when people believed change was possible.
Any social model maps relatively easily to a music model. Self-actualised individuals, to take an example, are improvisors who do not listen to each other. Humans as agency-lacking machines are drones, together performing the same musical task, like an orchestra, but robbed of diversity and subtlety. If the model does not work musically, it will not work socially and vice versa. The state of our music is the state of our imagination, the state of our soul and the state of our future.
A better world is possible, and we can begin to compose it.

Kinect and OSC Human Interface Devices

To make up for the boring title of this post, let’s start off with a video:

XYZ with Kinect, a video by celesteh on Flickr.

This is a sneak preview of the system I wrote to play XYZ by Shelly Knotts. Her score calls for every player to make a drone that’s controllable by the x, y and z parameters of a gestural controller. For my controller, I’m using a Kinect.

I’m using a little C++ program based on OpenNI and NITE to find my hand position and then sending out OSC messages with those coordinates. I’ve written a class for OSCHIDs in SuperCollider, which will automatically scale the values for me, based on the largest and smallest inputs it’s seen so far. In an actual performance, I would need to calibrate it by waving my arms around a bit before starting to play.

You can see that I’m selecting myself in a drop-down menu as I start using those x, y and z values. If this had been a real performance, other players’ names would have been there also, and there is a mechanism wherein we duel for control of each other’s sounds!

We’re doing a sneak preview of this piece on campus on Wednesday, which I’m not allowed to invite the public to (something about fire regulations), but the proper premiere will be at NIME in Oslo, on Tuesday 31st May @ 9.00pm at Chateau Neuf (street address: Slemdalsveien 15). More information about the performance is available via BiLE’s blog.

The SuperCollider Code

I’ve blogged about this earlier, but have since updated WiiOSCClient.sc to be more immediately useful to people working with TouchOSC or OSCeleton or other weird OSC devices. I’ve also generated several helpfiles!
OSCHID allows one to describe single OSC devices and define “slots” for them.
Those are called OscSlots and are meant to be quite a lot like GeneralHIDSlots, except that OSCHIDs and their slots do not call actions while they are calibrating.
The OSC WiiMote class that uses DarWiinRemote OSC is still called WiiOSCClient and, as far as I recall, has not changed its API since I last posted.
Note that, except for people using smart devices like iPhones or whatever, OSC HIDs require helper apps to actually talk to the WiiMote or the Kinect. Speaking of which…

The Kinect Code

Compiling / Installing

This code is, frankly, a complete mess and this should be considered pre-alpha. I’m only sharing it because I’m hoping somebody knows how to add support to change the tilt or how to package this as a proper Mac Application. And because I like to share. As far as I know, this code should be cross-platform, but I make no promises at all.
First, there are dependencies. You have to install a lot of crap: SensorKinect, OpenNI and NITE. Find instructions here or here.
Then you need to install the OSC library. Everybody normally uses oscpack because it’s easy and stuff… except it was segfaulting for me, so bugger that. Go install libOSC++.
Ok, now you can download my source code: OscHand.zip. (Isn’t that a clever name? Anyway…) Go to your NITE folder and look for a subfolder called Samples. You need to put this into that folder. Then, go to the terminal and get into the directory and type: make. God willing and the floodwaters don’t rise, it should compile and put an executable file into the ../Bin directory.
You need to invoke the program from the terminal, so cd over to Bin and type ./OscHand and it should work.

Using

This program needs an XML file which is lurking a few directories below in ../../Data/Sample-Tracking.xml. If you leave everything where it is in Bin, you don’t need to specify anything, but if you want to move stuff around, you need to provide the path to this XML file as the first argument on the command line.
The program generates some OSC messages, which are /hand/x, /hand/y and /hand/z, all of which are followed by a single floating point number. It does not bundle things together because I couldn’t get oscpack to work, so this is what it is. By default, it sends these to port 57120, because that is the port I most want to use. Theoretically, if you give it a -p followed by a number for the second and third arguments, it will send to the port that you want. Because I have not made this as lovely as possible, you MUST specify the XML file path before you specify the port number. (As this is an easy fix, it’s high on my todo list, but it’s not happening this week.)
There are some keyboard options you can do in the window while the program is running. Typing s turns smoothing on or off. Unless you’re doing very small gestures, you probably want smoothing on.
If you want to adjust the tilt, you’re SOL, as I have been unable to solve this problem. If you also download libfreenect, you can write a little program to aim the thing, which you will then have to quit before you can use this program. Which is just awesome. There are some Processing sketches which can also be used for aiming.
You should be able to figure out how to use this in SuperCollider with the classes above, but here’s a wee bit of example code to get you started:




 // describe the Kinect hand-tracker as an OSC HID with three slots
 k = OSCHID.new.spec_((
  ax: OscSlot(relative, '/hand/x'),
  ay: OscSlot(relative, '/hand/y'),
  az: OscSlot(relative, '/hand/z')
  ));

 // wave your arms a bit to calibrate

 k.calibrate = false;

 // act on incoming x values (the \ax slot defined above)
 k.setAction(\ax, { |val| val.value.postln });

And more teaser

You can see the GUIs of a few other BiLE Tools in the video at the top, including the Chat client and a shared stopwatch. There’s also a network API. I’m going to do a big code release in the fall, so stay tuned.

Strategies for using tuba in live solo computer music

I had the idea of live sampling my tuba for an upcoming gig. I’ve had this idea before but never used it, due to two major factors. The first is the difficulty of controlling a computer and a tuba at the same time. One obvious solution is foot pedals, which I’ve yet to explore, and the other idea is a one-handed, freely moving controller such as the Wiimote.
The other major issue with doing tuba live-sampling is sound quality. Most dynamic mics (including the SM57, which is the mic I own) make a tuba sound like either a bass kazoo or a disturbingly flatulent sound. I did some tests with the Zoom H4 positioned inside the bell and it appeared to sound ok, so I was going to do my gig this way and started working on my chops.
Unfortunately, the sound quality turns out not to be consistent. The mic is prone to distortion even when it seems not to be peaking. Low frequencies are especially likely to contain distortion or a rattle which seems to be caused by the mic itself vibrating from the tuba.
There are a few possible workarounds. One is to embrace the distortion as an aesthetic choice and possibly emphasise it through the use of further distortion fx such as clipping, dropping the bit rate or ring modulation. I did a trial of ring modulating a recorded buffer with another part of the same buffer. This was not successful, as it created a sound lurking around the uncanny valley of bad brass sounds; however, a more regular waveform may work better.
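For reference, that trial was along these lines (a rough sketch, reconstructed from memory rather than the actual code; the file name and offset are placeholders): play the buffer against a time-shifted copy of itself and multiply the two.

 (
 // Ring modulate a recorded buffer with a time-shifted copy of itself.
 b = Buffer.read(s, "~/tuba-sample.wav".standardizePath);  // hypothetical recording

 SynthDef(\bufRingMod, { |out = 0, bufnum, offset = 2.5, amp = 0.5|
  var a, c;
  a = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum), loop: 1);
  c = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum),
   startPos: offset * BufSampleRate.kr(bufnum), loop: 1);
  Out.ar(out, (a * c * amp).dup);  // multiplying the two signals is the ring mod
 }).add;
 )

 x = Synth(\bufRingMod, [\bufnum, b]);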
At the SuperCollider symposium at Wesleyan, I saw a tubist (I seem to recall it was Sam Pluta, but I could be mistaken) deliberately sampling tuba-based rattle. The performer put a cardboard box over the bell of the tuba. Attached to the box was a piezo buzzer in a plastic encasing. The composer put a ball bearing inside the plastic enclosure and attached it to the cardboard box. The vibration of the tuba shook the box which rattled the bearing. The piezo element recorded the bearing’s rattle, which roughly followed the amplitude of the tuba, along with other factors. I thought this was a very interesting way to record a sound caused by the tuba rather than the tuba itself.
Similarly, one could use the tuba signal for feature extraction, recognising that errors in miccing the tuba will be correlated with errors in the feature extraction. Two obvious things to attempt to extract are pitch and amplitude, the latter being somewhat more error-resistant. I’ve described before an algorithm for time-domain frequency detection for tuba. As this method relies on RMS, it also calculates amplitude. Other interesting features may be findable via FFT-based analysis, such as onset detection or spectral centroid, etc., using the MCLD UGens. These features could be used to control the playing of pre-prepared sounds or live software synthesis. I have not yet experimented with this method.
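If I do try the feature-extraction route, a first pass might look something like this (a sketch using standard analysis UGens rather than my time-domain method; the bus names and OSC path are illustrative only):

 (
 // Track amplitude, pitch and onsets from a live mic input.
 ~ampBus = Bus.control(s);
 ~freqBus = Bus.control(s);

 SynthDef(\tubaFeatures, { |in = 0, ampBus, freqBus|
  var sig, amp, freq, hasFreq, chain, onset;
  sig = SoundIn.ar(in);
  amp = Amplitude.kr(sig, 0.01, 0.1);            // amplitude follower
  # freq, hasFreq = Pitch.kr(sig, minFreq: 30);  // pitch tracking down into tuba range
  chain = FFT(LocalBuf(1024), sig);
  onset = Onsets.kr(chain);
  SendReply.kr(onset, '/tuba/onset');            // tell the language about each attack
  Out.kr(ampBus, amp);
  Out.kr(freqBus, freq);
 }).add;

 OSCdef(\tubaOnset, { "attack".postln }, '/tuba/onset');
 )

 x = Synth(\tubaFeatures, [\ampBus, ~ampBus, \freqBus, ~freqBus]);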
Of course, a very obvious solution is to buy a better microphone. It may also be that the poor sound quality stemmed from my speakers, which are a bit small for low frequencies. The advantages of exploring other approaches include cost (although a tuba is not usually cheap either) and that cheaper solutions are often more durable, or at least I’d be more willing to take cheaper gear to bar gigs (see previous note about tuba cost). As I have an interest in playing in bars and making my music accessible through ‘gigability,’ a bar-ready solution is most appealing.
Finally, the last obvious solution is to not interact with the tuba’s sounds at all, thus creating a piece for tuba and tape. This has less that can go wrong, but it loses quite a lot of spontaneity and requires a great deal of advance preparation. A related possibility is to have the tubist control real-time processes via the Wiimote or another controller. This would also require a great deal of advance preparation – making the Wiimote into its own instrument requires the performer to learn to play it and the tuba at the same time, which is rather a lot to ask, especially for an avant-garde tubist who is already dealing with more performance parameters (such as voice, etc) than a typical tubist. This approach also abandons the dream of a computer-extended tuba and loses whatever possibilities for integration exist with more interactive methods. However, a controller that can somehow be integrated into the act of tuba playing may work quite well. This could include sensors mounted directly on the horn – for example, something squeezable in a convenient location, extra buttons near the valves, etc.
I’m bummed that I won’t be playing tuba on Thursday, but I will have something that’s 20 minutes long and involves tuba by September.

First BiLE Performance

BiLE, the Birmingham Laptop Ensemble, had its first gig on Thursday, just six or eight weeks after being formed. We played at the Hare and Hounds in Birmingham, which is a well-known venue for rock bands, as a part of the Sound Kitchen series. There were two pieces on the bill: one called 15 Minutes for BiLE by BiLE member Jorge Garcia Moncada, and a cover of Stucknote by Scot Gresham-Lancaster, a piece originally played by The Hub.
As a first performance, I thought it went rather well. There were the usual issues where everything sounds completely different on stage and the few minutes of sound checking does not give anybody enough time to get used to the monitor speakers. And time moves completely differently in front of an audience, where suddenly every minute gets much longer. But there were also the performing-with-a-computer issues: computers get terrible stage fright and are much more prone to crash. A few people did have their sound engines crash, so the first piece had a high-pitched squeal for a few minutes, while messages flew on the chat window, reminding people to be quiet during the quiet parts. Actually, there was quite a lot of panic in the chat window and I wish I’d kept a log of it. (Later the audience said we all looked panicked from time to time. I always look panicked on stage, but it’s not cool.) In the second piece, I forgot to tell my programme to commence sound-making for about the first three minutes. I haven’t heard the recording yet, but I bet things sounded ok. Considering that most of us had never done live laptop performance at all before and how quickly we went from our first planning meeting to our first gig, I think we got a good result.
Jorge’s piece was complicated but Stucknote seems deceptively simple, so we did not try running through it until the day before the gig. In retrospect, this was clearly an error, because the piece, like all structured improvisation, does require some practice to get the flow down. Of course, we’d all spent the requisite time working on our sound generation and I’d coded up some faders for me and the other SuperCollider user, with Ron Kuivila’s Conductor quark, which is a very quick and dirty way of making useful GUIs. I’d tried out my part at home and it worked well and the sound I got was interesting, so I felt confident in it until I got to the practice and it crashed very quickly. I restarted SuperCollider and it crashed again. And again. And again. Half the time, it brought down the other SC user’s computer also. And it was clobbering the network, causing the Max users a bunch of error messages and a few moments of network congestion. Max, usefully, just throws away network messages when there are too many of them, whereas SC does not seem to.
I could not figure out where the bug was and so, after the practice, I sat down to sort it out. And there was no sign of it. Everything was fine again.
Fortunately, this provided enough of a clue that I was able to figure out that I had created an infinite loop between the two SuperCollider programmes. When I moved a slider in the GUI, that sent a message to the network, which affected the sound on the target machine and also caused Shelly’s programme to update the GUI. However, the Conductor class always informs listeners when it’s updated, no matter who updated it or how, so it sent a message back to the network informing everybody of its new value, which caused my GUI to update, which sent a message to the network, ad infinitum, until I crashed.
I came up with a fix using a flag and a semaphore:

 Task({
  semaphore.wait;
  // Turn off the Conductor's action while applying a network update,
  // so setting the value here does not get re-broadcast to the network.
  should_call_action = false;
  cv = con[contag];
  cv.input = input;
  should_call_action = true;
  semaphore.signal;
 }).play;

While this fix mostly works, it does bring up some interesting questions about data management across this kind of network. If we’re all updating the data at once, is there a master copy of it somewhere? Who owns the master copy if one exists? In this case, as one person is making sound from it, that person would seem to be the owner of the data. But what if we were all sharing and using the sliders? Then we all own it and may all have different ideas of what it might actually be.
I’m writing a class for managing shared resources which holds a value and notifies listeners when it changes. The object that’s changing it passes itself along to the method, so when listeners are notified, the changer is not. I haven’t finished the class yet, so I don’t have sample code, but I’m pondering some related issues.
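To give a sense of the shape I have in mind, here is a minimal sketch (not the finished class; the class and method names are placeholders): a value holder that notifies every listener except whoever made the change.

 // SharedValue.sc – a value that notifies every listener except the changer.
 SharedValue {
  var <value, listeners;

  *new { |initialValue| ^super.new.init(initialValue) }

  init { |initialValue|
   value = initialValue;
   listeners = IdentitySet.new;
  }

  addListener { |listener| listeners.add(listener) }

  // 'changer' identifies who is setting the value, so the notification
  // doesn't echo back to them and start an infinite loop.
  set { |newValue, changer|
   value = newValue;
   listeners.do { |l|
    if(l !== changer) { l.update(this, newValue) };
   };
  }
 }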
Like, should there be a client version of this class for a local copy held on the local machine and a master version for the canonical copy on the network that everybody else is updating? Should a master copy of some data advertise itself on the network via the API and automatically listen for updates? Should it specify a way to scale values, so it can also accept changed inputs from 0-1 and scale them appropriately? If it does accept inputs/values in a specified range, should there be a switch for the clients to automagically build a GUI containing sliders for every master variable on the network? I think that would be quite cool, but I may not have time to code it soon, as our next gig, where we’ll be playing a piece of mine, is coming up very soon on the 29th of April, and then there’s a gig in May and then I suspect probably one in June and one in July (although not scheduled yet) and in August, we’re going to NIME in Oslo, which is very exciting. Bright days ahead.