Dissertation: BiLE Networking White Paper

This document describes the networking
infrastructure in use by BiLE.

The goal of the infrastructure design has been flexibility: real-time changes in shared network data and remote method invocation for users of flexible languages like SuperCollider. While some of this flexibility is lost on users of less flexible languages like MAX, they can nevertheless benefit from having a structure for data sharing.


Network Models


If there is a good reason, for example a remote user, we support OSCGroups as a means of sharing data.

If all users are located together on
the same subnet, then we use broadcast on port 57120.
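
For SuperCollider users, the local-network model amounts to a few lines. A minimal sketch (the broadcast address shown is the usual catch-all and may need to match your subnet):

 // enable sending to the broadcast address, then target the whole subnet
 NetAddr.broadcastFlag = true;
 ~bile = NetAddr("255.255.255.255", 57120);
 ~bile.sendMsg('/bile/msg', "Nick", "hello"); // e.g. the chat message defined below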

OSC Prefix


By convention, all OSC messages start with ‘/bile/’.

Data Restrictions


Strings must all be ASCII. Non-ASCII characters will be ignored.

Establishing Communication

Identity
ID
Upon joining the network, users
should announce their identity:

/bile/API/ID nickname ipaddress port

Nicknames must be ASCII-only.

Example:

/bile/API/ID Nick 192.168.1.66 57120

Note that because broadcast
echoes back, users may see their own ID arrive as an announcement.
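
As a sketch in SuperCollider (using the example identity above and the ~bile broadcast NetAddr from the earlier sketch):

 // announce ourselves on joining
 ~bile.sendMsg('/bile/API/ID', "Nick", "192.168.1.66", 57120);

 // collect announcements, including the echo of our own
 OSCdef(\bileID, {|msg|
  var nickname = msg[1], ip = msg[2], port = msg[3];
  ("on network:" + nickname).postln;
 }, '/bile/API/ID');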

IDQuery

Users should also send out their
ID in response to an IDQuery:

/bile/API/IDQuery

Users can send this message at
any time, in order to compile a list of everyone on the network.
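
A corresponding responder might look like this sketch:

 // answer IDQueries with our own ID
 OSCdef(\bileIDQuery, {
  ~bile.sendMsg('/bile/API/ID', "Nick", "192.168.1.66", 57120);
 }, '/bile/API/IDQuery');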

API Query
Users can enquire what methods
they can remotely invoke and what data they can request.

/bile/API/Query

In reply to this, users should send /bile/API/Key and /bile/API/Shared (see below).

Key
Keys represent remote methods. The user should report their accessible methods in response to a Query.

/bile/API/Key symbol desc nickname

The symbol is an OSC message that the user is listening for. The desc is a text-based description of what this message does; it should include a usage example. The nickname is the name of the user that accepts this message.

Example

/bile/API/Key /bile/msg "For chatting. Usage: msg, nick, text" Nick
Shared
Shared represents available data streams. Sources may include input devices, control data sent to running audio processes, or analysis. The user should report their shared data in response to a Query.
/bile/API/Shared symbol desc
The symbol is an OSC message that the user sends with. The format of this should be

/bile/nickname/symbol

The desc is a text-based description of the data. If the range is not between 0 and 1, it should mention this. The nickname is the name of the user that sends this data.

Example
/bile/API/Shared /bile/Nick/freq "Frequency. Not scaled."
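
A single Query responder can send both replies. A sketch, advertising the example method and stream above:

 // on receiving a Query, advertise our methods and data streams
 OSCdef(\bileQuery, {
  ~bile.sendMsg('/bile/API/Key', '/bile/msg',
   "For chatting. Usage: msg, nick, text", "Nick");
  ~bile.sendMsg('/bile/API/Shared', '/bile/Nick/freq', "Frequency. Not scaled.");
 }, '/bile/API/Query');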

Listening
RegisterListener
Shared data will not be sent out if no one has requested
it and it may be sent either directly to interested users or to the
entire group, at the sender’s discretion. In order to ensure
receiving the data stream, a user must register as a listener.
/bile/API/registerListener symbol nickname ip port
The symbol is an OSC message that the user will be listening for. It should correspond with a previously advertised shared item. If the receiver of this message recognises their own nickname in the symbol (which is formatted /bile/nickname/symbol) but is not sharing that data, they should return an error:

/bile/API/Error/noSuchSymbol

The nickname is the name of the
user that will accept the symbol as a message.
The ip is the ip address of the
user that will accept the symbol as a message.
The port is the port of the
user that will accept the symbol as a message.
Example
/bile/API/registerListener /bile/Nick/freq Shelly 192.168.1.67 57120
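
On the listener's side, this exchange might be sketched as (example values as above):

 // ask to be sent Nick's frequency stream, then listen for it
 ~bile.sendMsg('/bile/API/registerListener',
  '/bile/Nick/freq', "Shelly", "192.168.1.67", 57120);

 OSCdef(\nickFreq, {|msg|
  var freq = msg[1];
  ("freq:" + freq).postln; // use freq (nominally 0-1 unless advertised otherwise)
 }, '/bile/Nick/freq');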

Error

noSuchSymbol
In the case that a user receives a request to register a
listener or to remove a listener for data that they are not sharing,
they can reply with

/bile/API/Error/noSuchSymbol OSCsymbol
The symbol is an OSC message
that the user tried to start or stop listening to. It is formatted
/bile/nickname/symbol.
Users should not reply with an error unless they recognise their own
nickname as the middle element of the OSC message. This message may
be sent directly to the confused user.

Example

/bile/API/Error/noSuchSymbol /bile/Nick/freq
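
The sharer's side of this bookkeeping might be sketched as follows; the ~listeners dictionary and the hard-coded nickname are illustrative assumptions:

 // track who listens to what; error if asked for something we don't share
 ~listeners = Dictionary['/bile/Nick/freq' -> Dictionary.new];

 OSCdef(\bileRegister, {|msg, time, addr|
  var symbol = msg[1].asSymbol;
  if(symbol.asString.split($/)[2] == "Nick", {
   if(~listeners.includesKey(symbol), {
    // msg[2] is the nickname, msg[3] the ip, msg[4] the port
    ~listeners[symbol].put(msg[2], NetAddr(msg[3].asString, msg[4].asInteger));
   }, {
    addr.sendMsg('/bile/API/Error/noSuchSymbol', symbol);
   });
  });
 }, '/bile/API/registerListener');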
De-listening
RemoveListener
To announce an intention to ignore subsequent data, a
user can ask to be removed.
/bile/API/removeListener symbol nickname ip
The symbol is an OSC message that the user will no longer be listening for. If the receiver of this message sees their own nickname in the symbol (which is formatted /bile/nickname/symbol) but is not sharing that data, they can reply with /bile/API/Error/noSuchSymbol symbol.
The nickname is the name of the
user that will no longer accept the symbol as a message.
The ip is the ip address of the
user that will no longer accept the symbol as a message.
Example
/bile/API/removeListener /bile/Nick/freq Shelly 192.168.1.67
RemoveAll

Users who are quitting the network can ask to be removed from everything that they were listening to.
/bile/API/removeAll nickname ip

The nickname is the name of the
user that will no longer accept any shared data.
The ip is the ip address of the
user that will no longer accept any shared data.
Example
/bile/API/removeAll Nick 192.168.1.66
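
A matching sketch for departures, assuming the ~listeners dictionary from the earlier sketch:

 // forget a departing user everywhere
 OSCdef(\bileRemoveAll, {|msg|
  ~listeners.do({|listenersForSymbol|
   listenersForSymbol.removeAt(msg[1]);
  });
 }, '/bile/API/removeAll');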

Commonly Used Messages

Chatting
Msg
This is used for chatting.
/bile/msg nickname text
The nickname is the name of the
user who is sending the message.
The text is the text that the
user wishes to send to the group.
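
A minimal send-and-display pair, as a sketch:

 // send a line of chat and post incoming chat in the post window
 ~bile.sendMsg('/bile/msg', "Nick", "hello all");

 OSCdef(\bileChat, {|msg|
  (msg[1].asString ++ "> " ++ msg[2].asString).postln;
 }, '/bile/msg');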

Clock
This is for a shared stopwatch and not for serious timing applications.
Clock start or stop

/bile/clock/clock symbol
The symbol is either start or
stop.
Reset

Reset the clock to zero.
/bile/clock/reset
Set
Set the clock time.

/bile/clock/set minutes seconds
Minutes is the number of minutes past zero.

Seconds is the number of seconds past zero.
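
Driving the stopwatch, as a sketch using the messages above:

 // start the shared stopwatch, jump it to 2:30, then stop and reset it
 ~bile.sendMsg('/bile/clock/clock', \start);
 ~bile.sendMsg('/bile/clock/set', 2, 30);
 ~bile.sendMsg('/bile/clock/clock', \stop);
 ~bile.sendMsg('/bile/clock/reset');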


Proposed Additions

Because users can silently join, leave and re-join the network, it could be a good idea to have users time out after a period of silence, perhaps around 30 seconds. To stay active, they would need to send I’m-still-here messages.
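
A hypothetical sketch of such a keep-alive (none of this is implemented; the 10-second interval is arbitrary and ~bile is the broadcast NetAddr from earlier):

 // re-announce our ID periodically so peers know we are still here
 ~heartbeat = Routine({
  loop({
   ~bile.sendMsg('/bile/API/ID', "Nick", "192.168.1.66", 57120);
   10.wait;
  });
 }).play;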

There should possibly also be a way for a user to announce that they have just arrived. For example, if a SuperCollider user recompiles, her connection will think of itself as new, and other users will need to delete or recreate connections that depend on that user.

Dissertation Draft: BLE Tech

In January 2011, five of my colleagues in BEAST and I founded BiLE, the Birmingham Laptop Ensemble. All of the founding members are electroacoustic composers, most of whom have at least some experience with an audio programming language, either SuperCollider or MAX. We decided that our sound would be strongest if every player took responsibility for their own sound and did his or her own audio programming. This is similar to the model used by the Huddersfield Experimental Laptop Orchestra (HELO), who describe their approach as a “Do-It-Yourself (DIY) laptop instrument design paradigm.” (Hewitt p 1 http://helo.ablelemon.co.uk/lib/exe/fetch.php/materials/helo-laptop-ensemble-incubator.pdf) Hewitt et al write that they “[embrace] a lack of hardware uniformity as a strength” and imply their software diversity is similarly a strength, granting them greater musical (rather than technical) focus. (ibid) BiLE started with similar goals – focus on the music and empower the user – and has had similar positive results.

My inspiration, however, was largely drawn from The Hub, the first laptop band, some members of which were my teachers at Mills College in Oakland California. I saw them perform in the mid 1990s, while I was still an undergrad and had an opportunity then to speak with them about their music. I remember John Bischoff telling me that they did their own sound creation patches, although for complicated network infrastructure, like the Points of Presence Concert in 1987, Chris Brown wrote the networking code. (Cite comments from class?)

One of the first pieces in BiLE’s repertoire was a Hub piece, Stucknote by Scot Gresham-Lancaster. This piece not only requires every user to create their own sound, but also has several network interactions, including a shared stopwatch, chat messages and the sharing of gestural data for every sound. In Bischoff and Brown’s paper, the score for Stucknote is described as follows:

“Stuck Note” was designed to be easy to implement for everyone, and became a favorite of the late Hub repertoire. The basic idea was that every player can only play one “note”, meaning one continuous sound, at a time. There are only two allowable controls for changing that sound as it plays: a volume control, and an “x-factor”, which is a controller that in some way changes the timbral character or continuity of the instrument. Every player’s two controls are always available to be played remotely by any other player in the group. Players would send streams of MIDI controller messages through the hub to other players’ computer synthesizers, taking over their sounds with two simple control streams. Like in “Wheelies”, this created an ensemble situation in which all players are together shaping the whole sound of the group. An interesting social and sonic situation developed when more than one player would contest over the same controller, resulting in rapid fluctuations between the values of parameters sent by each. The sound of “Stuck Note” was a large complex drone that evolved gradually, even though it was woven from individual strands of sound that might be changing in character very rapidly. (http://crossfade.walkerart.org/brownbischoff/hub_texts/stucknote.html)

Because BiLE was a mostly inexperienced group, even the “easy to implement for everyone” Stucknote presented some serious technical hurdles. We were all able to create the sounds needed for the piece, but the networking required was a challenge. Because we have software diversity, there was no pre-existing SuperCollider Quark or MAX external to solve our networking problems. Instead, we decided to use the more generic music networking protocol Open Sound Control (OSC). I created a template for our OSC messages. In addition to the gestural data for amplitude and x-factor, specified in the score, I thought there was a lot of potential for remote method invocation and wanted a structure that could work with live coding, should that situation ever arise. I wrote a white paper (see attached) which specifies message formatting and messages for users to identify themselves on the network and advertise remotely invokable functions and shared data.

When a user first joins the network, she advertises her existence with her username, her IP address and the port she is using. Then, she asks for other users to identify themselves, so they broadcast the same kind of message. Thus, every user should be aware of every other user. However, there is currently no structure for users to quit the network. There is an assumption, instead, that the network only lasts as long as each piece. SuperCollider users, for example, tend to re-compile between pieces.

Users can also register a function on the network, specifying an OSC message that will invoke it. They advertise these functions to other users. In addition, they can share data with the network. For example, with Stucknote, everyone is sharing amplitude values such that they are controllable by anyone, including two people at the same time. The person who is using the amplitude data to control sound can be thought of as the owner of the data; however, they or anyone else can broadcast a new value for their amplitude. Typically, this kind of shared data is gestural and used to control sound creation directly. Different users may disagree about the current value, or packets may get lost, but this does not tend to cause a problem: with gestural data, not every packet is important, so packet loss is not a serious issue.

When a user puts shared data on the network, she also advertises it. Users can request to be told of all advertised data and functions. Typically, a user would request functions and shared data after asking for IDs, upon joining the network. She may ask again at any time. Interested users can register as listeners of shared data. The possibility exists (currently unused) for the owner of the data to send its value only to registered users instead of to the network as a whole.

In order to implement the network protocol, I created a SuperCollider class called NetAPI (see attached code and help file). It handles OSC communications and the infrastructure of advertising and requesting IDs, shared functions and shared data. In order to handle notifications for shared data changes, I wrote a class called SharedResource. When writing the code for Stucknote, I had problems with infinite loops in change notifications. The SharedResource class has listeners and actions, but the value-setting method also takes an additional argument specifying what is setting it. The setting object will not have its action called. So, for example, if the change came from the GUI, the SharedResource will notify all listeners except for the GUI. When SharedResources “mount” the NetAPI class, they become shared gestural data, as described above.
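
The loop-avoidance idea is small enough to sketch on its own; the names here are illustrative, not the actual SharedResource interface:

 // notify every listener except whoever made the change
 (
 ~listeners = IdentityDictionary.new;
 ~addListener = {|who, action| ~listeners.put(who, action)};
 ~setValue = {|value, setter|
  ~listeners.keysValuesDo({|who, action|
   if(who != setter, { action.value(value) });
  });
 };

 ~addListener.(\gui, {|v| ("gui sees" + v).postln});
 ~addListener.(\net, {|v| ("net sees" + v).postln});
 ~setValue.(0.5, \gui); // posts "net sees 0.5" but does not re-notify the GUI
 )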

Forming a plan

Today is 30 July. My dissertation is due on 30 September. I am now planning how things will go between now and then.
I know that I cannot work every day between now and then. My maximum sprint time is 10 days, so I need to plan on taking one day off per week, which might as well be on the weekend. With BiLE on Wednesdays, that gives me 5 days a week. Planning on working 16-hour days is also not going to work. Instead, I can do 4 hours on music and 4 hours on words. Roughly, I have 160 hours of each to spend.
If I keep to a reasonable sleeping schedule and cut back on Facebook, I can still go out occasionally. I am not going to drink unless it is the evening before my one break day per week. Also, since stress levels will be high, that break day needs to actually be spent away from a computer: riding my bike or going to the beach or something worthwhile.
Everything is going to be fine. This will all be over soon. I will get it all done. I just need to focus and work hard.
I may start doing again what I did with my MA and start posting drafts of various bits, looking for feedback.

Concert Review: RCM LOrk

Last night, I went to see the Royal College of Music Laptop Orchestra perform in their institution’s main hall. I found out about the concert at the last minute because a friend spotted it on twitter. Until yesterday, I didn’t even know there was a LOrk in London!
The audience was quite small and outnumbered by the performers. There were 6 people on stage and one guy working at a mixing desk, who got up to play piano for one of the pieces. The programme was quite short, with 5 pieces on it. They started with Drone by Dan Trueman, which was the first ever LOrk composition, according to the printed programme. They walked in from the back, carrying laptops and playing from the internal speakers. The tilt of the laptop changes the sound. They then walked around the space, making this drone. It worked well as an introduction and had a good performative element, but I find this piece disturbing in general because it pains me slightly whenever I see anyone shake a laptop. This kind of treatment leads disks to die. Somebody should port this piece to PD and run it via RjDj on an iPhone.
The next piece they played was Something Completely Different by Charles Mauleverer. It was quite short and was made up of clips from Monty Python. Somebody from the ensemble explained that they were playing YouTube videos directly and using the number keys to skip around in the videos and stutter and glitch in that way. This piece was played through two large monitors on the stage. Because all the clips are in the vocal range, using only two speakers made it a bit muddy. Also, the lack of processing the sounds in any meaningful way could become an issue, but the piece was quite short and therefore mostly avoided the limitations of its simple implementation.
Then, alas, there was a few minutes’ pause for technical issues, and a member of the group stood up and gave a short talk about what was going on in the pieces played so far.
After they got everything going again, they played Synchronicity by Ellis Pecen, which was very well done. The players were given already processed sounds of a guitar and were playing and possibly modifying those further. The programme notes said it used instrumental sounds “process[ed] to such a degree that it would be difficult to discern the original instrument and the listener would … perceive” the source materials only as “a source of sound.” As such, it was acousmatic in its construction and its ideals, but the result was a nice drone/ambient piece. After a few minutes, the sound guy got up and joined the ensemble to play some ambient piano sounds. The result was a piece outside of the normal LOrk genre (as far as one can be said to exist) and was extremely musical.
Spirala by David Rees, the next piece on the programme, was supposed to have a projected element, but the projector crashed just as the piece was about to start. The piece was apparently built in flash and involved the players turning some sort of crank by drawing circles on their trackpads. The sounds it made (and perhaps the mental image of crank-turning) led me to think of a jack-in-the-box. The programme says the piece is online, but I’m getting a 404 on it, alas.
The last piece was Sisal Red by Tim Yates. It relied on network communication, making groups of three laptops into “distributed instruments.” The piece didn’t seem to match its programme notes, however, as there only seemed to be four people actually playing laptops. One of the players was on a keyboard controller and another one was playing the gong with a beater and a microphone as if it were Mikrophonie by Stockhausen. This piece used 4 channels of sound, with the two monitors on stage and the two behind the audience. It seemed to fill up the hall as if we were swimming in sound. I’m not sure what sounds were computer generated and what were from the gong or other sources, but I had the impression that the gong sound was swaying around us and was a very strong part of the piece. It certainly harkened back to the practice of putting instruments with electronics and also seemed to be an expansion of the normal LOrk genre. The result was very musical.
According to the programme, this is the only LOrk situated at a conservatory rather than a university. The players were all post graduates, which is also a break with the normal American practice of undergraduate ensembles. All of the pieces except the first one were written by ensemble members. As is the case with most other LOrks, the composer also supplied the “instrument,” so all the players were running particular programmes as specified by (or written by) the composer. Aside from the first piece, there were no gestural controllers present.
I think putting a LOrk into a conservatory is an especially good idea. This will create LOrks that will concentrate heavily on performance practice. In their piece Something Completely Different, they completely de-emphasised the technology and created something that was almost purely performative. However, they obviously still embrace the technical, not only through their choice of medium, but in pieces such as Spirala which required the composer to code in flash.
I was really impressed by the concert overall and especially their musicality and hope they get larger audiences at their future gigs, as they certainly deserve them.
By the way, if you’re in a LOrk and have not done so already, there is a mailing list for LOrks, Laptop Bands, Laptop Ensembles and any group computer performance: LiGroCoP, which you should join. Please use it to announce your gigs! Also, BiLE will be using it to make announcements regarding our Network Music Festival, which will happen early next year and will have some open calls.

Some Ideas

The music of 40 years ago is more innovative, challenging and interesting than almost anything produced in the last decade. Like all of life, we have forgotten ideas and become focussed on technology. The future, as we see it is an indefinite sameness differing only by having shinier new gadgets.
Increasingly, the trend in electronic music performance is to see the player as an extension of the machine. Our tools are lifeless, sterile and largely pre-determined, and thus so are we. We are becoming automatons in music and in life. Young composers, instead of challenging this narrowing of horizons, are conforming to it. We are hopelessly square.
In order to look forwards, we must first look backwards, to a time when people believed change was possible.
Any social model maps relatively easily to a music model. Self-actualised individuals, to take an example, are improvisors who do not listen to each other. Humans as agency-lacking machines are drones, together performing the same musical task, like an orchestra, but robbed of diversity and subtlety. If the model does not work musically, it will not work socially and vice versa. The state of our music is the state of our imagination, the state of our soul and the state of our future.
A better world is possible, and we can begin to compose it.

Why I Identify as Transgender

There’s been a spate of blog posts recently about how the word “transgender” is dead and we all need to decamp to a new term. And then there are posts arguing the opposite point. I’m not going to bother linking to any of them, but I am going to offer my 2p.
First of all, I’ve noticed that almost all of these posts about whether the word “transgender” is good or bad are coming from trans women, but none that I’ve noticed have come from trans men. The trans women who are against the term transgender seem to call themselves “transsexual” instead. I suspect that the reason for this is a desire to separate themselves from cross dressers and specifically from fetishists. Some straight men get a sexual kick from dressing like women. There is no parallel situation for trans men. While a surprising number of drag kings are straight, there is no visible community and no stereotype of straight women dressing up like men for illicit fetish sexy fun time (alas).
It’s quite reasonable to want to de-link your gender identity from being seen as a fetish. However, I don’t think emphasising the term “transsexual” is the way to do this. First of all, it has the word “sex” in it. This makes a lot of people uncomfortable. This makes me uncomfortable. I almost never identify as TS. I don’t want to describe myself in a way that invokes sex or genitals.
I also really don’t want to invoke medical intervention, when disclosing conversationally or whatever, and especially not in a human rights campaign. Now, of course trans people should have rights to transition-related healthcare. But our other rights should in no way be linked to that. I don’t want my job or housing rights to have anything to do with what surgeries I’ve had or am planning to have. Indeed, this can, itself, create a human rights issue, in which some governments require sterilisation as a prerequisite to proper gender recognition and/or civil rights protections. That’s deeply problematic.
Furthermore, there are problems related to privilege. This is much less an issue in the UK, as the NHS does offer appropriate healthcare to trans people. But in the US and developing countries, medical transition can be economically out of reach for a lot of trans people. Thus, any limitation to those who are medically transitioning is a hugely problematic assertion of class privilege.
The rights of people who don’t want to medically transition are also hugely important. I spent many years as an obviously gender non-conforming person and I didn’t want to face discrimination then any more than I do now. People who are full- or part-time cross dressers or whatever still deserve full rights to access education, housing and employment and enjoy the same full civil rights as cis people. The same issues that affect people with no plan to medically transition also affect people who are planning on medically transitioning and haven’t started yet, and people who may not be passing all the time. Again, linking rights to medical procedures seems deeply dubious; it may pressure people into having interventions that they don’t want or need and leaves out people who cannot afford the costs associated with those procedures.
And did I mention that a word with “sex” right in the middle of it makes people feel uncomfortable? No centrist political candidate in the US is ever going to give a speech about how we need to protect the rights of transsexuals. They may be persuaded to give a speech protecting the rights of transgender people, but they’re not going to want to say the word “sex” in this context. And, if we don’t want to be lumped in with fetishists, we don’t want to say the word “sex” either.
Those who think that we can get more rights by sacrificing those who don’t medically transition need some serious help with the concept of solidarity. It’s sort of amusing that some of the same people complain whenever trans protections are stripped out of laws that were originally conceived to protect all LGBT people.
So I’m sticking with the word transgender. People who hear it know what it means (or can figure it out quickly enough). It’s a word I’m comfortable with. It implies solidarity. People can, of course, self-identify however they want and that’s fine, but I think it’s too soon to say the word “transgender” is done.

The head of St Vitalis of Assisi

Alas, I’ve missed the auction of the head of St Vitalis of Assisi, which I guess is just as well as it was expected to go for at least £700. Still, I kind of feel like my entire life as an RC might have been heading for that purchase. I’ve gone on saint-head related pilgrimages and generally have a fascination with relics….
As I see it, the major problem with having a first class relic like this one is where to put it. St Vitalis is the patron saint of STIs and it doesn’t seem fair to keep such an obviously useful saint to oneself. The owner of the head really ought to build a chapel for it. As I don’t have any kind of space for such a construction, the head would be doubly beyond my means.
Indeed, as I live in a two room flat that’s already a bit overly full of stuff, storing the head until I could build a chapel would present a major problem.
I really don’t want a holy relic on display in my bedroom. A skull of any saint looking down on my bed would be a bit of a mood killer. I can’t decide if this particular saint would be better or worse than other saints. On the one hand, he is kind of appropriate, if you don’t mind his dead, judging eye sockets. But on the other hand, do I want to send the message to overnight visitors that supernatural help is required in addition to the normal precautions?
I think he could also be distracting in the living room. Alas, I don’t even have room for him in my living room. It’s already stuffed to the gills with rather too much furniture, two tubas, a bass amp and a synthesiser. I have no idea where I could even find space for a head.
He may have died in 1370, but the kitchen seems unhygienic even for a very old and holy skull. And the bathroom is humid, which might lead to corruption of the sort saints are supposed to be spared. A mouldy relic would not be very nice.
This leaves the toilet, which in some ways is the ideal space. I have unoccupied space on top of the cistern, where he could gaze down upon possibly afflicted areas as guests wee. It also gives the faithful a private place where they can take a moment to determine if the saint’s prayers might be helpful before invoking them, and/or possibly calling their local GUM clinic. On the other hand, it does seem somewhat disrespectful to the saint to perch his head in a loo.
(American readers of the linked BBC article should note that in British English, an “outhouse” is a kind of a shed. In American English, an outhouse is a privy. So moving from an outbuilding to a toilet would be a reduction in his circumstances.)
Alas, I’ve been unable to discover who bought the head, how much they paid or what their plans are. Do I want to know? I’m not sure.

Backstage at a BiLE gig

We played yesterday in Wolverhampton and I thought it went rather well. While we’re playing, we have a chat window open, so we can do some communication with each other. This is what went on in chat during our last piece:

Norah> :(
Les> reme
Les>  why is norah sad?
Shelly> :(?
Norah> someone crashed?
Antonio> Antonio crashed
Norah> oh :(
Shelly>  ack
Les>  bummer
jorge> ohh sheeet
Antonio> next?
Norah> Les note!
chris> my wiimote is boken
chris> ok ill start
Antonio> cool
chris> ready?
Les>  i am now
Norah> bang
Shelly>  huh? firebell starts?
jorge> yes
chris> im clock
jorge> purrfect
Les>  go?
Shelly>  ack brb. start without me
Antonio> go go go
Shelly> bk
Shelly>  ...test... 
Norah> hi
Les> we need a better beater for that bell 
Shelly>  jorge can i have the spoon?
Les>  eye contact!!
Shelly>  chirs can u pass the small bell this way? 
Les>  sounding good, norah
Shelly>  sounding GREAT! 
Norah> thanks
Antonio> everything is crashing for me :(
Shelly>  norah, ur patch sounds really coo1
Norah> it's being very magical today!
Shelly>  GRANULATINGGGGGGGGGGGGGGGGGGGG BILE!
Norah> WOW
Norah> excellent transition guys
Shelly>  i dont know what time it is by the way
Les>  10
Norah> 10:58
Norah> let's start winding down?
Les>  10:15?
Norah> 11:17
Les>  10:35
Les>  nice
Shelly>  NIIIIIIIIIIIIIIIIIIIIIIIIIICCCCCCCCCCCCEEEEEEEEEEEEEE!!!!!!!!!!!!!!!!!
Antonio> :)
Norah> that was super!
Antonio> is ther eone more?
chris> sh*t that was amazing!
jorge> super!!
Antonio> !!!
Shelly>  nope!
Antonio> super fun times
Shelly>  suppersuppersupper
Antonio> what's next?

Strategies for using tuba in live solo computer music

I had the idea of live sampling my tuba for an upcoming gig. I’ve had this idea before but never used it, due to two major factors. The first is the difficulty of controlling a computer and a tuba at the same time. One obvious solution is foot pedals, which I’ve yet to explore; the other is a one-handed, freely moving controller such as the wiimote.
The other major issue with doing tuba live-sampling is sound quality. Most dynamic mics (including the SM57, which is the mic I own) make a tuba sound like either bass kazoo or a disturbingly flatulent sound. I did some tests with the zoom H4 positioned inside the bell and it appeared to sound ok, so I was going to do my gig this way and started working on my chops.
Unfortunately, the sound quality turns out not to be consistent. The mic is prone to distortion even when it seems not to be peaking. Low frequencies are especially likely to contain distortion or a rattle which seems to be caused by the mic itself vibrating from the tuba.
There are a few possible workarounds. One is to embrace the distortion as an aesthetic choice and possibly emphasise it through the use of further distortion fx such as clipping, dropping the bit rate or ring modulation. I did a trial of ring modulating a recorded buffer with another part of the same buffer. This was not successful, as it created a sound lurking around the uncanny valley of bad brass sounds; however, a more regular waveform may work better.
At the SuperCollider symposium at Wesleyan, I saw a tubist (I seem to recall it was Sam Pluta, but I could be mistaken) deliberately sampling tuba-based rattle. The performer put a cardboard box over the bell of the tuba. Attached to the box was a piezo buzzer in a plastic encasing. The composer put a ball bearing inside the plastic enclosure and attached it to the cardboard box. The vibration of the tuba shook the box which rattled the bearing. The piezo element recorded the bearing’s rattle, which roughly followed the amplitude of the tuba, along with other factors. I thought this was a very interesting way to record a sound caused by the tuba rather than the tuba itself.
Similarly, one could use the tuba signal for feature extraction, recognising that errors in miccing the tuba will be correlated with errors in the feature extraction. Two obvious things to attempt to extract are pitch and amplitude, the latter being somewhat more error-resistant. I’ve described before an algorithm for time-domain frequency detection for tuba. As this method relies on RMS, it also calculates amplitude. Other interesting features may be findable via FFT-based analysis, such as onset detection or spectral centroid, using the MCLD UGens. These features could be used to control the playing of pre-prepared sounds or live software synthesis. I have not yet experimented with this method.
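
As a sketch of the amplitude-and-pitch route using SuperCollider's stock Amplitude and Pitch UGens (not the time-domain detector mentioned above; the sine drone is an arbitrary stand-in for real synthesis):

 // follow the tuba from the first input and let it drive a synthesised drone
 (
 {
  var in, amp, freq, hasFreq;
  in = SoundIn.ar(0);
  amp = Amplitude.kr(in, 0.05, 0.2);
  # freq, hasFreq = Pitch.kr(in, minFreq: 30, maxFreq: 500);
  SinOsc.ar(freq.lag(0.1), 0, amp.lag(0.1) * hasFreq);
 }.play;
 )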
Of course, a very obvious solution is to buy a better microphone. It may also be that the poor sound quality stemmed from my speakers, which are a bit small for low frequencies. The advantages of exploring other approaches include cost (although a tuba is not usually cheap either) and that cheaper solutions are often more durable, or at least I’d be more willing to take cheaper gear to bar gigs (see previous note about tuba cost). As I have an interest in playing in bars and making my music accessible through ‘gigability,’ a bar-ready solution is most appealing.
Finally, the last obvious solution is to not interact with the tuba’s sounds at all, thus creating a piece for tuba and tape. This has less that can go wrong, but it loses quite a lot of spontaneity and requires a great deal of advance preparation. A related possibility is that the tubist control real-time processes via the wiimote or other controller. This would also require a great deal of advance preparation – making the wiimote into its own instrument requires the performer to learn to play it and the tuba at the same time, which is rather a lot to ask, especially for an avant-garde tubist who is already dealing with more performance parameters (such as voice, etc) than a typical tubist. This approach also abandons the dream of a computer-extended tuba and loses whatever possibilities for integration exist with more interactive methods. However, a controller that can somehow be integrated into the act of tuba playing may work quite well. This could include sensors mounted directly on the horn: for example, something to squeeze in a convenient location, extra buttons near the valves, etc.
I’m bummed that I won’t be playing tuba on Thursday, but I will have something that’s 20 minutes long and involves tuba by September.

WiiOSCClient.sc

Because there are problems with the wiimote support in SuperCollider, I wrote a class for talking to Darwiin OSC. This class has the same methods as the official wiimote classes, so, should those ever get fixed, you can just switch to them with minimal impact on your code.
Because this class takes an OSC stream from a controller and treats it like input from a joystick, this code may potentially be useful to people using TouchOSC on their iPhones.
There is no helpfile, but there is some usage information at the bottom of the file:


 // First, you create a new instance of WiiOSCClient,
 // which starts in calibration mode
 
 
 w = WiiOSCClient.new;

 // If you have not already done so, open up DarwiinRemote OSC and get it talking to your wii.
 // Then go to the preferences of that application and set the OSC port to the language port
 // of SuperCollider. You will see a message in the post window telling you what port
 // that is .... or you will see a lot of min and max messages, which lets you know it's
 // already calibrating
 
 // Move your wiimote about as if you were playing it. It will scale its output accordingly
 
 
 // now that you're done calibrating, turn calibration mode off
 
 w.calibrate = false;
 
 // The WiiOSCClient is set up to behave very much like a HID client and is furthermore
 // designed for drop-in-place compatibility if anybody ever sorts out the WiiMote code
 // that SuperCollider pretends to support.
 
 // To get at a particular aspect of the data, you set an action per slot
 // (here \ax, the X axis of the accelerometer):
 
 w.setAction(\ax, {|val|
  
  val.value; // the scaled data from \ax.
  // It should be between 0-1, scaled according to how you waved your arms during
  // the calibration period
 });
 
 
 
 // You can use a WiiRamp to provide some lag
 (
  r = WiiRamp(20, 200, 15);
 
  w.setAction(\ax, {|val|
   var scaled, lagged;
  
   scaled = ((val.value * 2) - 1).abs;
   lagged = r.next(scaled);
  
   // now do something with lagged
  });
 )

Calibration

This class is self-calibrating. It scales the wiimote input against the largest and smallest numbers that it has seen thus far. While calibration is set to true, it does not call any of its action methods, as it assumes the calibrated numbers are bogus. After you set calibration to false, it starts calling the actions, but it still changes the scale if it sees a bigger or smaller number than previously.
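
The underlying scaling is just a running min/max map; roughly, as a sketch:

 // rescale input against the extremes seen so far
 (
 ~min = inf;
 ~max = -inf;
 ~scale = {|raw|
  ~min = min(~min, raw);
  ~max = max(~max, raw);
  if(~max > ~min, { raw.linlin(~min, ~max, 0, 1) }, { 0.5 });
 };
 )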

WiiRamp

The WiiRamp class attempts to deal with the oddness of using accelerometers, but it does not just do a differentiation, as that would be too easy. The accelerometers give you major peaks and valleys, all centred around a middle, so just using the raw values is often a bit boring. In the example, you see that we scale the incoming data first: ((val.value * 2) – 1) changes the data range from 0 to 1 into -1 to 1. That puts the centre at 0. Then, because we care more about the height of peaks and depth of valleys than about whether they’re positive or negative, we take the absolute value, moving the scale back to 0 to 1.
When you shake your wiimote, the ramp keeps track of your largest gesture. It takes N steps to reach that max (updating if a larger max is found before it gets there), then holds at the number for M steps and then scoots back down towards the current input level. You can change those rates with upslope, hold and downslope.

OscSlot

This class is the one that might be useful to iPhone users. It creates an OSCResponderNode and then calls an action function when it gets something. It also optionally sends data to a Bus and has JIT support with a .kr method. It is modelled after some of the HID code. It also supports calibration. How to deploy it with TouchOSC is an exercise left to the reader.
http://www.berkeleynoise.com/celesteh/code/WiiOSCClient.sc