Richard Hoadley: Implementation and Development of Interfaces for Music Generation and Performance through Analysis of Improvised Movement and Dance

Still liveblogging the SC symposium. This speaker is now my colleague at Anglia Ruskin. He also did a poster presentation on this at AES, iirc.

Small devices, easily portable. Appearance and design affect how people interact. Dancers not so different from regular people.
He makes little Arduino-powered boxes with proximity detectors. This is not new tech, but is just gaining popularity due to low cost and ease of use.
He's got a picture up called "gaggle" which has a bunch of ultrasonic sensors. The day before the event at which it was demonstrated, the developers were asked if they wanted to collaborate with dancers. (It's sort of theremin-esque. There was actually a theremin dance troupe, back in the day, and I wonder if their movements looked similar?) The dancers in the video were improvising and not choreographed. They found the device easy to improvise with. Being entirely wireless lets them move freely.
How do sounds map to those movements? How nice are the sounds for the interactors (the dancers)?
Now a video of somebody trying the thing out. (I can say from experience that the device is fun to play with).
He's showing a picture of a larger version that cannot be packed on an airplane, and plans to build even bigger versions. He's also showing a version with knobs and buttons – and is uncertain whether those features are a good or bad idea.
He also has something where you touch wires, called "wired". Measures human capacitance. You have to be grounded for it to work. (Is this connected electrically to a laptop?) (He says, "it's very simple," and then SuperCollider crashed at that instant.)
The ultrasound thing is called "gaggle" and he's showing the SC code. The maximum range of the sensor is 3 metres. The GUI he wrote allows for calibration of the device: how far away is the user going to be? How dramatic will the response be to a given amount of movement?
You can use it to trigger a process when something is in range, so it doesn't need to react dumbly. There is a calibration for "sudden", which responds to fast, dramatic movements. (This is a really great example of how much data you can get from a single sensor, using deltas and the like.)
Once you get the delta, average that.
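In SC terms, something like this (my own sketch, not Hoadley's code; it assumes the raw sensor reading arrives on control bus 0 via some Arduino bridge):

(
{
    var dist = In.kr(0);                     // raw distance reading from the sensor
    var delta = Slope.kr(dist).abs;          // how fast the distance is changing
    var smooth = LagUD.kr(delta, 0.05, 0.5); // the averaged delta: fast attack, slow release
    var sudden = Trig.kr(delta > 2, 0.5);    // holds 1 for 0.5s after a fast, dramatic movement
    SinOsc.ar(200 + (smooth * 200), 0, 0.05 + (sudden.lag(0.05) * 0.1)) ! 2
}.play;
)

One raw value, but three usable control signals: position, smoothed speed, and a "sudden gesture" trigger.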
Showing a video of dancers waving around podium things like you see in art museums.
Now a video of contact dancing with the podiums. There's a guy with a laptop in the corner of the stage. It does seem to work well, although it's not as musically dramatic when the dancers do normal dancey stuff without waving their arms over the devices – which actually looks oddly worshipful, in a worrying way.
Question: do dancers become players, like bassoonists or whatever? He thinks not because the interactivity is somewhat opaque. Also, violinists practice for years to control only a very few parameters, so it would take the dancers a long time to become players. He sees this as empowering dancers to further express themselves.
Dan Stowell wants to know what the presenter was doing on stage behind the dancers? He was altering the parameters with the GUI to calibrate to what the dancers are doing. A later version uses proximity sensors to control the calibration of other proximity sensors, instead of using the mouse.
Question: could calibration be automated? Probably, but it’s hard.

Daniel Mayer: miSCellaneous lib

Still liveblogging the SC symposium.
His libs: VarGui, a multi-slider GUI; HS (HelpSynth), HSPar and related.
LFO-like control of synths, generated by Pbinds.
Can be discrete or continuous – a perceptual thing that depends on the interval size.
Discrete control can be moved towards continuous by shortening the control interval.
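A quick sketch of that distinction (my example, not Mayer's code): a Pbind sets a sustaining synth's freq on every event. With dur = 0.5 you clearly hear discrete steps; shorten it towards 0.01 and it approaches a continuous LFO. Run the blocks one at a time:

(
SynthDef(\lagSine, { |out = 0, freq = 400, amp = 0.1|
    Out.ar(out, SinOsc.ar(freq.lag(0.02), 0, amp) ! 2)
}).add;
)

x = Synth(\lagSine);

(
p = Pbind(
    \type, \set, \id, x.nodeID, \args, #[\freq],
    \freq, Pseq([300, 400, 500, 600], inf),
    \dur, 0.5 // try 0.01 for quasi-continuous control
).play;
)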

Overview

Can do direct LFO control: Pbind-generated synths that read from or write to control busses.
Or you can do new values per event, which is language-only, or put synth values in a Pbind.

Pbind generated synths

Write a synthdef that reads from a bus. Write a synth that writes to a bus. Make a bus. Make a Pbind:

Pbind(
    \instrument, \A1,
    \dur, 0.5,
    \pitchBus, c
)
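Filled out into something runnable (my reconstruction of the set-up he described, not his actual code): one synth writes an LFO onto the bus, and the Pbind spawns \A1 synths that read their pitch from it.

(
c = Bus.control(s);
SynthDef(\lfoWriter, { |bus|
    Out.kr(bus, SinOsc.kr(0.3).range(300, 600)) // LFO written to the control bus
}).add;
SynthDef(\A1, { |out = 0, pitchBus|
    var env = EnvGen.kr(Env.perc(0.01, 0.3), doneAction: 2);
    Out.ar(out, SinOsc.ar(In.kr(pitchBus), 0, 0.1 * env) ! 2) // pitch read from the bus
}).add;
)
(
Synth(\lfoWriter, [\bus, c]);
Pbind(\instrument, \A1, \dur, 0.5, \pitchBus, c).play;
)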

OK, with his lib: make a sequence of durations. It starts the synths and gets the values at those intervals, with a defined latency. The values are sent back to the language, which adds more latency. Then you have a bunch of values that you can use. If you play audio with it, there is yet another layer of latency.

h = HS(s, { /* ugen graph */ });

p = PHS(h, [], 0.15, [ /* usual Pbind def */ ]).play;

. . .
p.stop; // just stops the PHS
p.stop(true); // also stops the HS

or

// normal synth
. . .

(
p = PHS(h, [], 0.2, [ /* Pbind list */ ]).play(c, q);
)

PHS is a "Pattern with HelpSynth": *new(helpSynth, helpSynthArgs, dur1, pbindData1 . . . durN, pbindDataN)
PHSuse has a clock
PHSpar switches between two patterns.
(I do not understand why you would do this instead of just using a Pbind? Apparently, this is widely used, so I assume there exists a compelling reason.)
Download it from http://www.daniel-mayer.at
Ah, apparently, the advantage is that you can easily connect UGens to patterns as input sources, with s.getSharedControl(0).
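Something like this, I guess (my sketch, not his; shared control buses only work with the internal server):

// boot the internal server first
Server.internal.boot;

(
// a UGen writes continuously into shared control bus 0...
{ SharedOut.kr(0, LFNoise1.kr(0.5).exprange(200, 800)); Silent.ar }.play(Server.internal);
// ...and the language reads it back synchronously, feeding each event of a pattern
p = Pbind(
    \freq, Pfunc { Server.internal.getSharedControl(0) },
    \dur, 0.15,
    \server, Server.internal
).play;
)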

Dan Stowell and Alex Shaw: SuperCollider and Android

Still live blogging the SC symposium.
Their subtitle: "kickass sound, open platform."
Android is an open platform for phones, curated by Google. Linux with Java. It's not normal Linux, though. It's NOT APPLE.
Phones are computers these days. They're well-connected and have a million sensors, with microphones and speakers. Androids multitask; it's more open; libraries and APKs are sharable.
Downsides are that it's less mature and has some performance issues with audio.
scsynth on Android. The audio engine can be put in all kinds of places, so the server has been ported. The lang has not yet been ported. So to use it, you could write a Java app and use it as an audio engine. You can control it remotely, or control it from another Android app – ScalaCollider, for example.
Alex is an android developer. Every android app has an “activity” which is an app thingee on the desktop. Also has services, which is like a daemon, which is deployed as part of an app and persists in the background. An intent is a loosely-coupled message. AIDL is Android Interface Definition Language, in which a service says what kinds of messages it understands. The OS will handle the binding
Things you can do with SuperCollider on Android: write cool apps that do audio – making instruments, for example. He's playing a demo of an app that says "satan" and is apparently addictive. You can write reactive music players (yay). Since you can multitask, you can keep running this as you text people or whatever.
What languages to use? sclang to pre-prepare synthdefs; OSC and Java for the UI.
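The synthdef pre-preparation might look something like this on the desktop (my example, not from the talk): compile the def to a .scsyndef file, then bundle that file with the app for scsynth to load.

(
SynthDef(\ping, { |out = 0, freq = 800, amp = 0.2|
    var env = EnvGen.kr(Env.perc(0.001, 0.3), doneAction: 2);
    Out.ar(out, SinOsc.ar(freq, 0, amp * env) ! 2)
}).writeDefFile; // writes ping.scsyndef into the default synthdefs directory
)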
A quick demo! Create an activity in Eclipse!
Create a new project. Pick a target w/ a lower number for increased interoperability. Must create an activity to have a UI. SDK version 4. Associate the project w/ SuperCollider by telling it to use it as a library. There are some icon collisions, so we'll use the SC ones. Now open the automatically generated file. Add an SCAudio object. When the activity is created, initialise the object:

 
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // . . .
    superCollider = new SCAudio("/data/data/com.hello.world/lib");
    superCollider.start();
    // spawn a default synth: node ID 1000, addAction 1, target group 0
    superCollider.sendMessage(OscMessage.createSynthMessage("default", 1000, 1, 0));
    // . . .
}

// . . .

@Override
public void onPause() {
    super.onPause();
    superCollider.sendQuit();
}

Send it to the phone and holy crap that worked.
Beware of audio latency – around 50 milliseconds. Multitasking, also.
Ron Kuivila wants to know if there are provisions for other kinds of hardware IO, kind of like the Arduino. Something called bluesmurf is a possible client.
Getting into the app store: just upload some stuff, fill out a form and it's there. No curation.

Tim Blechman: Parallelising SuperCollider

Still live blogging the SC symposium
Single processors are not getting faster, so most development is going for multicore architectures. But, most computer music systems are sequential.
How to parallelise? Pipelining! Split the algorithm into stages. This introduces delay as stuff goes from one processor to the other. It doesn't scale well. Each stage would also need to have around the same computational cost.
You could split blocks into smaller chunks. The pipeline must be filled and then emptied, which is a limit. Not all processors can be working all the time.
SuperCollider has special limitations in that OSC commands come at the control rate and the synth graph changes at that time. Thus no pipelining across control rate blocks. Also, there are small block sizes.
For automatic parallelisation, you have to do dependency analysis. However, there are implicit dependencies with busses. The synth engine doesn't know which resources are accessed by a synth. This can even depend on other synths. Resources can be accessed at audio rate. It's very hard to tell dependencies ahead of time. Automatic parallelisation for SuperCollider might be impossible. You can do it with Csound because their instrument graphs are way more limited and the compiler knows what resources each one will be accessing. They just duplicate stuff when it seems like they might need it on both. This results in almost no speedup.
The goals for SC are to not change the language and to be real-time safe. Pipelining is not going to work and automatic parallelisation is not feasible. So the solution is to not parallelise automatically, but to let the user sort it out. So: parallel groups.
Groups with no node ordering constraint, so they can be executed in parallel.
Easy to use and understand, and compatible with the existing group architecture. Doesn't break existing code. You can mix parallel groups with non-parallel ones.
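In code, something like this (a minimal sketch; assumes you're talking to a supernova server, since plain scsynth doesn't execute parallel groups in parallel):

(
var par = ParGroup.new(s); // like a Group, but children have no ordering constraint
8.do {
    Synth.head(par, \default, [\freq, exprand(200, 800), \amp, 0.05]);
};
)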
The problem is that the user needs to figure stuff out and make sure it's correct. Each node has two dependency relations: there is a node before every parallel group and a node afterwards.
This is not always optimal. Satellite nodes can be set to run before or after another node, so there are 2 new add actions.
There is an example that shows how this is cool. It could be optimised, so that some nodes have higher precedence.

Semantics

Satellite nodes are ordered in relation with one other node.
Each node can have multiple satellite predecessors and satellite successors. They may have their own satellite nodes. They can be addressed by the parent group of their reference node. Their lifetime should relate to the lifetime of their reference node.
This is good because it increases the parallelism and is easier, but it's more complicated.
Supernova is a completely rewritten scsynth with a multiprocessor-aware synthesis engine. It has good support for parallel groups; support for satellite nodes is in progress. It loads only slightly patched UGens. Tested on Linux, with more than 20 concerts. Compiles on OS X, might work. We'll see. (Linux is the future.)
Supernova is designed for low-latency real time. The dependency graph representation has higher overhead; there's a delay of a few microseconds.
For resource consistency, spinlocks have been added. Reading the same resource from parallel synths is safe. Writing may be safe. Out.ar is safe. Replace.ar might not be. The infrastructure is already part of the svn trunk.
(I’m wondering if this makes writing UGens harder?)
A graph of benchmarks for supernova. It scales well. Now a graph of average-case speedup. With big synths, the speedup is nearly 4.
Proposed extensions: parallel groups, satellite nodes. Supernova is cool.
There is an article about this on the interweb, part of his MA thesis.
Scott Wilson wants to know about dependencies in satellite nodes. All of them have dependencies. Also wants to know if you need parallel nodes if you have satellite nodes. Answer: you need both.

Nick Collins: Acousmatic

continuing live blogging the SC symposium
He's written anti-aliasing oscillators: BlitB3Saw, a BLIT-derived sawtooth, twice as efficient as the current band-limited sawtooth. There's a bunch of UGens in the pack. The delay lines are good, apparently.

Auditory Modelling plugin pack – Meddis models cochlear implants. (!)
Try out something called Impromptu, which is a good programming environment for audio-visual programming. You can re-write UGens on the fly. (!)

Kling Klang

(If Nick Collins ever decided to be an evil genius, the world would be in trouble)

{ SinOsc.ar * ClangUgen.ar(SoundIn.ar) }.play

The ClangUgen is undefined. He's got a thing that opens a C editor window. He can write the UGen and then run it. Maybe, I think. His demo has just crashed.
Ok, so you can edit a C file and load it into SC without recompiling, etc. Useful for livecoding gigs, if you’re scarily smart, or for debugging sorts of things.

Auto acousmatic

Automatic generation of electroacoustic works. Integrate machine listening into composition process. Algorithmic processes are used by electroacoustic composers, so take that as far as possible. Also involves studying the design cycle of pieces.
The setup requires knowing the number of output channels, the duration, and some input samples.
In bottom-up construction, source files are analysed to find interesting bits; those parts are processed and then used again as input. The output files are scattered across the work. Uses onset detection, finding the dominant frequency, excluding silence, and other machine listening UGens.
Generative effect processing like granulations.
Top-down construction imposes musical form. There are cross-synthesis options for this. This needs to run in non-real time, since it will take a lot of processing. There's a lot of server->language communication, done w/ Logger currently.
How to evaluate the output: tell people that it's not machine-composed, play it for them, and then ask how they like it. It's been entering electroacoustic competitions. You need to know the normal probability of rejection; he normally gets rejected 36% of the time (he's doing better than me).
He's sending things he hasn't listened to, to avoid cherry-picking.
Example work: fibbermegibbet20
A self-analysing critic is a hard problem for machine listening.
This is only a prototype. The real evil plan to put us all out of business is coming soon.
The example work is 55 seconds long, in ABA form. The program has rules for section overlap to create a sense of drama. It has a database of gestures. The rules are contained in a bunch of SC classes, based on his personal preferences. Will there be presets, i.e., sound like Birmingham? Maybe.
Scott Wilson is hoping this forces people to stop writing electroacoustic works. Phrased as “forces people to think about other things.” He sees it as intelligent batch processing.
The version he rendered during the talk is 60 seconds long, completely different from the other one, and certainly adequate as an acousmatic work.
Will this be the end of acousmatic composing? We can only hope.

Live blogging the SuperCollider Symposium: Hannes Hoezl: Sounds, Spaces, Listening

Manifesta, the "European Nomad Art Biennale," takes place in European non-capital cities every 2 years. The next is in Murcia, Spain, in 2010.
No. 7 was in 2008 in Italy, in 4 locations.
(This talk is having technical issues and it sounds like somebody is drilling the ceiling.)
The locations are along Hannibal's route with the elephants. Napoleon went through there? It used to be part of the Austrian empire. The locals were not into Napoleon and launched a resistance against him. The "farmer's army" defeated the French 3 times.
(I think this presentation might also be an artwork. I don’t understand what is going on.)
Every year, the locals light a fire in the shape of a cross on the mountain, commemorating their victories.
The passages were narrow and steep and the locals dropped stones on the army, engaging in "site specific" tactics. One of the narrowest spots was Fortezza, which was also a site for Manifesta. There is a fortress there, built afterwards, that blocks the entire passage. There is now a lake beside it, created by Mussolini for hydroelectric power. The fortress takes up 1 square kilometre.
There is a very long subterranean tunnel connecting the 3 parts of the fort.
(He has now switched something off and the noise has greatly decreased)
The fortress was built after the 1809 shock. But nobody has ever attacked it. There was a military presence there until 2002. They used it to hold weapons. The border doesn't need to be guarded anymore.
During WW2, it held the gold reserves of the Bank of Rome.
The Manifesta was the first major civilian use. None of the nearby villages had previously been allowed to access the space.
The other 3 Manifesta locations were real cities. Each had their own curatorial team. They collaborated on the fortress.
The theme of the fortress exhibition was imaginary scenarios, because that's basically the story of the never-attacked fort.
The fortress has a bunch of rooms around the perimeter, with cannons in them, designed to get the smoke out very quickly.
We live our lives in highly designed spaces, where architects have made up a bunch of scenarios on how the space will be used and then design it to accommodate that purpose.
The exhibition was "immaterial," using recordings, texts, light.
There were 10 text contributors. A team did the readings and recordings. Poets, theatre writers, etc.
The sound installations were for active listening, movement, site specific.
He wanted to do small listening stations where a very few people can hear the text clearly, as there were unlikely to be crowds and the space was acoustically weird. The installations needed to have text intelligibility. They needed to be in English, Italian and German, thus there were 30 recordings.
The sound artist involved focusses on sound and space. The dramatic team focusses on the user experience design.
(Now he's showing a video of setting up a megaphone in a cannon window. It is a consonant cannon: it filters the consonants of one of the texts and just plays the clicks. He was playing this behind him during the first part of the talk, which explains some of the strange noises. In one of the rooms, they buried the speakers in the dirt floor. In another room, they did a tin-can-telephone sort of thing with transducers attached to string. Another room has the speakers in the chairs. Another had transducers on hanging plexiglass. In the last one, they had the sound move along a corridor, with a speaker in every office, so the sound moved from one to the next.)

Ardour: Copying Gain Envelopes

Ok, let's say you've got a project in Ardour and you've carefully drawn a bunch of gain changes using the Draw Gain Automation tool – one of the buttons on the upper left. You listen to your project and are forced to conclude that one of your tracks needs to be re-recorded or re-rendered. Alas and woe! However, there is a way to get your automation points onto the new track. It's just a bit tricky.
One of the great advantages of Ardour over other DAWs is that you can actually figure out what’s going on with the data files. If you open up a ProTools project in a text editor, you get gibberish, but if you open up a .ardour file, you get a human-readable XML file. I bring this up because you cannot select your gain change points in the Ardour GUI and move them to a new track. But you can move them if you’re willing to modify your .ardour file. Here’s how:

  1. Make a backup of the file in case something goes wrong.
  2. Open the file in the text editor of your choice – ideally one that you might use to write code.
  3. Your tracks have names. Let's say the track you want to copy is called "SourceTrack." Search in the .ardour file for "SourceTrack." You'll find it many times, but one of those times will have an XML node called <Envelope> a couple of lines below.
  4. Copy everything starting at <Envelope> and ending at </Envelope>, including those two lines. (There's a sketch of what this chunk looks like after this list.)
  5. Ok, let's say the track you want to copy to is called "DestinationTrack." Search for that. If you drew some gain automation points on it already, look for the <Envelope> below it. If you have not drawn any gain automation, then look for <Envelope default="yes"/>.
  6. Blow away the <Envelope default="yes"/> or the pre-existing envelope, replacing it with the code you copied.
  7. The length of the envelope must match the length of the region. You can find the region’s length 2 or 3 lines above the envelope. It will say “length=” and then a number. Get that number and copy it.
  8. The envelope values are pairs of durations and amplitudes. If the length of DestinationTrack is longer than SourceTrack, then add a point at the end with the length you just copied. If it's shorter, remove points with durations past the length you just copied, then add a point with the length you copied.
  9. Scroll up to the very top of the file. The second line will end with “id-counter=” and then a number. Copy that number.
  10. Now replace the number at id-counter with the number you copied, +1. If you copied "123," then replace it with "124."
  11. Scroll back down to the envelope you just added to DestinationTrack. It has a property "id=" and then a number. Replace the number there with the one you copied from the top of the file. If the one at the top of the file was "123," then you should have "id=123" in the Envelope of DestinationTrack and "id-counter=124" at the top of the file.
  12. Save the file and then open it with Ardour to see if it worked.
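For orientation, the chunk you're moving sits in something roughly like this (a sketch assembled only from the elements described above; real .ardour files carry many more attributes, and the exact format of the envelope points varies between Ardour versions):

<Region name="SourceTrack-0" length="480000" ... >
  <Envelope id="123">
    ... pairs of duration and amplitude ...
  </Envelope>
</Region>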

Well, it's easier than re-drawing every point, but it's still a bit of a pain. If you think you might end up wanting to cut and paste automations, then you can start by using the gain automation track instead of drawing gain envelopes directly on top of the audio. Click the 'a' button to show automation tracks. I prefer to draw directly over the waveform so I can really see what's going on, but I must admit that transferring the points is a real pain.
Thanks to las, who was in the IRC channel on freenode and was able to tell me how to do this.

How I would write a London Cycle Hire App

Phase 1: Map

Resizable map with little circles for hire points. The colour varies from yellow to blue. Yellow for many spaces and blue for many bikes. If the hire point is full or empty, the icon changes to an X.

Phase 2: Timing

A timer that can optionally sound an alarm when you get to 25 minutes. It could be told to find the closest point to your current location. It should also be able to time a five-minute break between hiring cycles, if you're trying to avoid fees, and track how much money you've spent if you go longer than half an hour.

Phase 3: Route Planning

Pre-compute routes between every possible pair of hire points (store this information on a website someplace and check for an update every few months). When a user asks for a route between two addresses or whatever, figure out the coordinates of where they're coming from and going to, find the closest hire points to those points, and download walking instructions from Google to connect those points to the actual origin and destination. It should switch to further-away hire points if there's a problem with available bikes or spaces.

Phase 4: Tracking conditions

If your destination hire point has filled up and you're pretty close to it, it should re-compute the route from your current position to the hire point with spots available that's closest to your destination, and get new walking instructions from that point. It should also get directions from your original destination hire point to the new one, in case you don't notice that anything has happened. It should alert you that it's got a new route for you.

I’m pondering writing an n900 app

But I'm really busy (read: lazy), so if somebody else beats me to it, I'm cool with that. The Maep program is a demonstration of some map APIs, so it can be used to provide the map functionality, along with TfL's bike API.
I would use Cycle Streets for the route planning. Then you can give users the option of quiet or fast routes. Plus it's a cool service. TfL can also provide cycling directions, which are more likely to use posted bike routes.
I've been riding the Boris bikes quite frequently, especially for short trips. If it's 20 minutes to walk, or 10 minutes to walk to a hire point, grab a cycle, etc, then I'll go for the cycle. For those kinds of trips, especially, it would be nice to have a map on my phone, because there's a risk, if all the hire points are filling up with bikes, that the closest place to park might be far from where I'm going. I would really like it if this could somehow be integrated into Mappero, which is a fantastic map/navigation app, but I suspect it's beyond me. Unless there's a plugin API that's well documented. I'd also think it was cool if Mappero could just download OSM data directly and do its own rendering. That would be awesome.

New Passport Due Soon

I have just returned home from the US Embassy in London. In three weeks, a new passport with correct information will arrive by post. Huzzah!

Always be Prepared

This is a culmination of a much longer process. In May, I changed my name via statutory declaration. Then I contacted my phone companies to get them to change their records. I brought a copy of the form to my GP’s surgery. Most Brits change their name via deed poll, as it’s cheaper and easier, so the receptionist had never seen a statutory declaration before and was reluctant to accept it, but eventually did so. Then I went to my bank, who I hate with the fire of 999 suns, and they refused to let me change my name on my account at all, unless I could also provide photo ID in the new name. And finally, I went to my university, who updated my student records and ID card and printed out a letter affirming that I am a student there.
Then, I had to wait for phone bills to arrive in my name and to call BT more than once. And finally, appointment letters from the hospital where I had top surgery provided the final documents. So I then had three types of paperwork with my new name on it.
Shortly after I began compiling paperwork to change my name, the US State Department changed their rules about gender markers on passports. The letter I was planning on asking my surgeon for would no longer count. However, a letter from my GP would suffice. I asked him to write one saying I had completed transition, as then I could get a full-term passport instead of a two-year one. And, indeed, under the terms of the new regulations, I have completed transition. I find this to be entirely reasonable, as nobody would mistake me for a woman if they saw me or talked to me, and the state of the parts of me covered by clothes is nobody's business but those in whose company I choose to disrobe.
My GP wrote the letter and charged me £25 for it. When the surgery’s receptionist asked me to pay, I was initially surprised, but then went to a bank and got some cash. GP practices are privately owned and the money they get from the NHS doesn’t cover things like letters to foreign governments. If this had been a problem for me, I think I could have gotten the Charing X psychiatrists to write a letter for me. That would also be acceptable to the embassy, but it’s over a month until I even see them again.
Armed with all of this paperwork, I made an appointment to go to the embassy, as you can't just turn up. I began to fill out the application forms. They wanted to know if I had ever been married, and what was the date of that, and what was the date of my divorce. The divorce date, I remember. The date of the marriage? Not so much. I went back reading through old blog posts, seeing if I could figure it out. The ceremony was on one day, but a paperwork snafu meant we got the license on the following Monday . . . finally, I made a guess. And then I remembered the Defense of Marriage Act.
Every country in the world considers me to be legally divorced, except for my home country, where they hold that I was never married at all. When I was in Holland, I had to get a certificate to say I wasn’t currently married, so I have US Government-issued documentation that says I’m divorced, but they don’t actually back that statement. The state of California, however, also considers me to be divorced, as do five other states. It’s a strange sort of feeling, the one of non-recognition. The marriage may not have felt real, but the divorce certainly did. All those documents and lawyers fees and bitter acrimony never actually happened according to the great country of my birth. Obama said something about overturning that law, the one that says that years of my life weren’t real, but he didn’t actually mean it.

Today

The embassy makes people queue outside, on the pavement. Fortunately, the weather was sunny. I waited for a while with non-US citizens and then got into the correct queue. I knew one of the security guards from when I played in the gay band. She came over to chat and then came back to let me skip ahead of the queue. I appreciated the gesture and it made me a lot less nervous, actually. ID checks and pat downs make me nervous, for obvious reasons. She was cool. I did feel a bit guilty about queue-jumping though. I hadn’t brought my phone, although I could have and they would have held it for me. They took my USB stick and my Boris Bike fob.
The architecture of the US embassy is somewhat reminiscent of the Lincoln Center in New York. It's sort of brutalist concrete, but with a lot of decorative corrugation. Inside the waiting room, there are gold-coloured metal columns. I wish I'd got a picture of it, but, of course, cameras are not allowed. I also wish I'd gotten a picture of the sign that said to beware of terrorist bombs. "If you suspect something, call 999," it said in small print at the bottom. I sat in one of the several rows of chairs and waited to be called.
Over the course of the last week or so, I've had an email correspondence with an embassy worker who was not very informed about the new State Department rules. She seemed to think a surgery letter was still required. She asked for a "background statement," something left undefined. When I asked for more information, I was instructed to ask for a particular staff member when I got there. So when I was called to the window, I asked to speak with the staff member with whom I had been having emails. She was a posh woman, apparently the manager. She told me to go to a window in a private chamber – a room with a door, though the walls don't go all the way up to the ceiling, creating an illusion of privacy where none exists.
She started to ask about my medical history. Had I fully transitioned surgically? It seemed as if she was trying to be delicate while enquiring about the state of my genitals. In fact, the State Department has no right to any information beyond the letter written by my GP. What operations I have or have not had are none of their business. I explained that under the new rules, the letter from my GP should suffice. She said that the letter was only good for a two year passport.
The hassle of a two year passport isn’t just that I would need to return to the embassy every 18 months. The UK will not issue me a visa that extends longer than my passport, so my plans to get a two year work visa after graduating would become much more of a bother. Also, I would have to produce a new GP letter every time and appear in person with it. Otherwise, I would revert back to my initial state.
I argued that this is not what the new rules said and I was certainly not going to disclose information to which she had no right. I had the distinct impression that she just wanted me to declare that I’d had surgery, not actually provide documentation of it. I refused to budge, but the strength of my principles was somewhat undermined by the fact that one of the documents I brought to demonstrate that I’d changed my name specifically mentioned “mastectomy for transgender.” She called me up later to say that document would do nicely, but they were going to have to write to the States for guidance on the new rules. I hope they are provided with ample clarification. Indeed, plastic surgeons are not even on the list of doctors allowed to provide documentation.
The first time I spoke with her, she noted that she had seen a lot of this sort of thing before, and I certainly wouldn’t be the last. Perhaps she was trying to appear professional, but it was more of a knowingness, like she was an anthropologist and I was an exotic subject of study, about which she might one day write a book. So despite, apparently, being entirely successful in my mission to change my name and gender on my passport, I was still fairly wound up when I left. I rode a Boris Bike home, pedalling away my annoyance. Mostly.

Politics and FOSS: Open to who and when?

I was recently doing some reading towards writing a paper that touched on the politics and philosophy of FOSS. That stands for “Free and Open Source Software.” That doesn’t mean free as in “no charge,” although that is often also true. It’s “Free as in Freedom,” according to those that follow Stallman [1]. FOSS software belongs to the community of people that use and write it.
It's about sharing. You give away what you write and you give away your knowledge of how to use it. Communities of users form, giving each other support and helping each other with the software. It's very easy to see this in idealist terms, and I wanted to write a paper about how progressive we all were. I was reading a paper by Olga Goriunova that analysed FOSS from a Marxist perspective. And then again from a feminist perspective. And then again from a Deleuzian point of view. [2] FOSS began to look like a Rorschach blot of politics.
Indeed, when some of the major players in the movement, such as Raymond, are right-libertarians [3, 4] and others are anti-capitalist, then obviously it resists this kind of simple political reading. This was at the back of my mind this afternoon when, looking for distraction, I logged into the Greater London Linux Users Group channel on Freenode.
Freenode is an IRC server, so this was a real-time chat, established so that people in the London area can talk about Linux; maybe network or get some help with a problem. Instead, I wandered into a conversation where the participants were bemoaning the "wrong" kind of people having babies, by which they meant poor people. One of the participants was talking about how a particular 14 year old girl, known to him personally, was a "slapper." (*) The conversation turned to how forced sterilisation of poor people would be a good idea. "[W]e keep coming to this conclusion, birth controll [sic] in the water in all council estates" suggested a user called hali. [5]
Meanwhile, bastubis, a woman from a working class background, logged in and became upset about the content of the conversation. Bastubis noted she "lived on a council estate as a child." A few lines later hali said, "the fact the chavs(**) get pregnant in the first place is usually a misstake [sic]." Bastubis explained that she was "a chav with an education – you're talking about me." Another user, dick_turpin, chimed in shortly thereafter with, "Enforced sterilisation I say." Bastubis quickly became frustrated and left. [5]
Dick_turpin cheered her departure with a “Huzzah!”, while hali celebrated with a “muahaha.” [5]
Their exercise of privilege to create a hostile environment for some users is clearly not accidental. If they were unconsciously expressing privilege, that would not have been followed with a "huzzah." Given that the conversation started with both gender- and class-based slurs, it seems likely that their desire to exclude bastubis from the group had roots in both class and gender. As such, their intention was specifically to replicate privilege found offline, instituting it online to create a homogeneous environment.
That privilege is expressed online as much as offline should not be surprising. FOSS communities are diverse, organised around geographical regions and/or interests and sometimes identity, such as women or LGBT users. Therefore, some groups will tend to allow unchecked privilege, while others will tend to frown upon it or specifically disallow it. Simon Yuill writes that OpenLab, another London-based community centred on FOSS, specifically grew out of a progressive squatter-based movement. Hacklabs such as OpenLab "have provided a clear political and ethical orientation in contrast to the somewhat confused and contradictory political and social perspectives articulated in the other communities and contexts of the wider FOSS world." [6] When OpenLab's mailing list recently had a discussion about how to get more women involved, there were certainly moments of frustration, but the apparent intention was inclusion.
How is it that FOSS can create some communities that would seem to be progressive and others that would seem to want to preserve privilege over any other goal? I think my error is looking at it as a political movement. A lot of its spokespeople speak of it in a political manner, but given the widely divergent viewpoints, there is no inherent or unifying left or right ideology of FOSS. It’s infrastructure. It has value to many groups of people because it avoids duplication of effort and grants them access to resources. For some groups, the fact that it also grants resources to other users is a necessary sacrifice – one that can be mitigated through hostility to undesirable participants. For other groups, the sharing is a main focal point. FOSS, itself, is political like music is political, with as many readings and intentions.

* A derogatory slang term for sexually promiscuous women.
** A derogatory slang term for poor people.

[1] Free as in Freedom
[2] Goriunova, Olga, “Autocreativity: The Operation of Codes of Freedom in Art and Culture”. FLOSS+Art (eBook) Ed. Aymeric Mansoux and Marloes de Valk. 2008.
[3] Raymond, Eric S, "I am an active Libertarian." 2003. Accessed 18 August 2010.
[4] Raymond, Eric S, "Whatever happened to civil rights?" 2003. Accessed 18 August 2010.
[5] #gllug on Freenode, 18 August 2010. IRC log.
[6] Yuill, Simon, “All Problems of Notation Will be Solved by the Masses: Free Open Form Performance, Free/Libre Open Source Software, and Distributive Practice”. FLOSS+Art (eBook) Ed. Aymeric Mansoux and Marloes de Valk. 2008.