Post-Post Digital Failure: Tuba as HCI Glitch

Kim Cascone wrote in his seminal paper, The Aesthetics of Failure, that “the medium is no longer the message in glitch music: the tool has become the message.” He identifies glitch music as emerging “from the ‘failure’ of digital technology . . . bugs, application errors, system crashes, clipping, aliasing, distortion, quantization noise, and even the noise floor of computer sound cards are the raw materials . . .” He terms this music “post-digital”[1].

Vanhanen also describes the origins of glitch as “the unintentional sounds of a supposedly silent medium,” “the result of a two-way relationship between hardware/software and the producer (mis)using it.”[2] Shelly Knotts writes about live-code-driven failure as an “inevitability of imperfection.” Error does not generally result from intentional misuse but is a constant possibility, one which extends to our entire environment. Live code errors make the audience and practitioners aware of the “imperfection of technical systems” even as these systems surround us and we rely on them, making the form potentially a critique of liberal technocracy and of capitalism more generally. [3]

However, all of these writers place the sound entirely within the machine. Knotts notes that while “a jazz musician [might] suffer a broken string or reed in a performance, it’s unlikely their entire instrument will collapse.” [3] George Lewis, however, draws associations between 1980s computer music, which often did involve setting up systems in front of the audience, and (free / jazz) improvisation.

Lewis wrote that the computer music scene of the 1980s in the San Francisco Bay Area “was also widely viewed as providing possibilities for itinerant social formations that could challenge institutional authority and power.” This music was played in a band setting and formed an improvisational practice “from a collaborative rather than an instrumental standpoint, negotiating with their machines rather than fully controlling them.” (Unfortunately, this exciting beginning led not only to live coding, but also to ubiquitous computing, which lives on as IoT.)[4]

Lewis himself also experimented with the borders of failure with systems coupled with his trombone. When I was at Sonology in 2005-6, Clarence Barlow described a theatre piece that I believe he attributed to Lewis. Lewis was on stage with his trombone and an effects box, but the effects box was not working. A tech came to assist, then another, then another, until a team of engineers disassembled the entire effects unit. While this was happening, Lewis sat down to eat his dinner. The piece ended when the box was completely disassembled. Unfortunately, I can’t find a reference to this piece, although searching for it did lead me to the writing mentioned above.

Lewis’s piece would more traditionally be classed as theatre rather than glitch. At a stretch, one could claim the effects box is misused. The technicians collaborate on “fixing” the box, and the piece becomes a ritual of debugging, a communal, music-not-making practice. In taking a dinner break, Lewis reflects on how tech outages cause work stoppages. In his piece, the tech “failure” causes the entire piece to “collapse”. His trombone is silent.

Domifare is also a brass piece that intentionally incorporates technical error. But, unlike the “post-digital” “glitch” pieces of 25 years ago, the errors don’t lie in mangled sound output, but rather in input failure. When the piece functions at all, commands are recognised around 20% of the time. Sometimes less. On Monday, it was a lot less: over 15 minutes, not a single command executed.

By placing the instrument as an input to the REPL loop, it queers the acoustic / digital binary and makes total failure audible. If 2000 was post-digital for Cascone, clearly 25 years later, with low bass, we are post again. Indeed, while my computer had no output, the input was constantly present. Although the system uses the logics of live code, especially ixilang, functionally it bears a lot of similarity to responsive systems, such as Voyager by Lewis[5] or Diamond Curtain Wall by Anthony Braxton[6]. Arguably, it’s a simpler system because the results are deterministic – or are when the system works.

Several years ago, I played a free improv set with others at The Luggage Store Gallery in San Francisco. I brought my laptop and my tuba, with the intention to switch between them part way through the set. As we started, my computer would only make static. Something was wildly wrong and after a few minutes of trying to fix it, I switched to tuba for the remainder of the set. Speaking to others afterwards, Matt Davignon said that he thought the static had been on purpose, and he thought I was “one of those computer musicians.” My old double bass teacher, Damon Smith, put it in a positive, enthusiastic light. “It doesn’t matter, because you have a tuba!” He went on “all computer musicians should have tubas with them!”

When I announced I was giving up on Domifare, Evan Rascob, echoing Smith, called out that I should have just played tuba for a few minutes. I should have.

Although it directly contradicts the TopLap Manifesto regarding backups [7], perhaps Smith is right. All computer musicians should have tubas.

Works Cited

[1] K. Cascone, ‘The Aesthetics of Failure: “Post-Digital” Tendencies in Contemporary Computer Music’, Comput. Music J., vol. 24, no. 4, pp. 12–18, Dec. 2000, doi: 10.1162/014892600559489.

[2] J. Vanhanen, ‘Virtual Sound: Examining Glitch and Production’, Contemp. Music Rev., vol. 22, no. 4, pp. 45–52, Dec. 2003, doi: 10.1080/0749446032000156946.

[3] S. Knotts, ‘Live Coding and Failure’, in The Aesthetics of Imperfection in Music and the Arts: Spontaneity, Flaws and the Unfinished, A. Hamilton and L. Pearson, Eds., London: Bloomsbury Academic, 2020, pp. 189–201.

[4] G. Lewis, ‘From Network Bands to Ubiquitous Computing: Rich Gold and the Social Aesthetics of Interactivity’, in Improvisation and Social Aesthetics, G. Born, E. Lewis, and W. Straw, Eds., Improvisation, Community, and Social Practice series, Durham: Duke University Press, 2017, pp. 91–109.

[5] G. Lewis, Voyager. 1985.

[6] A. Braxton, Diamond Curtain Wall. 2005.

[7] ‘ManifestoDraft – Toplap’. Accessed: Jul. 16, 2025. [Online]. Available: https://toplap.org/wiki/ManifestoDraft

Domifare at Folklore

Evan Rascob (aka BITPRINT) has been organising regular live code gigs at Folklore in Hackney. I played Domifare last night . . . sort of.

I’ve blogged before about pitch recognition being flaky. And it is, but usually within the first three minutes or so, the SuperCollider autocorrelation UGen does actually recognise the pitches and the piece runs.

Not last night. Instead, I spent 15 minutes playing the same four-note phrase over and over and over again, in front of an audience.

What went wrong

  • Normally, when I play this, I have the mic right down in the bell, and it was up slightly higher this time, which may have caused problems.
  • When I practice this, I lip the pitch up or down slightly and this often works. This level of subtlety and control is extremely difficult after several minutes of failure on stage. Instead, my playing got messier and messier over the course of the set.
  • While getting ready to play, I couldn’t decide whether to use my old mouthpiece or my new one, which is slightly more difficult to control but offers greater freedom. It didn’t seem to make a difference when I was practising, so I went for the newer, freer one, which might have been a mistake.
  • My sound card’s output was also extremely low, which is a problem I’ve had before with PipeWire. This was concerning during the tech setup, but turned out not to be an issue during the performance.
  • My laptop sat on a stool in front of me, at a distance that did not work at all with my glasses. The screen was so blurry I couldn’t properly tell what notes were arriving.

How to fix it

  • If I need consistent mic placement that’s down in the bell, I should make a mount that goes into the bell. This would be a cork-covered ring with spokes and a mic suspended in the middle.
  • Flucoma would allow me to train a neural net to recognise a series of pitches as a cue. Because the tuba spectrum is weird and the mic is most sensitive at the weird points, I would probably have to do the training on stage. Would this be more tedious than 15 minutes of failed command input? No.
  • Practising this piece is essentially training myself to be decipherable to the algorithm, which is subtly different from normal practice goals or technique. I did not get as much practice as I would have liked. I spent a lot of time building lip strength, with the idea that it would make my notes clearer, but not as much time getting feedback from the autocorrelation algorithm. It may be that more practice with the program would have helped. Or, if the algorithm was confused by background noise or mic placement, perhaps it would have made no difference whatsoever. (A small monitoring sketch follows this list.)
  • Taking the bus with a tuba, a laptop, an audio card, cables, a mic, a mic stand and so forth is already a bit much, but it may be the case that I also need a laptop stand so I can ensure my computer is at a height and location where I can see it. Also, my old reading glasses require more and more distance; maybe a laptop on a stool is not a good use for them.
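Since getting feedback from the tracker is its own practice goal, something like the following monitoring patch could help me rehearse against the algorithm rather than against my own ears. This is a sketch, not the actual Domifare code; the names, frequency range and threshold are placeholders.

(
s.waitForBoot({
	// post whatever the autocorrelation tracker thinks I'm playing, as a rounded MIDI note
	SynthDef(\practice_monitor, { arg in = 0, ampThresh = 0.02;
		var sig, freq, hasFreq;
		sig = SoundIn.ar(in);
		# freq, hasFreq = Pitch.kr(sig, minFreq: 30, maxFreq: 500, ampThreshold: ampThresh);
		SendTrig.kr(Impulse.kr(4) * hasFreq, 0, freq);
	}).add;

	OSCdef(\practice_monitor, { |msg|
		msg[3].cpsmidi.round(0.5).postln;  // detected frequency as a MIDI note number
	}, '/tr', s.addr);

	s.sync;
	Synth(\practice_monitor, [\in, 0]);
});
)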

How I dealt with everything

I think my stage presence was fine, actually, except for when I was giving up at the end. I should have launched a few minutes of solo improv starting from and around the cue phrase. I’m going to practice this a bit, not that I expect the piece to fail like this again.

This was not my first performance of this piece. It went fine when I played it in Austria, 3 years ago.

Well, at least the failure of that piece wasn’t all that went wrong

Shelly Knotts and I were also meant to play some MOO, but discovered during the sound check that most of it wasn’t working, so we cut it from the programme.

Audience Reactions

People were generally positive. Multiple people used the word “futility” but with a positive intention. Which goes to show you can’t trust nerds.

To do

  • Incorporate Flucoma
  • Play this on Serpent because it’s more portable and I really do have more freedom of pitch.
Video by Shelly Knotts

Domifare GUI improvements

A SuperCollider GUI window with buttons across the top, a server meter, a large text area, a list to the side, an input bar, a bass clef with nothing following it, four sliders controlling thresholds, and two images containing four bass clef systems between them, labelled “Record Loop”, “Stop”, “Shake” and “Unshake”
The GUI for Domifare

Domifare is back under development because I will be performing with it on Monday evening in London, at Folklore. https://lu.ma/2rkkzmcz

All the improvements thus far have been to the GUI. It’s come a long way, but there are still some persistent bugs: you must resize the window to get the GUI to lay out correctly. The new SuperCollider sclang version is a release candidate right now, so I’m holding off on fixing everything, as I hope the many conflicting GUI methods will be better harmonised in that version.

The project is relying on BiLETools for several of the widgets because I was having problems with EZSliders. Again, after the major version update, I plan to remove this dependency.

The Key class in TuningLib is no longer required. It was always overkill, and some of its functionality has broken in the last three years.

In general, the pitch tracking is working better than it did three years ago, especially the autocorrelation, although there is still a high error rate. The built in “cheat sheet” makes this much easier to use, although I fear it lets the audience in a bit too much on how simplistic this whole setup is.

The notation is all generated via MuseScore, saved as SVG and then edited in Inkscape. The version of SC on my computer won’t open SVG files (or at least not Inkscape SVGs), so these are exported to PNG. The lone F clef in the middle of the image adds notes to the right as it recognises them. The language parser does not track octaves, so it displays noteheads inside the lower part of the staff.

Adding the octave tracking is only partially fiddly, but doing this properly would entail turning the new class CleffView into a proper notation layout class. That has obvious utility, but it’s more geometry than I want to get into right now.

I may upgrade the variable list from being a BtListView / EZListView to being a stack of ObjectGUIs for the DomifareLoop class. This would be valuable because I could indicate whether they were playing or had been shaken. This could also include their names spelled via CleffViews. Or I could just learn to play directly from solfège.

Another possible future improvement could be the inclusion of Solresol character glyphs, which would have a fun alien vibe. The problems are that (last I looked) there is not a nice Linux font that supports these, and that it would require a time investment to be able to play variable names listed that way.

I’ve also misspelled “clef” as “cleff” but a find and replace is giving me crash errors, so idk. Shrug. Sad face. You can see the latest version on GitHub, although this will move to Codeberg hopefully soon.

There’s a very obvious case for integrating Flucoma into this project to recognise gestures. This would also entail building a training interface. The informational webpage for the SuperCollider release candidate specifically mentions that Flucoma does not work with it, so this is also deferred until that gets fixed. I don’t want to have to hold off upgrading SuperCollider, so I cannot create a situation where upgrading breaks this project.

Some of the London live coders (Lu) have expressed enthusiasm for the idea of doing the training as part of the performance. That has precedent in cybernetic pieces like Hornpipe by Gordon Mumma and remains a very good idea, but still not for Monday.

If this sounds cool and you can’t come on Monday, maybe you could let me know of another gigging opportunity in your town that I could play at? I’d like to take this show on the road!

Laptop and Tuba

This post is taken from the lightning talk I gave at AMRO.

Abstract

I have decided to try to solve a problem that I’m sure we’ve all had – it’s very difficult to play a tuba and program a computer at the same time. A tuba can be played one-handed but the form factor makes typing difficult. Of course, it’s also possible to make a tuba into an augmented instrument, but most players can only really cope with two sensors and it’s hard to attach them without changing the acoustics of the instrument.

The solution to this classic conundrum is to unplug the keyboard and ditch the sensors. Use the tuba itself to input code.

Languages

Constructed languages are human languages that were intentionally invented rather than developing via the normal evolutionary processes. One of the most famous constructed languages is Esperanto, but modern Hebrew is also a conlang. One of the early European conlangs is Solresol, invented in 1827 by François Sudre. This is a “whistling language” in that its syllables are all musical pitches. They can be expressed as notes, numbers or via solfège.
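As an illustration (this is not the Domifare parser, and the key and octave choices are arbitrary), the syllable-to-pitch idea fits in a few lines of SuperCollider:

(
// Solresol syllables as scale degrees, then as MIDI notes in C major around middle C
var syllables = [\do, \re, \mi, \fa, \sol, \la, \si];
var word = [\do, \mi, \fa, \re];   // "domifare"
var degrees = word.collect { |syl| syllables.indexOf(syl) };
var notes = degrees.collect { |d| Scale.major.degrees[d] + 60 };
notes.postln;   // -> [60, 64, 65, 62]
)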

The “universal languages” of the 19th century were invented to allow different people to speak to each other, but before that, some philosophers had also invented languages to try to remove ambiguity from human speech. These attempts were not successful, but in the 20th century the need to invent unambiguous language re-emerged in computer languages. Programming languages are based on human languages, most commonly English, although many exceptions exist, including Algol, which was always multilingual.

Domifare

I decided to build a programming language out of Solresol, as it’s already highly systematised and has an existing vocabulary I can use. This language, Domifare, is a live coding language strongly influenced by ixi lang, which is also written in SuperCollider. Statements are entered by playing tuba into a microphone. These can create and modify objects, all of which are loops.

Creating an object causes the interpreter to start recording immediately. The recording starts to play back as a loop as soon as the recording is complete. Loops can be started, stopped or “shaken”. The loop object contains a list of note onsets, so when it’s shaken, the notes played are re-ordered randomly. A future version may use the onsets to play synthesised drum sounds for percussion loops.
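A minimal sketch of the “shake” idea, assuming a recorded mono buffer and a list of onset times in seconds; the SynthDef and function names here are illustrative, not the actual DomifareLoop implementation:

(
// play the recorded slices back in a scrambled order, one slice per step
SynthDef(\play_slice, { |out = 0, bufnum = 0, startPos = 0, dur = 0.5|
	var env = EnvGen.kr(Env.linen(0.01, dur, 0.05), doneAction: 2);
	var sig = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum),
		startPos: startPos * SampleRate.ir);   // onset time in seconds -> frames (buffer assumed at server rate)
	Out.ar(out, sig * env);
}).add;

~shake = { |buf, onsets, dur = 0.5|
	Pbind(
		\instrument, \play_slice,
		\bufnum, buf,
		\startPos, Pseq(onsets.scramble, 1),   // re-order the note onsets randomly
		\dur, dur
	).play;
};
)

Calling something like ~shake.(b, [0, 0.5, 1.2, 1.9]) would then play those four slices back in a random order.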

Pitch Detection

Entering code relies on pitch tracking. This is a notoriously error-prone process. Human voices and brass instruments are especially difficult to track because of their overtone content. That is to say, these sounds are extremely rich and have resonances that can confuse pitch trackers. This is especially complicated for the tuba in the low register, because the overtones may be significantly louder than the fundamental frequency. This is not a problem for human listeners: our brains can hear the higher frequencies in the sound and use them to identify the fundamental even if it is absent or obscured by another sound. For example, if a loud train partially obscures a cello, a listener can still tell what note was played. This also works if the fundamental frequency is lower than humans can physically hear! There are tubists who can play notes below the range of human hearing, but which people perceive through the overtones. This is fantastic for people, but somewhat challenging for most pitch detection algorithms.

I included two pitch detection algorithms: one is a time-domain system I’ve blogged about previously; the other is built into SuperCollider and uses a technique called autocorrelation. Much to my surprise, the autocorrelation was the more reliable, although it still makes mistakes the majority of the time.

Other possibilities for pitch detection might include tightly tuned bandpass filters. This is the technique used by David Behrman for his piece On the Other Ocean, and it was suggested by my dad (who, I’ve recently learned, built electronic musical instruments in the 1960s or 70s!!). Experimentation is required to see if this would work.
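For what it’s worth, a rough sketch of the filter-bank idea might look like this, with one narrow band per expected command pitch; the frequencies, bandwidth and threshold are guesses that would need tuning to the instrument and the room:

(
// crude detector: a bank of narrow bandpass filters, one per expected pitch,
// each reporting when the energy in its band crosses a threshold
SynthDef(\bpf_bank, { |in = 0, thresh = 0.05|
	var sig = SoundIn.ar(in);
	var freqs = [58.27, 65.41, 73.42, 87.31];   // Bb1, C2, D2, F2, for example
	freqs.do { |f, i|
		var amp = Amplitude.kr(BPF.ar(sig, f, 0.05));   // rq 0.05 -> a very narrow band
		SendReply.kr(Trig.kr(amp > thresh, 0.2), '/band_hit', [i, f]);
	};
}).add;

OSCdef(\band_hit, { |msg|
	"band % (% Hz)".format(msg[3], msg[4]).postln;
}, '/band_hit');

// then: Synth(\bpf_bank);
)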

AI

Another possible technique likely to be more reliable is AI. I anticipate this could potentially correctly identify commands more often than not, which would substantially change the experience of performance. Experimentation is needed to see if this would improve the piece or not. Use of this technique would also require pre-training variable names, so a player would have to draw on a set of pre-existing names rather than deciding names on the fly. However, in performance, I’ve had a hard time deciding on variable names on-the-fly anyway and have ended up with random strings.

Learning to play this piece already involves a neural learning process, but a physical one in my brain, as I practice and internalise the methods of the DomifareLoop class. It’s already a good idea for me to pre-decide some variable names and practice them so I have them ready. My current experience of performance is that I’m surprised when a command is recognised, play something weird for the variable name, and am caught unawares again when the loop immediately begins recording. I think this experience would be improved for both the performer and the listener with more preparation.
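Short of a neural network, even a toy nearest-template matcher illustrates the pre-training constraint: each pre-decided variable name gets a stored pitch contour, and an incoming sequence is assigned to whichever template it is closest to. The contours below are hypothetical, though the names come from the language spec:

(
// toy classifier: match a detected pitch sequence against pre-trained
// variable-name templates by total distance in semitones
~templates = (
	solfasire: [67, 65, 71, 62],   // sol fa si re
	soldosifa: [67, 60, 71, 65]    // sol do si fa
);

~classify = { |notes|
	var best, bestDist = inf;
	~templates.keysValuesDo { |name, tmpl|
		var dist;
		if(tmpl.size == notes.size) {
			dist = (tmpl - notes).abs.sum;
			if(dist < bestDist) { bestDist = dist; best = name };
		};
	};
	best
};

~classify.([67, 66, 71, 62]).postln;   // -> solfasire, despite the mistracked second note
)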

Performance Practice

The theme for AMRO, where this piece premiered, was “debug”, so I included both pitch detection algorithms and left space to switch between them and adjust parameters, instead of launching with the optimal setup. The performance was in Stadtwerkstatt, which is a clubby space, and this nuance didn’t seem to come across. It would probably not be compelling for most audiences.

Audience feedback was entirely positive but this is a very friendly crowd, so negative feedback would not be within the community norms. Constructive criticism also may not be offered.

My plan for this piece is to perform it several more times and then record it as part of an album tentatively titled “Laptop and Tuba” which would come out in 2023 on the Other Minds record label. If you would like to book me, please get in touch. I am hoping that there is a recording of the premiere.

It works!

After many very long days, my project Domifare is working. For me. It won’t work for you, because there is a bug in TuningLib. I have raised an issue, which the package maintainer will get to shortly. The package maintainer, who is me, will fix it shortly. When I get back from Austria. I need to test my fix properly.

Only a subset of the specified commands has been implemented, but I can record a loop and re-order the playback of a loop based on detected onsets. Hypothetically, I can also start and stop loops. In practice, pitch detection is terrible and the language is barely usable. Annoyingly, the utility of it depends on how good my tuba playing sounds.

If I want to use this as an actual tool, the way forward is playing the key phrases in as training data to an AI thing.

While writing this project, I raised three issues with the SuperCollider project over documentation and one issue with the LinuxExternals Quark over PipeWire. That will turn into a merge request. I might update the documentation for it.

If you want to hear this thing in progress, I’ll be using it on Friday. You can turn up in person to Linz, Austria or tune into the live stream. This is part of AMRO, who have a helpful schedule.

I feel like a zombie and will say something more coherent later.

Midosoldo

I put in a bid to play Domifare at AMRO, knowing it was in no state to perform, but also knowing that nothing motivates like a deadline. I thought it was likely to be accepted, so I planned to start working on it during the break between spring and summer terms.

But then I got covid and felt terrible for weeks, and also got brain fog which, to be honest, has not completely dissipated. I mean, it’s hard to tell. How could I possibly have a bassline on my mental state? I do know that my sense of taste is still messed up and that if I exercise a lot I feel ill the next day, so let’s say I’m not at 100% mentally. It could be all in my head, but what difference would that make?

I wanted to finish my marking before dedicating all my time to this. I have not finished my marking, but now both are an emergency. Indeed, the list of things I have not done is kilometres long. My tuba needs a service. I haven’t played it for months and my lips are completely unfit.

This is an overly-honest research update. The subject line is the Solresol word for “fear.”

This is the state of the language:

Domifare Notes
Domifare Performance Notes

The “language” has always been conceived of as a way of defining loops. So I have some syntax for recording loops as an audio recording or as a series of onsets, the ability to “shake” an onset loop, the ability to schedule shakes, and the ability to start and stop loops. These are all a series of short musical licks I should ideally memorise but at least be able to play without hesitation or split notes.

Meanwhile, the language currently has the ability to read and receive notes, which is necessarily flaky and a scaffold on which to hang the rest of the operations …. and a GUI to adjust thresholds, because that’s necessary while playing… and that’s kind of it.

This coming weekend is a four-day one, which is actually a disaster because it means I can’t work during it.

Writing out what I actually have to do makes it sound fully achievable, but it will take longer than I think it will. The GUI took all of yesterday. If I spend part of every day programming and part of every day practising, I should get there. Hopefully.

I haven’t bought my train tickets yet, but I really don’t want to drop out.

I’ve got 2 weeks.

Domifare back under active development!

I’m very excited to be submitting a proposal to do a performance in the tuba-entered live coding language Domifare.

I’ve been wanting to pick this back up for a while and it seems like the main thing that motivates me is a deadline, so now I’ve got a deadline for version 1.0.

The initial specification of the language is quite modest to implement and my teaching term ends next week, so I’m confident this will be playable by the time the gig arrives. It’s always going to be chaotic because tuba pitch tracking, but it will be a joyful chaos!

More here as I get to active developing.

Domifare – still a rough draft

Today’s diff. Everything compiles, but most things aren’t tested!

Today, I wrote the functions for most of the language structures, except the scheduling ones. And the shaking one, which I’ve just realised I’ve forgotten to include! For the variable classes, I am borrowing a lot of code from DubInstrument in AlgoRLib. Probably, this project and that should be folded into one repo or one should directly depend on the other.

For DubInstruments, asking for a random pattern can trigger the creation of a new one, but not here, as all patterns are entered by the performer.

I need some GUI, including sliders for thresholds and a text print out of entered stuff. Ideally, there should also be a record light for when the performer is entering loops.

As far as shaking goes, that’s sort of straightforward for the rhythm lines. For the melody lines, I might be looking at BufferTool. It’s designed to split up spoken text rather than played notes and relies on pauses. Another possibility is to keep onset data for melodic loops and use it to decide where cuts should be. I’ll need another trigger that gets sent by the recording synthdef when its gate opens, so I can relate the onset timings to the position in the buffer.
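A rough sketch of that recording synth (not the actual Domifare code): one SendTrig fires the moment the gate opens, giving a time zero that the later onset triggers can be measured against.

(
SynthDef(\domifare_rec, { |gate = 0, in = 0, bufnum = 0|
	var sig = SoundIn.ar(in);
	var env = EnvGen.kr(Env.asr(0.01, 1, 0.01), gate, doneAction: 2);
	var chain = FFT(LocalBuf(2048), sig);
	var onset = Onsets.kr(chain, odftype: \phase);

	// trigger id 10: the gate has just opened, i.e. position 0 of the buffer
	SendTrig.kr(Trig.kr(gate, 0.01), 10, 1);
	// trigger id 11: an onset arrived while recording
	SendTrig.kr(onset, 11, 1);

	RecordBuf.ar(sig * env, bufnum, loop: 0);
}).add;
)

On the language side, noting Main.elapsedTime when trigger 10 arrives and subtracting it from the time of each trigger 11 would give the onset positions within the buffer.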

Tomorrow my wife is having a party and I’m doing marking on Monday and Tuesday, so it might be a few days before I can test properly.

Domifare Classes

Key still had some problems with transposition related to frequency quantisation, so those are (hopefully?) now sorted. I got rid of the gravity argument for freqToDegree because it doesn’t make sense, imo, and calculating it is a tiny bit of a faff.

For Domifare, as with a spoken language, breaks between commands are articulated as pauses, so I’ve added a DetectSilence ugen. The threshold will need to be connected to a fader to actually be useful, as the margin of background noise will vary massively based on environment.
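A minimal sketch of what that might look like, with the threshold exposed as a synth control that a GUI fader can set while playing (the names and ranges are placeholders, not the actual Domifare code):

(
SynthDef(\domifare_pause, { |in = 0, thresh = 0.01|
	var sig = SoundIn.ar(in);
	// frees this node once the input stays below thresh for half a second,
	// i.e. the pause at the end of a command
	DetectSilence.ar(sig, amp: thresh, time: 0.5, doneAction: 2);
}).add;
)

// later, from the fader's action, mapping 0..1 to a useful amplitude range:
// ~pauseSynth.set(\thresh, fader.value.linexp(0, 1, 0.001, 0.1));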

The next step is parsing. It’s been a loooong time since I’ve worried about how to do this… IxiLang uses a switch statement with string matching.

I need to draw out how this is going to work, since the repeat and the chance commands both take commands as arguments.

This might work as a statement data array:

[key, min_args, max_args, [types], function]

Types can be: \var, \number, \operator, \data. If it’s \operator, then the operator received will be the key for another statement, and the parser will listen for that too…. If it’s \data, that means start the function asap….

Also, since variables are actually loop holders, I’m going to need to make a class for them.

My original plan was to use pitch recognition to enter MIDI notes, but that’s not going to work, so some commands are now defunct.


(
var lang, vars, numbers;

vars = (solfasire: nil, solfasisol: nil, soldosifa: nil);
numbers = (redodo: 1, remimi: 2, refafa: 3, resolsol: 4, relala: 5, resisi: 6, mimido: 7, mimire: 8);

lang = (
	larelasi:  [\larelasi, 2, 2, [\var, \data], nil],        // func adds the name to the var array, runs the recorder
	dolamido:  [\dolamido, 0, 1, [\var], nil],               // func stops named loop or all loops
	domilado:  [\domilado, 0, 1, [\var], nil],               // func resumes named loop or all loops
	mifasol:   [\mifasol, 0, 1, [\var], nil],                // func raises an octave, which is probably impossible
	solfami:   [\solfami, 0, 1, [\var], nil],                // func lowers an octave - also impossible
	lamidore:  [\lamidore, 2, 2, [\var, \data], nil],        // add notes to existing loop
	dosolresi: [\dosolresi, 1, 1, [\var], nil],              // shake the loop, which is possible with recordings also...
	misisifa:  [\misisifa, 0, 1, [\var], nil],               // next rhythm
	fasisimi:  [\fasisimi, 0, 1, [\var], nil],               // previous rhythm
	misoldola: [\misoldola, 0, 1, [\var], nil],              // random rhythm
	refamido:  [\refamido, 0, 0, [], nil],                   // die
	sifala:    [\sifala, 2, 2, [\number, \operator], nil],   // repeat N times (1x/bar)
	larefami:  [\larefami, 2, 2, [\number, \operator], nil]  // X in 8 chance of doing the command
);
)
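A minimal sketch of how recognised words might be matched against this table, assuming the table above lives in an environment variable (~lang) and each word arrives as a symbol; nested \operator arguments, min_args and timeouts are left out:

(
~pending = nil;   // the statement currently collecting arguments
~args = [];

~parseToken = { |token|
	if(~pending.isNil) {
		~lang[token] !? { |entry|
			// zero-argument statements run immediately; others wait for arguments
			if(entry[2] == 0) { entry[4].value } { ~pending = entry; ~args = [] };
		};
	} {
		~args = ~args.add(token);
		if(~args.size >= ~pending[2]) {   // reached max_args
			~pending[4].value(*~args);     // run the statement's function
			~pending = nil;
		};
	};
};

// e.g. ~parseToken.(\dosolresi); ~parseToken.(\solfasire);   // "shake loop solfasire"
)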

After pondering this for a bit, I decided to write some classes, because that’s how I solve all my problems. I created a GitHub project. This is the state of the sole file today.

Tested with human voice

Testing showed that for human voice, the frequency domain onsets and pitch tracking were more accurate and faster than the time domain, which is good to know.

Once the frequency is detected, it needs to be mapped to a scale degree. I’ve added this functionality to the TuningLib quark. While doing this, I found the help file was confusing and badly laid out, and some of the names of flags on the quantisations were not helpful, so I fixed the helpfile, documented the new method, and renamed some of the flags (the old ones still work). And then I found it wasn’t handling octaves correctly – it assumed the octave ratio is always 2, which is not true for Bohlen-Pierce scales, or some scales derived by Dissonance Curve. So this was good, because that bug is now fixed after a mere 8 years of lurking there. HOWEVER, the more I think about it, the less I think this belongs in Key….
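To illustrate the fix rather than the TuningLib internals: degree quantisation has to divide by the scale’s own pseudo-octave ratio rather than a hard-coded 2, e.g. 3 for Bohlen-Pierce. A back-of-envelope version, ignoring wrap-around at the top of the pseudo-octave:

(
// map a frequency to the nearest scale degree, for scales whose "octave"
// is an arbitrary ratio (2 for ordinary scales, 3 for Bohlen-Pierce)
~freqToDegree = { |freq, tonic, ratios, pseudoOctave = 2|
	var octs = (freq / tonic).log / pseudoOctave.log;   // pseudo-octaves above the tonic
	var frac = octs.frac;                               // position within the pseudo-octave
	var cands = ratios.collect { |r| r.log / pseudoOctave.log };
	var diffs = cands.collect { |c| (c - frac).abs };
	diffs.indexOf(diffs.minItem)
};

// sanity check: one tritave above the tonic of an equal-tempered Bohlen-Pierce scale
// should come back as degree 0
~freqToDegree.(220 * 3, 220, (0..12).collect { |i| 3 ** (i/13) }, 3).postln;   // -> 0
)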

Pitch detection is flaky as hell, but onsets are solid, which is going to make the creation of melodic loops difficult, unless they actually just record the tuba and do stuff with it.

This is the code that’s working with my voice:


(
s.waitForBoot({

	s.meter;

	SynthDef(\domifare_input, { arg gate=0, in=0;

		var input, env, fft_pitch, onset, chain, hasfreq;

		input = SoundIn.ar(in, 1);
		env = EnvGen.kr(Env.asr, gate, doneAction: 2);

		chain = FFT(LocalBuf(2048), input);
		onset = Onsets.kr(chain, odftype: \phase); //odftype:\wphase);
		#fft_pitch, hasfreq = Pitch.kr(input);

		// send pitch
		SendTrig.kr(hasfreq, 2, fft_pitch);

		// send onsets
		SendTrig.kr(onset, 4, 1);

		//sin = SinOsc.ar(xings/2);
		//Out.ar(out, sin);

		// audio routing
		//Out.ar(out, input);

	}).add;

	k = Key(Scale.major); // A maj
	//k.change(6); // C maj - changing to c maj puts degree[0] to 6!

	b = [\Do, \Re, \Mi, \Fa, \So, \La, \Si];
	(scale: k.scale, note: k.scale.degrees[0]).play;

	OSCdef(\domifare_in, { |msg, time, addr, recvPort|
		var tag, node, id, value;

		#tag, node, id, value = msg;
		case
		{ id == 2 } {
			//value.postln;
			//c = k.freqToDegree(value.asFloat).postln;
			//b[c.asInt].postln;
			b[k.freqToDegree(value.asFloat)].postln;
		}
		{ id == 4 } { "4 freq dom onset".postln; }

	}, '/tr', s.addr);

	s.sync;

	a = Synth(\domifare_input, [\in, 0, \out, 3, \rmswindow, 50, \gate, 1, \thresh, 0.01]);

})
)