Source code for Christmawave

If you follow my podcast, you’ll note I put out a vaporwave-ish Christmas album, Christmawave. It’s a free download on Bandcamp, but I’m asking those who can afford it to donate to the Hackney Winter Night Shelter.

Almost all of the pieces are constructed using variations on one algorithm. I found hoary, old baby boomer Christmas favourites and then took the instrumental sections, which were sometimes just the intro or the outro. All of the songs were in 4/4 and most of the instrumental parts were cut into either 2 or 4 bar phrases. This means every sample is divisible by many powers of 2 and can be cut in half several times before it loses musical/rhythmic meaning.

I made these cuts, played the section of the sample with some stuttering and then went on to another section of the sample. This method requires some decision making:

  1. Which sample am I going to play?
  2. How many times am I going to divide it in half?
  3. Once it’s chopped into little (or not-so-little) pieces, which one of them am I going to play?
  4. What speed am I going to play that bit at?
  5. How much should it overlap whatever comes after?
  6. How long should I wait before going to the next thing (which might be a repetition of what I just did)?
  7. How many times should I repeat this thing?

All of the pieces answered these questions in slightly different ways. (Or very different ways, in the case of question 1!) Some of the structure of how I thought about these questions, and how I solved them, has to do with how the Pattern library works in SuperCollider.
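To give a sense of where this is going, each of those questions can map onto a key in a Pbind. This is only a sketch – the key names and value patterns are hypothetical stand-ins, not the album's actual source:

p = Pbind(
    \buf, Pfunc({ ~currentSample }),   // 1. which sample (placeholder lookup)
    \pow, Prand([0, 0, 0, 1, 2], inf), // 2. how many times to halve it
    \slice, Pwhite(0, 7, inf),         // 3. which piece to play
    \rate, Prand([0.5, 1, 2], inf),    // 4. playback speed
    \legato, Pwhite(0.9, 1.5, inf),    // 5. overlap with whatever comes next
    \dur, Prand([0.5, 1, 2], inf),     // 6. wait before the next thing
    \stutter, Pwhite(1, 4, inf)        // 7. repetitions, consumed by a custom player
);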

What sample am I going to play?

In almost every case, I switched samples based on how much time had passed since the start of the piece. I used Ptpar to start the different sections.
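Ptpar takes a flat list of [beats-to-wait, pattern] pairs, so a piece can be laid out as sections that enter at fixed times. A rough sketch (the Pbinds and ~sample names are placeholders):

Ptpar([
    0, Pbind(\buf, Pfunc({ ~introSample }), \dur, 1),    // first section starts immediately
    64, Pbind(\buf, Pfunc({ ~middleSample }), \dur, 1),  // second section enters at beat 64
    128, Pbind(\buf, Pfunc({ ~outroSample }), \dur, 0.5) // third section enters at beat 128
]).play;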

How many times am I going to divide it in half?

Another way of asking the question is, ‘what power of 2 am I going to use?’ I did this a few different ways. In most cases, I stuffed this into part of the event I called pow. Here are some ways I figured out what power of 2 to use:


\pow, Prand([0, 0, 0, 0, 0, 1, 1], inf)

Then, later on, I could go from that to powers of two:


\div, Pfunc({|evt|
2.pow(evt[\pow])
})

(Usually, I would compute the \div in a larger Pfunc that figures out more things.) The advantage of figuring out the power of 2, instead of just having a Prand full of 1, 2, 4, etc., is that it's harder to screw up. I don't need to worry about a stray 3 sneaking in, and, if a sample is longer, I can add some number to the \pow to make the \div bigger.
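For example, a sketch of that chain, with ~sampleDur standing in for the current sample's length in beats:

\pow, Prand([0, 0, 0, 0, 0, 1, 1], inf),
\div, Pfunc({ |evt| 2.pow(evt[\pow]) }),
\dur, Pfunc({ |evt| ~sampleDur / evt[\div] }), // one slice's worth of beats before the next event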

Another way I computed the \pow was using a Finite State Machine. This was completely overkill, but I’ll walk you through how it worked.

What I wanted was to have a possibility of a \pow being as small as 0 or as big as 8, but not to jump from one of those numbers to the other. Instead, I wanted a route going through intermediate numbers, in which it could potentially get to an 8 and have a path back to 0. I wanted a way for it to wander from one extreme to the other.

An FSM offers a way to give a path. This is what the code looks like from Funky (The Slow Jam):


\pow, Pfsm([
#[0], //start
2, #[3], //0
Prand([0, 0, 1], 1), #[1, 2], //1
Prand([0, 1, 2]), #[1, 2, 3], //2
Prand([1, 2]), #[3, 4], //3
Prand([0, 1, 2]), #[3, 5], //4
Prand([2, 3]), #[4, 3, 5, 6], //5
Prand([3, 4]), #[4, 3, 5, 7], //6
Prand([3, 4, 5]), #[4, 3, 5, 6] //7
], inf),

Pfsm is the pattern class that does finite state machines. It takes an array. The first item in the array is an array of the states it can start in. Next come pairs. Each pair is a state. The first pair is state 0. The second pair is state 1. The third pair is state 2, etc. The first item in a pair is the output. In the example above, the output of state 0 is 2 and the output of state 1 is Prand([0, 0, 1], 1).

The second item in the pair is an array of one or more integers. The numbers in the array are the states it can go to next. So with state 0, the array is ‘#[3]’, so it goes on to state 3. When it gets to state 3, it produces the output, which is Prand([1, 2]), and then looks where it can go next, which is ‘#[3, 4]’. That is, it can go to state 3 again, or it can go on to state 4.

I could draw a map of this (which would reveal that there is no path to states 1 and 2 – oops).

[Figure: a graph of the FSM described in the code above]
In the graph, you can see a lot of arrows leading back towards the low states and only a single path that climbs through all the states. Thus, it’s relatively unlikely to reach state 7.

Because Pfsm is just another pattern, I could add 1 or 2 to the output of it in the case of a particularly long sample and it would gracefully handle the maths.
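For example, a hypothetical fragment for a sample twice the usual length:

\pow, Pfsm([
    #[0],                   // start at state 0
    2, #[1],                // state 0 outputs 2, then goes to state 1
    Prand([0, 1]), #[0, 1]  // state 1 wanders
], inf) + 1,                // shift the whole walk up one power of 2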

Answers to the other questions will be forthcoming in following posts!

The Dead Shopping Mall Project

For various reasons, shopping malls across America (and in other places) are dying. Many have only a few shops left in them. A few, somehow, manage to thrive.

Many Gen-Xers spent their formative years in shopping malls, walking endlessly in circles. The experience of these malls was not only a visual panorama of their shiny flat surfaces, but also very much their acoustics. The stores separated themselves from the mall’s common space by using piped-in music, which informed shoppers that they were entering a different space. The music used was chosen to signal what demographic the store hoped to attract. Stores for teens played the top 40 that appealed to that age group.

Department stores and the mall itself played more neutral music, trying to signal a broader appeal. All of this was set in a space full of talking, foot traffic, and young people, set against very reflective surfaces. It was not just the music chosen, but the acoustics that differentiated spaces.

Smaller stores had nearby walls, dampened with hanging clothes or other wares, and thus their reflections were different from those of the more open mall.

As malls disappear, their unique sonic environment disappears also. It’s impossible to recapture what they were like when filled with people, unless you somehow entice people to come in. But the echoes of the sonic space and the dimensions of the architecture can be archived in a form that’s usable for people who would like to experience what being in such a place sounded like.

This usable archive can be made via a short audio file called an Impulse Response. It is a recording of the echoes of the space. Below, you can find instructions on one way to make such a recording.

These recordings should be made as a form of acoustic ecology and of memory, so the sounds can still be used even as the spaces themselves vanish.

How to take an impulse response with two mobile phones.

  1. You will need software for your computer: Audacity and FScape.
  2. You will need to decide which of the phones has the better microphone. Install a good-quality audio recorder app on it. This is your recorder-phone.
  3. Put this audio file onto the other phone. This is your player-phone.
  4. Before you head to the mall (or whenever you switch phones), take a calibration recording.
    1. Play the file from the player-phone while recording it with the recorder-phone. Hold the recorder-phone so its microphone is as close as possible to the player-phone’s speaker.
    2. Always record at the best possible settings. Try to use an uncompressed file format like WAV or AIFF.
    3. Make a note of which file is your calibration recording.
  5. Go to the mall and pick out what spaces you would like to take an IR in, then take them.
    1. Decide how far away you want the two phones to be from each other. If the phones are further apart, you get more of the sound of the space. If they’re closer together, the IR will be more intimate. If the space is noisy, it might be hard to record at further distances. (If you’re unsure or want options, you can do recordings at multiple distances.)
    2. Play the sound from the player-phone while you are recording with the recorder-phone. Always record at the best possible settings and try to use an uncompressed file format like WAV or AIFF.
    3. Make a note of which mall you’re at (e.g. ‘Valley Fair, San Jose, California’), where in the mall you are (e.g. ‘food court’) and an estimate of the distance (e.g. ‘2 meters’). Make a note of which recording you’re referring to (either by writing it down or editing the file name).
  6. When you get home, transfer all the recordings to your computer. If you have not already given them descriptive names (e.g. ‘ValleyFairFoodCourt2mRec.wav’), do so now.
  7. Open your calibration recording in Audacity, normalise it and reverse it. (These options can be found under the Effect menu.) Then export it as ‘calibration-rev.wav’. (Export is under the File menu.)
  8. Repeat the following process with all of the mall recordings you made:
    1. Open the recording in Audacity and normalise it. Export it with a descriptive name (e.g. ‘ValleyFairFoodCourt2mNrml.wav’).
    2. Open FScape. Under the ‘New Module’ menu, select ‘Spectral Domain’ > ‘Convolution’.
    3. For the input file, select the mall recording.
    4. For the ‘Impulse Response’, use the reversed calibration file, calibration-rev.wav.
    5. Give the output file a descriptive filename (e.g. ‘ValleyFairFoodCourt2mConv.wav’).
    6. Select Render.
    7. Open the output file in Audacity. In the middle of the file, there will be a loud part. Use your mouse to select just the loud part (zoom in to get the selection as tight as you can; if there’s a bit of lead-in and a bit of fade-out, get that too).
    8. Hit the ‘Z’ key to snap the edges of your selection to zero crossings, which avoids clicks.
    9. Under the File menu, select ‘Export Selection’. This is the finished impulse response! Give it a descriptive file name (‘ValleyFairFoodCourt2mIR.wav’).
    10. You can erase your source files when you’re done if you want. Be sure not to erase the reversed calibration file until you’ve created all the finished IRs.
  9. Now that you’ve got the IRs, please send them to me! Or, better, post them to archive.org and send me the links.
  10. You can use them on recorded audio to make it sound as if it took place in the mall! If you have reverb software, you can load the IR you just made into it, or you can use FScape: do a convolution with your source file, using the IR you made as the Impulse Response. The output will have the sonic characteristics of the mall!
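If you happen to use SuperCollider, applying one of these IRs as a convolution reverb only takes a few lines. A sketch, assuming a file name from the steps above (adjust the path to taste):

(
s.waitForBoot({
    var fftsize = 2048;
    // load the finished impulse response
    ~ir = Buffer.read(s, "ValleyFairFoodCourt2mIR.wav");
    s.sync;
    // PartConv wants the IR pre-chopped into FFT-sized partitions
    ~spectrum = Buffer.alloc(s, PartConv.calcBufSize(fftsize, ~ir));
    ~spectrum.preparePartConv(~ir, fftsize);
    s.sync;
    // run the microphone input through the mall
    { PartConv.ar(SoundIn.ar(0), fftsize, ~spectrum, 0.2) }.play;
});
)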

Teach about trans people

In our current political climate, I think it’s important for teachers and academics in every discipline to take a stand in favour of diversity and inclusion. One important way we can do that is to highlight contributions in our field by members of minority groups. One way into this in any discipline is by including some history. So in computer science, teachers could mention that Alan Turing was gay and that Grace Hopper, inventor of the compiler (and, indeed, of the idea of compiling), was a woman.

When teaching music and presenting a piece of music to students, I give a few biographical notes about the composer which are mostly related to their musical background and influences. This is also a good time to mention any minority status. This is important because students will otherwise tend to assume that everyone is a cis, straight, white man. It can seem a bit weird to mention that someone is gay, for example, without other context. There are a few ways to address this.

If a person’s minority status is known to have affected their opportunities, then this is a good way to bring it up. To take an example, Milton Babbitt was going to be the first director of The Columbia-Princeton Electronic Music Center. When they realised he was Jewish, they rescinded the offer and hired someone else. After a year, they came back to Babbitt and re-offered him the job. It’s good to tell students about this and condemn it, to let them know that discrimination was more recent and widespread than they may have imagined and to give the idea that it was wrong and should be opposed.

Another way to bring up somebody’s status as a minority applies if they were a member of a milieu at least partly defined by minority participation. So, for example, a lot of jazz musicians are black and, indeed, some American forms of free improvisation were called “black music”. In general, mentioning milieus is good because it gives students a sense of larger scenes and places they might do additional research. It also communicates that minority involvement was significant and larger than the few individuals discussed in class.

Otherwise, a way to bring up a person’s membership in minority groups is to just tell students you think it’s important to mention it so they know the field is diverse. This is also good because it demonstrates that inclusion is valuable.

It’s important not to make somebody’s status as a minority the defining thing about them. They’re a topic for the class because they relate to the subject the class is covering, not because they’re a minority. One must strike a balance so as to communicate that minorities have historically been part of a discipline and contributions are important and will continue. Over-emphasising their minority status can backfire and make it seem like they’re being highlighted for being weird and different. I try to bring minority community membership up just once and then not mention it again unless it’s relevant in their work.

With contested identities, such as trans people, talking about their background models how to speak respectfully. It’s important that if a student starts giggling or otherwise treating this as a joke, that they’re told to stop. Here is a guide for how to talk about trans people in the classroom.

  • If the person is not living, you should definitely mention that they were trans.
  • If the person is living, you can only say they are trans if the person has consented to this by being public about their trans status.
  • If a person has transitioned to being a woman, the term to use when talking about them being trans is “trans woman” and the pronoun to use is “she”. If they have transitioned to being a man, the term to use when talking about them being trans is “trans man” and the pronoun to use is “he”. If someone has transitioned to a non-binary gender identity, the term to use when talking about this is “enby” (which is a pronunciation of the initials N.B.) and the pronoun to use is “they”. In every case, if the person has expressed a different label or pronoun, you should follow their preferences.
  • Always use their current pronoun, no matter when in their life you are speaking about them.
  • Do not bring up somebody’s previous name without a good reason. Mention it as little as possible.
  • If any of this makes you feel awkward, practice this part of your classroom presentation on a friend until you feel normal about it.

To give an example of how I might talk about this:

Wendy Carlos has done a lot of work on spatialisation and has some good blog posts about it – I’ve put the links on Moodle. She is an American composer who started out at Columbia-Princeton, but then went in a less experimental/more popular direction. She’s best known for working with the Moog synthesiser and worked directly with engineers there to design modules, which she used to do several film soundtracks, including Tron and A Clockwork Orange. She initially made her name with Switched-On Bach, which was a recording of Bach pieces done on synthesiser. This album was hugely popular, made her famous and made a lot of money. She used some of the proceeds of the album to fund her transition, which she kept secret for nearly a decade – dressing as a man when she appeared publicly because she feared discrimination. Fortunately, when she finally did disclose in 1979, nothing much bad came of it, but it must have been miserable to spend so many years in (reasonable) fear of a backlash.

The popularity of her work shows a strong popular appetite for new timbres, but in a familiar context, like Bach. We’re going to listen to a piece by her …

When you’re talking about a member of any minority group, it’s best to assume that at least one of your students is a member of that community. The intent is to be respectful and to make that student feel included, while at the same time giving other students the idea that members of this minority group belong in their field. Never be neutral about discrimination.

It’s impossible to get this right every time. Sometimes talking too much about discrimination can traumatise the students who also experience it, or glossing over it can fail to condemn it forcefully enough. The important thing is to keep trying to include this and to get a feel for the students you’re teaching, as every group and every institution will be different. You may find, for example, that student comments about works by women tend to be more negative than works by men. One way you might address this is to present the works first and ask for comments and only talk about biographies afterwards.

Keep trying things out. We can make a positive difference in our teaching, no matter what our subject is.

Militarised Police

In his drive to undo every single Obama policy, Trump has lifted restrictions on police getting military combat gear. This ban was put in place after images came out of the militarised response to protests in Ferguson. The problem with police having military gear is that they will use it in interactions with civilians.

Without a doubt, this is a national issue. However, it is also a local issue. Every police force has its own rules about what it can and can’t purchase. Your city can direct its police not to buy military hardware.

Many cities are organised into districts, so that every district elects a council member. I used Google and my city’s web pages to find my district and, from there, found the phone number for my council member’s office. The following is roughly what I said:

Hello, I am a registered voter in [District X] and I’m calling because Trump has just lifted the ban on police forces acquiring military hardware. I’d like to ask that our city police do not acquire any military equipment and get rid of any military equipment that they might already have.

I vote in Berkeley, and the person answering the phone had not heard of the new rules and was unhappy to hear of them. She assured me that my council member was in agreement and expressed hope that the whole council would feel similarly. It may seem like it’s unnecessary to make this call in Berkeley, but my concern was that some people might think that military kit would be an appropriate way to respond to the fascist violence that’s been rising in the city. However, I would argue that the danger of fascism is part of why we must ensure our police are de-militarised.

Because local politics are smaller scale, our voices are much more easily heard than they are in national politics. Calling about this issue can help make a difference in your community. Moreover, this does have a national effect. Cities refusing this hardware will help repudiate Trump and keep our cities safer from police overreaction.

For more about local police reforms and reducing police violence, check out the excellent group Campaign Zero.

Tell Congress you’re against nuclear war

Dear [Congress Person],

The Constitution states that war can only be declared by Congress. Launching a nuclear first strike is certainly an act of war. I urge you and the members of Congress to make clear your necessary role in declaring war and to remove from the president the ability to launch unauthorised first strikes.

Best Regards,

[Your name]

[The address at which you are registered to vote]

You can fax your senators for free: https://faxzero.com/fax_senate.php. You do not need to create an account, but you do need to enter your email address. You should contact both senators.

You can also fax your representative for free: https://faxzero.com/fax_congress.php. You only have one representative. You can find out who they are here: https://www.house.gov/representatives/find/

If you are a US resident who cannot vote, you can, of course, still contact your local senator and representative. If you cannot vote in the US and do not live in the US, you should contact your local government representative and express your concerns. Especially if your government is a NATO member or allied with the US in some other way, this could be helpful.

Update

There’s already a bill in Congress on this topic, the Restricting First Use of Nuclear Weapons Act of 2017. Here’s a sample phone script if you want to call Congress: https://www.wagingpeace.org/sample-phone-script-restricting-first-use-nuclear-weapons-act/

Everybody’s Free to Feel Good: Gay Clubs and Liberation

Listening to club mixes on Gaydio, I was struck by how often the word ‘free’ came up in the music, as a long, held, emphasised word. While some of this is undoubtedly due to the lure of endless granular stretching of the ‘eee’ sound, this is clearly an idea that still resonates within the gay club scene. For example, in Outrage’s 1996 hit Tall N Handsome, a low-pitched voice first says ‘I’m looking for a good man’ and then sings, ‘He’s got to be tall n handsome, and he’s got to be free.’

But what does it mean for the good man to be ‘free’? While an obvious interpretation would be ‘single’, when this song is played in a long set of club mixes, a more clubby interpretation of ‘free’ is suggested.

Although by 1996, gay men in England, where the song was recorded, had more freedom than previously, they still had much less than straight people. The song itself, however, is not a strident call for freedom or action. Freedom is an individual project – the good man must be free as a pre-requisite for the narrator. Again, the precise meaning of this is not specified, but the onus for attaining this freedom is squarely placed on him. Indeed, the vocal tones of the man-seeker suggest a political safeness. The spoken part sounds theatrical and light, as if delivered by a dame in a panto. The speaker says ‘Now, I’m looking for a good man, but not just any man. He’s got to be someone special.’ And then, as far as I can tell from the recording, he says, ‘Someone in like Popeye.’ The sung part follows.

The version of this song that I heard twice on Gaydio yesterday, mixed by Paul Morell, removes a lot of the comic ambiguity of the original. The original narrator is replaced by Boy George, who sounds more typically like someone speaking over a love song and says, ‘Now, I’m looking for a good man, but not just any man. He’s got to be someone special. Someone who can light my fire.’ Of course, it’s possible that the lyrics are actually the same in both versions, but of the two, the newer one is clearly intended in earnest. The update suggests that some people took the original 90s song seriously.

The repetition of the good man’s requirements centres the importance of his freedom – a long, held word at the end of the chorus. The good man’s individualised freedom, in a 90s context, may refer to a personal authenticity. A free gay man then was free of the closet – at least some of the time, in at least some circumstances. This suggests he does not have a woman in his life acting as a ‘beard’ – he’s not married nor in a sham straight relationship. As to his outness more generally: then, as now, people made choices about how gay they felt they could act in various circumstances. I’m reminded of a shop in San Francisco’s Castro District in the 1990s called ‘Does Your Mother Know?’ Like the original song, this uses humour to get at a truth. Many gay people at that time were not out at work and had not told all or sometimes any members of their birth family. The freedom required was likely not this kind of political freedom – instead, the good man was somebody who was liberated in certain circumstances.

I remember stickers and chalked slogans (again in California) from the mid 90s which said variations of ‘free your mind and your booty will follow.’ At the time, I took this to mean that free-thinking would lead to broader horizons in one’s physical circumstances, but other interpretations are possible. One’s booty is, of course, one’s arse. This can be used to mean one’s whole, grounded physical self (‘get your ass out of here’), or it can refer more specifically to one’s undercarriage (‘shake your booty’). Another interpretation of the slogan is that a free mind will lead to a sexual freeing. It’s likely that the narrator wants somebody who is tall and handsome and who is not overly sexually inhibited.

However, the word ‘free’ is not unique to this song. Someone at a club would hear it several times in several songs over the course of an evening. The type(s) of freedom and paths to freedom may lead towards sexual openness, but that is a destination, not an origin. Indeed, given the distinct lack of personal freedom most gay men experienced most of the time, they could only be free in safe, gay spaces. Freedom thus comes from the club.

This extremely personal sense of release and liberation – a temporary reprieve from a hostile outside – is exhilarating. The club offers a chance to be as gay as one wishes to be, without significant risk. This is in stark contrast to public outdoor spaces, which were really only safe on Gay Pride Day, and sometimes not even then. Freedom in the club and the bedroom gave one enough space to exist and to live. It called for celebration and repetition in song.

However good it felt, though, it contained an inherent contradiction. Someone who was actually free, as this is understood in straight society, would not need to rely on clubs and bars to experience their freedom. Therefore, freedom, while it is celebrated, is also redefined to fit the circumstances. In 1991, Rozalla released the prototypical anthem to the freedom of the club, Everybody’s Free (to Feel Good). The chorus of the song lingers on a long, held ‘free’, resolving with the less prominent but still affirming parenthetical. This is not a freedom of the mind, but a physically embodied sensation of drink, drugs and dance.

The circumstance and ritual of the clubs and bars does not lessen the temporary sensation of freedom – quite the contrary. The transient nature of the experience makes it all the more compelling, as the feeling of camaraderie, community and sexual possibility relies on physical access to the space. For decades, activists complained that most gay people were only interested in this temporary freedom and not in doing the work to secure a more enduring political freedom. These spaces, however, provided a launchpad for gay culture, like the charting Tall N Handsome, to enter straight spaces and slowly normalise the idea that a man might be looking for a good man who is tall, handsome and free.

Domifare – still a rough draft

Today’s diff. Everything compiles, but most things aren’t tested!

Today, I wrote the functions for most of the language structures, except the scheduling ones. And the shaking one, which I’ve just realised I’ve forgotten to include! For the variable classes, I am borrowing a lot of code from DubInstrument in AlgoRLib. Probably, this project and that one should be folded into one repo, or one should directly depend on the other.

For DubInstruments, asking for a random pattern can trigger the creation of a new one, but not here, as all patterns are entered by the performer.

I need some GUI, including sliders for thresholds and a text printout of entered stuff. Ideally, there should also be a record light for when the performer is entering loops.

As far as shaking goes, that’s sort of straightforward for the rhythm lines. For the melody lines, I might be looking at BufferTool. It’s designed to split up spoken text rather than played notes and relies on pauses. Another possibility is to keep onset data for melodic loops and use it to decide where cuts should be. I’ll need another trigger that gets sent by the recording synthdef when its gate opens, so I can relate the onset timings to the position in the buffer.
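Something like this might work for the bookkeeping – an untested sketch, not the repo’s actual code: record into a buffer with a Phasor and report the write position at each onset, so later cuts can land on note boundaries.

SynthDef(\domifare_rec, { |buf, in = 0, gate = 1|
    var input = SoundIn.ar(in);
    var phase = Phasor.ar(0, BufRateScale.kr(buf), 0, BufFrames.kr(buf));
    var chain = FFT(LocalBuf(2048), input);
    var onset = Onsets.kr(chain, odftype: \phase);
    BufWr.ar(input, buf, phase);
    // report the buffer frame of each onset, for slicing the loop later
    SendReply.kr(onset, '/rec_onset', A2K.kr(phase));
    EnvGen.kr(Env.asr, gate, doneAction: 2); // free the synth when the gate closes
}).add;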

Tomorrow my wife is having a party and I’m doing marking on Monday and Tuesday, so it might be a few days before I can test properly.

Domifare Classes

Key still had some problems with transposition that related to frequency quantisation, so those are (hopefully?) now sorted. I got rid of the gravity argument for freqToDegree because it doesn’t make sense, imo, and calculating it is a tiny bit of a faff.

For Domifare, as with a spoken language, breaks between commands are articulated as pauses, so I’ve added a DetectSilence ugen. The threshold will need to be connected to a fader to actually be useful, as the margin of background noise will vary massively based on environment.
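In sketch form (the synthdef name and trigger id are placeholders), the silence detection looks something like this:

SynthDef(\domifare_silence, { |in = 0, thresh = 0.01|
    var input = SoundIn.ar(in);
    // outputs 1 once the input has stayed below thresh for 0.3 seconds
    var silent = DetectSilence.ar(input, thresh, 0.3);
    SendTrig.kr(A2K.kr(silent), 10, 1); // id 10 is arbitrary: 'end of command'
}).add;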

The next step is parsing. It’s been a loooong time since I’ve worried about how to do this… IxiLang uses a switch statement with string matching.

I need to draw out how this is going to work, since the repeat and the chance commands both take commands as arguments.

This might work as a statement data array:

[key, min_args, max_args, [types], function]

Types can be: \var, \number, \operator, \data. If it’s \operator, then the operator received will be the key for another statement, and the parser will listen for that too…. If it’s \data, that means start the function asap….

Also, since variables are actually loop holders, I’m going to need to make a class for them.
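A first guess at what that class might hold – the names are mine, not necessarily what will end up in the repo (class definitions need to live in the class library):

DomifareLoop {
    var <name, <buffer, <onsets, <isPlaying = false;

    *new { |name, buffer|
        ^super.newCopyArgs(name, buffer, List.new)
    }

    addOnset { |frame| onsets.add(frame) }
    play { isPlaying = true } // a real version would start a looping player here
    stop { isPlaying = false }
}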

My original plan was to use pitch recognition to enter MIDI notes, but that’s not going to work, so some commands are now defunct.


(
var lang, vars, numbers;
vars = (solfasire:nil, solfasisol:nil, soldosifa:nil);
numbers = (redodo: 1, remimi:2, refafa: 3, resolsol: 4, relala: 5, resisi: 6, mimido: 7, mimire:8);
lang = (
larelasi: [\larelasi, 2, 2, [\var, \data], nil], // func adds the name to the var array, runs the recorder
dolamido: [\dolamido, 0, 1, [\var], nil], // func stops names loop or all loops
domilado: [\domilado, 0, 1, [\var], nil], // func resumes named loop or all loops
mifasol: [\mifasol, 0, 1, [\var], nil], // func raises an octave, which is probably impossible
solfami: [\solfami, 0, 1, [\var], nil], // func lowers an octave- also impossible
lamidore: [\lamidore, 2, 2, [\var, \data], nil], // add notes to existing loop
dosolresi: [\dosolresi, 1, 1, [\var], nil], // shake the loop, which is possible with recordings also...
misisifa: [\misisifa, 0, 1, [\var], nil], // next rhythm
fasisimi: [\fasisimi, 0, 1, [\var], nil], //previous rhythm
misoldola: [\misoldola, 0, 1, [\var], nil], //random rhythm
refamido: [\refamido, 0, 0, [], nil], // die
sifala: [\sifala, 2, 2, [\number, \operator], nil], // repeat N times (1x/bar)
larefami: [\larefami, 2, 2, [\number, \operator], nil] // X in 8 chance of doing the command
);
)
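As a sketch of how that table might be consumed (the tokenising is hand-waved and the names here are hypothetical; the funcs in the table above are still nil placeholders):

~dispatch = { |lang, token, args|
    var entry = lang[token.asSymbol];
    entry.isNil.if({
        "unknown command: %".format(token).postln;
    }, {
        var key, minArgs, maxArgs, types, func;
        #key, minArgs, maxArgs, types, func = entry;
        ((args.size >= minArgs) and: { args.size <= maxArgs }).if({
            func.valueArray(args); // run the statement's function on its arguments
        }, {
            "wrong number of arguments for %".format(key).postln;
        });
    });
};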

After pondering this for a bit, I decided to write some classes, because that’s how I solve all my problems. I created a github project. This is the state of the sole file today.

Tested with human voice

Testing showed that for human voice, the frequency domain onsets and pitch tracking were more accurate and faster than the time domain, which is good to know.

Once the frequency is detected, it needs to be mapped to a scale degree. I’ve added this functionality to the Tuning Lib quark. While doing this, I found the help file was confusing and badly laid out and some of the names of flags on the quantisations were not helpful, so I fixed the helpfile, documented the new method, renamed some of the flags (the old ones still work). And then I found it wasn’t handling octaves correctly – it assumed the octave ratio is always 2, which is not true for Bohlen-Pierce scales, or some scales derived by Dissonance Curve. So this was good, because that bug is now fixed after a mere 8 years of lurking there. HOWEVER, the more I think about it, the less I think this belongs in Key….

Pitch detection is flaky as hell, but onsets are solid, which is going to make the creation of melodic loops difficult, unless they actually just record the tuba and do stuff with it.

This is the code that’s working with my voice:


(
s.waitForBoot({

    s.meter;

    SynthDef(\domifare_input, { arg gate=0, in=0;

        var input, env, fft_pitch, onset, chain, hasfreq;

        input = SoundIn.ar(in, 1);
        env = EnvGen.kr(Env.asr, gate, doneAction: 2);

        chain = FFT(LocalBuf(2048), input);
        onset = Onsets.kr(chain, odftype: \phase); // was odftype: \wphase
        #fft_pitch, hasfreq = Pitch.kr(input);

        // send pitch
        SendTrig.kr(hasfreq, 2, fft_pitch);

        // send onsets
        SendTrig.kr(onset, 4, 1);

        //sin = SinOsc.ar(xings/2);
        //Out.ar(out, sin);

        // audio routing
        //Out.ar(out, input);

    }).add;

    k = Key(Scale.major); // A maj
    //k.change(6); // C maj - changing to c maj puts degree[0] to 6!

    b = [\Do, \Re, \Mi, \Fa, \So, \La, \Si];
    (scale: k.scale, note: k.scale.degrees[0]).play;

    OSCdef(\domifare_in, { |msg, time, addr, recvPort|
        var tag, node, id, value;

        #tag, node, id, value = msg;
        case
        { id == 2 } {
            //value.postln;
            //c = k.freqToDegree(value.asFloat).postln;
            //b[c.asInt].postln;
            b[k.freqToDegree(value.asFloat)].postln;
        }
        { id == 4 } { "4 freq dom onset".postln; }

    }, '/tr', s.addr);

    s.sync;

    a = Synth(\domifare_input, [\in, 0, \out, 3, \rmswindow, 50, \gate, 1, \thresh, 0.01]);

})
)

Domifare input

Entering code requires the ability to determine pitch, and entering data requires both pitch and onset. Ergo, we need a synthdef to listen for both things. There are also two ways to determine pitch: one in the time domain and the other in the frequency domain.

The frequency domain, of course, refers to FFT and is probably the best method for instruments like flute, which has a relatively pure tone where the loudest partial is the fundamental. However, brass instruments and the human voice both have formants (loud overtones). In the case of tuba, in low notes, the overtones can be louder than the fundamental. I’ve described time-domain frequency tracking for brass and voice in an old post.

The following is completely untested sample code…. It’s my wife’s birthday and I had to go out before I could try it. It does both time and frequency domain tracking, using the FFT code to trigger sending the pitch in both cases. For time domain tracking, it could – and possibly should – use the amplitude follower as a gate/trigger in combination with a frequency change of greater than some threshold. The onset cannot be used as the trigger, as the pitch doesn’t stabilise for some time after the note begins. A good player will get it within two periods, which is still rather a long time on such a low instrument. A less good player will take longer to stabilise on a pitch.

Everything in the code uses default values, aside from the RMS window, so some tweaking is probably required. Presumably, every performer of this language would need to make some changes to reflect their instrument and playing technique.


(
s.waitForBoot({

    SynthDef(\domifare_input, { arg in=0, out=3, rmswindow=200;

        var rms, xings, input, amp, peaks, sin, time_pitch, fft_pitch, onset, chain, hasfreq;

        input = SoundIn.ar(in, 1);
        amp = Amplitude.kr(input);
        rms = RunningSum.rms(input, rmswindow); // was `window`, which was undefined
        peaks = input - rms;
        xings = ZeroCrossing.ar(peaks);
        time_pitch = xings * 2;

        chain = FFT(LocalBuf(2048), input);
        onset = Onsets.kr(chain, odftype: \wphase);
        #fft_pitch, hasfreq = Pitch.kr(input);

        // send pitch
        SendTrig.kr(hasfreq, 0, time_pitch);
        SendTrig.kr(hasfreq, 1, fft_pitch);

        // send onsets
        SendTrig.kr(onset, 2, 1);

        //sin = SinOsc.ar(xings/2);
        //Out.ar(out, sin);

        // audio routing
        //Out.ar(out, input);

    }).add;

    OSCdef(\domifare_in, { |msg, time, addr, recvPort|
        var tag, node, id, value;

        #tag, node, id, value = msg;
        case
        { id == 0 } { "time dom pitch is %".format(value).postln; }
        { id == 1 } { "freq dom pitch is %".format(value).postln; }
        { id == 2 } { "onset".postln; }

    }, '/tr', s.addr);

    s.sync;

    a = Synth(\domifare_input, [\in, 0, \out, 3, \rmswindow, 200]);

})
)
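For the amplitude-gate-plus-pitch-change idea mentioned above, the time-domain reporting might eventually look more like this untested fragment (the thresholds are pure guesses):

SynthDef(\domifare_timepitch, { |in = 0, ampthresh = 0.02, pitchthresh = 1, rmswindow = 200|
    var input = SoundIn.ar(in);
    var amp = Amplitude.kr(input);
    var rms = RunningSum.rms(input, rmswindow);
    var time_pitch = A2K.kr(ZeroCrossing.ar(input - rms) * 2);
    // fire only when the signal is loud enough AND the pitch has actually moved
    var loud = amp > ampthresh;
    var moved = HPZ1.kr(time_pitch).abs > pitchthresh;
    SendTrig.kr(loud * moved, 0, time_pitch);
}).add;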