Theory, continued.
Time point synthesis. Babbitt wanted to serialize parameters in addition to pitch. He first used durational sets, which become dull and don't transform well.
Instead, use integers that map into a table of durations; the grid has 12 durations, by analogy with the 12 pitch classes. Andrew Mead has done work on this.
There is a TimePoints class, which is a kind of array.
This is a rhythm library. I should look into it.
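Roughly, I imagine it works something like this (my own Python sketch of the time-point idea, not their class; the slot duration is arbitrary):

```python
# Babbitt-style time points: integers 0-11 mark onsets on a 12-slot grid,
# and durations fall out as the distance to the next time point, so the
# same operations that transform pitch rows transform rhythm too.

GRID = 12    # slots per measure, matching the 12 pitch classes
SLOT = 0.25  # seconds per slot (assumed tempo)

def onsets(timepoints):
    """Turn a time-point row into absolute onset times.
    When the next point is not later (mod 12), wrap into the next measure."""
    times, measure, prev = [], 0, None
    for tp in timepoints:
        if prev is not None and tp <= prev:
            measure += 1
        times.append((measure * GRID + tp) * SLOT)
        prev = tp
    return times

def transpose(timepoints, n):
    return [(tp + n) % GRID for tp in timepoints]

def invert(timepoints):
    return [(-tp) % GRID for tp in timepoints]

row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]
print(onsets(row))
print(onsets(transpose(row, 5)))
```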
We're listening to 'Homily' by Babbitt, which uses these kinds of transformations.
And the code isn't online anywhere.
And now: Virtual Gamelan Graz.
This is an attempt to model everything about gamelan.
Tuning: well, don't model everything, just the metallophones. The tuning should be an ideal, which requires fieldwork and interviewing builders. Or you could just measure existing instruments and work from that.
Pick one instrument, measure its root pitches, and you're good.
Or do more recording, like Sethares: measure more ensembles. But which partial is the root?
These guys sampled the local gamelan and went with that.
The tuning . . . are we sure of the root pitches? Is the tuning defined by the instruments relative to each other, by one instrument in reference to itself, or by the partials within a single note?
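Not their code, but roughly how you'd start answering that from a recording (a sketch assuming a mono WAV of a single struck note; the file name is made up):

```python
import numpy as np
from scipy.io import wavfile

def partials(path, n_peaks=8):
    """Crude estimate of the strongest partials in a recorded note."""
    sr, x = wavfile.read(path)
    x = x.astype(float)
    if x.ndim > 1:
        x = x.mean(axis=1)  # mix down to mono
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    # local maxima, loudest first
    peaks = [i for i in range(1, len(spec) - 1)
             if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
    peaks.sort(key=lambda i: spec[i], reverse=True)
    return sorted(freqs[i] for i in peaks[:n_peaks])

print(partials("saron_note.wav"))  # hypothetical file
```

The "which partial is the root?" question is exactly why this is only a starting point: for inharmonic metallophones the loudest peak need not be the perceived root.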
There's a grid image on the slide, which is hard to see from here.
You can do a lot of retuning.
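Retuning amounts are easiest to reason about in cents; this is just the standard conversion, nothing specific to their system:

```python
import math

def cents(freq, ref):
    """Interval between freq and ref in cents (1200ths of an octave)."""
    return 1200 * math.log2(freq / ref)

print(round(cents(446.0, 440.0), 1))  # ~23.4 cents sharp
```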
Sumarsam is raising a point about pelog tuning. The group's musicologist is absent, so the presenters have to defer.
How to synthesize: samples or synthesis? They use sines and Formlet filters.
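Theirs is sines plus Formlet filters in SuperCollider; as a stand-in, here's a bare additive sketch of a metallophone-ish strike in Python (the partial ratios and decay times are invented for illustration):

```python
import numpy as np

def strike(f0, partials=((1.0, 1.0, 1.2), (2.76, 0.4, 0.6), (5.4, 0.2, 0.3)),
           dur=2.0, sr=44100):
    """Sum of exponentially decaying sine partials: (ratio, amp, decay_s)."""
    t = np.arange(int(dur * sr)) / sr
    out = sum(a * np.exp(-t / decay) * np.sin(2 * np.pi * f0 * ratio * t)
              for ratio, a, decay in partials)
    return out / np.max(np.abs(out))

tone = strike(220.0)  # write out with scipy.io.wavfile to listen
```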
Performance modelling: do you model individual human players, or encode contextual knowledge?
They did not go with individuals.
They have an event model: each note is an event, which holds everything you need to know.
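I'm guessing the event record looks something like this; the field names here are mine, not theirs:

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    instrument: str      # e.g. "saron" (hypothetical)
    degree: int          # scale degree in pelog/slendro
    onset: float         # beats from the start
    duration: float      # beats
    amplitude: float = 0.8

score = [
    NoteEvent("saron", 1, 0.0, 1.0),
    NoteEvent("saron", 2, 1.0, 1.0),
]
```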
Audio demo. It handles tempo changes right. They use ListeningClocks to handle time; I need to look at this class. The clocks follow each other, and you can set empathy and confidence to control how much they deviate from one another.
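ListeningClocks is their actual SuperCollider class; this toy Python loop is only my guess at the flavor of the idea, not the real algorithm: each clock nudges its tempo toward what it hears, weighted by empathy (how strongly it follows others) against confidence (how strongly it sticks to its own tempo).

```python
class Clock:
    def __init__(self, tempo, empathy, confidence):
        self.tempo, self.empathy, self.confidence = tempo, empathy, confidence

    def listen(self, others):
        heard = sum(c.tempo for c in others) / len(others)
        w = self.empathy / (self.empathy + self.confidence)
        self.tempo += w * (heard - self.tempo)  # nudge toward the ensemble

a = Clock(120, empathy=0.8, confidence=0.2)  # follower
b = Clock(100, empathy=0.2, confidence=0.8)  # leader
for _ in range(10):
    a.listen([b]); b.listen([a])
print(round(a.tempo, 1), round(b.tempo, 1))  # they converge, b dominating
```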