Architecture for Server-based Indoor Audio Walks

Use case: Lautlots
People walked around wearing headphones with a mobile phone stuck on top, like extra-silly Cybermen.
They had six rooms, including two with position tracking. They used camera tracking in one room, and Bluetooth plus a step counter in the other. They had LEDs on the headset for the camera tracking.
He is showing a video of the walk.
They used a server/client architecture, so the server knows where everyone is. This is to prevent the guided walk from directing people to sit on each other.
Clients ask the server for the kinds of messages they want to receive.
He is showing his Pd code, which makes me happy I never have to code in Pd.
The code is also on GitHub.
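I didn't catch the exact message format, but the subscribe-then-receive idea sounds roughly like the sketch below (Node-flavoured JavaScript; the message names, ports and JSON-over-UDP framing are all my invention, their actual implementation is the Pd code above):

    // Hypothetical sketch of the subscription model, not the Lautlots protocol.
    const dgram = require('dgram');

    const client = dgram.createSocket('udp4');
    const SERVER = { host: '192.168.1.10', port: 9000 };   // made-up address

    client.bind(9001);   // port to receive the server's pushes on

    // The server pushes back only the message types this walker subscribed to.
    client.on('message', (msg) => {
      const event = JSON.parse(msg.toString());
      if (event.type === 'cue') console.log('play audio cue', event.id);
      if (event.type === 'position') console.log('walker at', event.x, event.y);
    });

    // Tell the server which message types this client wants.
    const sub = Buffer.from(JSON.stringify({ type: 'subscribe', topics: ['position', 'cue'] }));
    client.send(sub, SERVER.port, SERVER.host);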

Questions

What did users think of this?
Users were very happy and came out smiling.

Communication, Control and State Sharing in Networked Live Coding

Collaborative live coding is more than one performer live coding at the same time, networked or not, he says.
Network music can be synchronous or asynchronous, collocated or remote.
There are many networked live coding environments.
You can add instrumental performers to live-coded stuff, for example by live-generating notation, or by having somebody play an electronic instrument that is being modified on the fly in software.
How can a live coding environment facilitate mixed collaboration? How and what should people share? Code text? State? Clock? Variables? How do they communicate? How do you share control? SO MANY QUESTIONS!!
They have a client/server model where only one machine makes sound. No synchronisation is required, and there is only one master state. However, there are risks of collision, conflict and version-control trouble.
The editor runs in a web browser, because every fucking thing is in a browser now.
The editor shows a variables pane, a chat window, and a big text area. The variables pane shows the live value of every variable in the program state; it can also show the networked value.
Now he is showing the collision risk in this: if two coders use the same variable name, it creates a conflict. Alice is corrupting Bob’s code, but maybe Bob is actually corrupting her code. Anyway, every coder has their own namespace and can’t access each other’s variables, which seems reductive. Maybe Bob should just be less of a twat. The live variable view shows both Alice’s and Bob’s variables under separate tabs.
A note at the top of his demo slide says to skip it if he’s running late.
How do people collaborate if they want to mess around with each other’s variables? They can put some variables in a shared namespace: click your variables, hit the share button, and woo, shared.
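Purely as an illustration of the idea (this is not UrMus’s actual mechanism, and all the names are made up), per-coder and shared namespaces could be resolved like this:

    // Toy illustration of per-coder vs. shared namespaces. Invented names throughout.
    const state = {
      alice:  { freq: 440, tempo: 120 },
      bob:    { freq: 220 },            // no collision with alice's freq
      shared: {}
    };

    // Look up a variable: own namespace first, then the shared one.
    function resolve(coder, name) {
      if (name in state[coder]) return state[coder][name];
      return state.shared[name];
    }

    // The hypothetical 'share' button: move a variable into the shared space.
    function share(coder, name) {
      state.shared[name] = state[coder][name];
      delete state[coder][name];
    }

    share('alice', 'tempo');
    console.log(resolve('bob', 'tempo'));  // 120 -- Bob can now read (and clobber) it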
How do you share control?
Chat messages show up on the mobile instrument screen for the iPad performer. The programmer can submit a function to the performer in such a way that the performer has agency in deciding when to run it.
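A toy sketch of that control-sharing handshake, with an entirely made-up API (nothing here is the real system):

    // The coder submits a function; the performer decides when to fire it.
    const pendingActions = [];                  // queue shown on the performer's screen

    function submitToPerformer(label, fn) {     // called by the live coder
      pendingActions.push({ label, fn });
    }

    function performerTaps(index) {             // called when the performer taps a button
      pendingActions[index].fn();
    }

    submitToPerformer('drop the bass', () => console.log('bass dropped'));
    // ...later, at a moment of the performer's choosing:
    performerTaps(0);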
The tool for all of this is called UrMus.

Questions

Would live coders actually be monitoring each other’s variables in a performance?
Of course; this sort of thing is done in general coding. (Plus some hand waving.)

NexusUI: simplified expressive mobile development

This is a distributed performance system for the web. It started out focused on the server, but changed to help with user-interface development tools. Anything that uses a browser can use it, but they’re into mobile devices.
They started with things like knobs and sliders, and now offer widgets of various sorts. This is slightly gimmicky, but OK.
NexusUI.js allows you to access the interface. The example is very short and has some toys on it.
They’re being very hand-wavy about how and where audio happens. (They say this runs on a refrigerator (with a browser), but the tilt sensor might not be supported in that case.)
Audio! You can use Web Audio if you love JavaScript. You can use AJAX to send data to servers, or Node.js, Rails, whatever. You can also send to libPD on iOS. nx.sendTo('node') for Node.js.
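For a rough idea of the browser-side audio route, the sketch below wires a hypothetical dial callback to a Web Audio oscillator. Only the Web Audio calls are standard; the widget callback is a stand-in, so check the NexusUI docs for the real event API.

    // Minimal Web Audio sketch driven by an assumed UI-widget callback.
    const ctx = new AudioContext();
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.connect(gain);
    gain.connect(ctx.destination);
    gain.gain.value = 0.2;
    osc.start();

    // Pretend 'dial1' is a NexusUI dial on the page and this is its change callback.
    function onDialChange(value) {               // value assumed to be 0..1
      osc.frequency.value = 100 + value * 900;   // map to 100..1000 Hz
    }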
They are showing a slide of how to get OSC data from the UI object.
This is a great competitor to touchOSC, as far as I can tell from this paper.
However, Nexus is a platform. There is a template for building new interfaces. It’s got nifty core features.
They are showing a demo of a video game for iPhone that uses libPD.

Now they are testifying as to ease of use. They have made a bunch of Max tutorials for each Nexus object, plus tutorials on how to set up on a local server. Their nexusDrop UI builder makes it very competitive with touchOSC, but more generally useful. It comes with an included server or something.
NexusUP is a Max thingee that will automagically build a NexusUI based on your pre-existing Max patch. (whoah)
Free and open source software
Building a bunch of tools for their mobile phone orchestra.

Tactile overlays

Laser cut a thingee in the shape of your UI, put it on your iPad, and you get a tactile sense of the interface.

Questions

Can they show this on the friday hackathon?
Yes

Making the most of Wifi

‘Wires are not that bad (compared to wireless)’ – Perry R. Cook 2001
Wireless performance is riskier, lower-bandwidth, etc. than wired, but dancers don’t want to be cabled.
People use Bluetooth, ZigBee and WiFi. Everything is in the 2.4 GHz ISM band, so all of these technologies share the same spectrum. Bluetooth has 79 narrowband channels; it will always collide, but always find a gap, leading to a large variance in latency.
ZigBee has 16 channels and doesn’t hop.
WiFi has 11 channels in the UK. Many of them overlap, but 1, 6 and 11 don’t. It has broad bandwidth, and it will swamp out ZigBee and Bluetooth.
They have developed XOSC, which sends OSC over WiFi and hosts ad-hoc networks. The presenter is rubbing a device and a fader is going up and down on a screen. The device is configured via a web browser.
You can further optimise on top of WiFi, by using a high-gain directional antenna and by tweaking router settings to minimise latency.
Normally, access points are omnidirectional, so they pick up signals from the audience, like mobile-phone WiFi or Bluetooth, and people’s phones will try to connect to the network. A directional antenna does not cover as much of the audience. They tested the antenna patterns of routers. Their custom antenna has three antennas in it, in a line. It is ugly, but solves many problems. The measured results show it has very low gain at the rear, partly because it is mounted on a grounded copper plate.
Even commercial routers can have their settings optimised. This is detailed in their paper.
Packet handling in routers is optimised for web browsing and biased towards large packets, which means high latency. Musical applications are all about tiny packets sent at high rates.
Under ideal conditions, they can get 5ms of latency.
They found that channel 6 does overlap a bit with 1 and 11, so if you have two different devices, put them on the two outside channels.

Questions

UDP vs TCP – have you studied this wrt latency?
No, they only use UDP
How many drop packets do they get when there is interference?
That’s what the graph showed.

To gesture or not? An analysis of terminology in NIME proceedings 2001-2013

How many papers use the word ‘gesture’?
Gesture can mean many different things. (my battery is dying.)
Gesture is defined as movement of the body in dictionaries. (59 slides, 4 minutes of battery)
Research definitions of gesture: communication, control, metaphor (movement of sound or notation).
Who knows what gesture even means??
He downloaded NIME papers and ran searches on them. 62% of all NIME papers have mentioned gesture. (Only 90% of 2009 papers use the word ‘music’.)
Only 32% of SMC papers mention gesture; 17% of ICMC papers do.
He checked what words ‘gesture’ came next to – collocation analysis.
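A toy version of that kind of count (the folder of extracted text and the one-word window are my guesses, not his actual method):

    // Count how many papers mention 'gesture' and which words sit next to it.
    const fs = require('fs');

    const files = fs.readdirSync('nime_papers_txt');   // hypothetical folder of plain-text papers
    let mentions = 0;
    const neighbours = {};

    for (const f of files) {
      const words = fs.readFileSync(`nime_papers_txt/${f}`, 'utf8')
                      .toLowerCase().split(/\W+/);
      if (words.includes('gesture')) mentions++;
      words.forEach((w, i) => {
        if (w === 'gesture') {
          for (const n of [words[i - 1], words[i + 1]]) {
            if (n) neighbours[n] = (neighbours[n] || 0) + 1;
          }
        }
      });
    }

    console.log(`${(100 * mentions / files.length).toFixed(1)}% of papers mention 'gesture'`);
    console.log(Object.entries(neighbours).sort((a, b) => b[1] - a[1]).slice(0, 10));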

NIME papers are good meta-research material
He suggests people define the term when they use it.
Data is available.

Questions

I can’t tell if this question is a joke or not… oh, no, we’re on semiotics… Maybe the pairing of the word ‘gesture’ with ‘recognition’ says something fundamental about why we care about gesture.
The word ‘gesture’ goes in and out of fashion.
Maybe ‘movement’ is a more meaningful word sometimes.
How often is gesture defined?
He should have checked that, he says.

Harmonic Motion: a toolkit for processing gestural data for interactive sound

They want to turn movement data into music.
This has come out of a collaboration with a dancer, using a Kinect. It was an exploration. He added visualisation to his interface, and eventually 240 parameters. The interface ended up taking over compared to the sound design.
They did a user survey to find out what other people were doing. So they wanted to write something that people could use for prototyping, that’s easy, extensible, and re-usable.
They wanted something stable, fast, free and complementary, so you could use your prototype in production. Not GPL, so you can sell stuff.
A patch-based system, because MAX is awesome all of the time.
This system is easily modifiable. He’s making it sound extremely powerful. Parameters are easy to tweak and are saved with the patch, because parameters are important.
It has a simple SDK. Save your patch as a library, so you can run it in your project without the GUI. This really does sound very cool.
Still in alpha: http://harmonicmotion.timmb.com

Questions

CNMAT is doing something he should look at, says a CNMAT guy.

Creating music with Leap Motion and the Big Bang rubette

Leap Motion is cool, he says.
Rubato Composer is software that allows people to do stuff with music and maths structures and transforms. It’s Max-ish, but with maths.
The maths are forms and denotators, which is based on category theory and something about vector spaces. You can define vectors and do map stuff with them. He’s giving some examples, which I’m sure are meaningful to a lot of people in this room. Alas, both the music AND the math terms are outside of my experience. …. Oh no wait, you just define things and make associations between them. …. Or maybe not…..
It sounds complicated, but you can learn while doing it. They want to make it intuitive to enter matrices via a visual interface, by drawing stuff.
This is built on ontological levels of embodiment: facts, processes, gestures (and perhaps jargon). Fortunately, he has provided a helpful diagram of triangular planes in different colours, with little hand icons and wavy lines in a different set of colours, all floating in front of a star field.
Now we are looking at graphs that have many colours, which we could interact with.

Leap Motion

A cheap, small device that tracks hands above it. More embodied than a mouse or multitouch, as it’s in 3D and you can use all your fingers.

Rubato

It is built in Java, as all excellent music software is. You can grab many possible spaces. Here is a straightforward one in a five-dimensional space, which we can draw in with a mouse, but sadly not in five dimensions. Intuitively, his GUI plays audio from right to left. The undo interface is actually kind of interesting. This also sends MIDI…
The demo seems fun.
Now he’s showing a demo of waving his hands over a MIDI piano.

Questions

Is the software available?
Yes, on SourceForge, but that’s crashy. And there will be an Android version.
Are there strategies to use it without looking at the screen?
That’s what was in the video, apparently.
Can you use all 3 dimensions?
Yes

Triggering Sounds From Discrete Gestures

Studying air drumming
Air instruments, like the theremin, need no physical contact. The Kinect has expanded this field.
Continuous air gestures are like the theremin.
Discrete movements are meant to be triggers.
Air instruments have no tactile feedback, which is hard. They work ok for continuous air gestures, though. Discrete ones work less well.
He asked users to air drum along to a recorded rhythm.
Sensorimotor Synchronization research found that people who tap along to metronomes are ahead of the beat by 100ms.
They recorded motion with sensors on the participants.
All participants had musical experience and were right-handed.
They need to analyze the audio to find drum sounds.
The analysis looks for a ‘sudden change of direction’ in the user’s hand motion.
They run slow and fast envelope followers and compare the results. The hit (usually) occurs at a velocity minimum.
Acceleration peaks take place before the audio events, but very close to them.
Fast notes and slow notes have different means for velocity, but acceleration is unaffected.
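My rough reconstruction of the slow/fast envelope-follower comparison, with invented smoothing constants and threshold (the paper’s actual analysis will differ):

    // Detect 'hits' in a hand-speed signal: a sudden deceleration (fast follower
    // dips well below the slow one) at a local velocity minimum.
    function detectHits(speed) {              // speed: array of hand-speed samples
      let slow = 0, fast = 0;
      const hits = [];
      for (let i = 1; i < speed.length - 1; i++) {
        fast = 0.5 * fast + 0.5 * speed[i];   // fast follower tracks the signal
        slow = 0.95 * slow + 0.05 * speed[i]; // slow follower lags behind
        const localMin = speed[i] <= speed[i - 1] && speed[i] < speed[i + 1];
        if (localMin && fast < 0.5 * slow) hits.push(i);
      }
      return hits;
    }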

Questions

Can this system be used to predict notes, to fight latency in the Kinect system?
Hopefully
Will results be different if users have drumsticks?
Maybe?

Conducting analysis

They asked people to conduct however they wanted, to build a data set. Focus on the relationship between motion and loudness.
25 subjects conducted along to a recording. They used a Kinect to sample motion data, and libxtract to measure loudness in the recordings.
Users listened to the recording twice and then conducted it 3 times.
They got joint descriptors: velocity, acceleration and jerk, and distance to torso.
They also got general descriptors about the quality of motion, like maximum hand height.
They looked for descriptors highly correlated with loudness, and found none. Some participants said they didn’t follow dynamics; 8 subjects were removed.
Some users used hand height for loudness, others used larger gestures, so they separated users into two groups.

They have been able to find tendencies across users. However, a general model may not be the right approach.
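The correlation check they describe is presumably something like Pearson’s r per descriptor, along these lines (the data shapes here are assumed):

    // Pearson correlation between one motion descriptor and loudness, per frame.
    function pearson(x, y) {
      const n = x.length;
      const mean = a => a.reduce((s, v) => s + v, 0) / a.length;
      const mx = mean(x), my = mean(y);
      let num = 0, dx = 0, dy = 0;
      for (let i = 0; i < n; i++) {
        num += (x[i] - mx) * (y[i] - my);
        dx += (x[i] - mx) ** 2;
        dy += (y[i] - my) ** 2;
      }
      return num / Math.sqrt(dx * dy);
    }

    // e.g. pearson(handHeightPerFrame, loudnessPerFrame) coming out near 0 would
    // match their finding that no single descriptor explains loudness for everyone.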

Questions

How do they choose users?
People with no musical training were in the group that raised hand height.

Building SuperCollider 3.6 on Raspberry Pi

Raspberry Pi Wheezy ships with SuperCollider, but it ships with an old version that does not have support for Qt graphics. This post is only slightly modified from this (formerly) handy guide for building an unstable snapshot of 3.7 without graphics support. There are a few differences, however, to add graphics support and maintain Wii support.
This requires the Raspbian operating system, and should work if you get it via NOOBS. I could not get this to fit on a 4 GB SD card.
Note: This whole process takes many hours, but has long stretches where it’s chugging away and you can go work on something else.

Preparation

  1. log in and type sudo raspi-config, select expand file system, set timezone, finish and reboot
  2. sudo apt-get update
  3. sudo apt-get upgrade # this might take a while
  4. sudo apt-get remove supercollider # remove old supercollider
  5. sudo apt-get autoremove
  6. sudo apt-get install cmake libasound2-dev libsamplerate0-dev libsndfile1-dev libavahi-client-dev libicu-dev libreadline-dev libfftw3-dev libxt-dev libcwiid1 libcwiid-dev subversion libqt4-dev libqtwebkit-dev libjack-jackd2-dev
  7. sudo ldconfig

Build SuperCollider

  1. wget http://downloads.sourceforge.net/project/supercollider/Source/3.6/SuperCollider-3.6.6-Source.tar.bz2
  2. tar -xvf SuperCollider-3.6.6-Source.tar.bz2
  3. rm SuperCollider-3.6.6-Source.tar.bz2
  4. cd SuperCollider-Source
  5. mkdir build && cd build
  6. sudo dd if=/dev/zero of=/swapfile bs=1MB count=512 # create a temporary swap file
  7. sudo mkswap /swapfile
  8. sudo swapon /swapfile
  9. CC="gcc" CXX="g++" cmake -L -DCMAKE_BUILD_TYPE="Release" -DBUILD_TESTING=OFF -DSSE=OFF -DSSE2=OFF -DSUPERNOVA=OFF -DNOVA_SIMD=ON -DNATIVE=OFF -DSC_ED=OFF -DSC_EL=OFF -DCMAKE_C_FLAGS="-march=armv6 -mtune=arm1176jzf-s -mfloat-abi=hard -mfpu=vfp" -DCMAKE_CXX_FLAGS="-march=armv6 -mtune=arm1176jzf-s -mfloat-abi=hard -mfpu=vfp" ..
    # should add '-ffast-math -O3' here but then gcc 4.6.3 fails
  10. make # this takes hours
  11. sudo make install
  12. cd ../..
  13. sudo rm -r SuperCollider-Source
  14. sudo swapoff /swapfile
  15. sudo rm /swapfile
  16. sudo ldconfig
  17. echo 'export SC_JACK_DEFAULT_INPUTS="system"' >> ~/.bashrc
  18. echo 'export SC_JACK_DEFAULT_OUTPUTS="system"' >> ~/.bashrc
  19. sudo reboot

Test SuperCollider

  1. jackd -p32 -dalsa -dhw:0,0 -p1024 -n3 -s & # built-in sound. change to -dhw:1,0 for usb sound card (see more below)
  2. scsynth -u 57110 &
  3. scide
  4. s.boot;
  5. {SinOsc.ar(440)}.play
  6. Control-.

Optional: Low latency, RealTime, USB Soundcard etc

  1. sudo pico /etc/security/limits.conf
  2. and add the following lines somewhere before it says end of file.
  3. @audio - memlock 256000
  4. @audio - rtprio 99
  5. @audio - nice -19
  6. save and exit with ctrl+o, ctrl+x
  7. sudo halt
  8. power off the rpi and insert the sd card in your laptop.
  9. add dwc_otg.speed=1 to the beginning of /boot/cmdline.txt (see http://wiki.linuxaudio.org/wiki/raspberrypi under force usb1.1 mode)
  10. eject the sd card and put it back in the rpi, make sure usb soundcard is connected and power on again.
  11. log in with ssh and now you can start jack with a lower blocksize
  12. jackd -p32 -dalsa -dhw:1,0 -p256 -n3 -s & # uses an usb sound card and lower blocksize
  13. continue as in step 2 of Test SuperCollider above


This post is licensed under the GNU General Public License.