Still liveblogging the SuperCollider symposium. This speaker is now my colleague at Anglia Ruskin. He also did a poster presentation on this at AES, iirc.
Small devices, easily portable. Appearance and design affect how people interact. Dancers are not so different from regular people.
He makes little Arduino-powered boxes with proximity detectors. This is not new tech, but it is just gaining popularity due to low cost and ease of use.
He’s got a picture up called “gaggle” which has a bunch of ultrasonic sensors. The day before the event at which it was demonstrated, the developers were asked if they wanted to collaborate with dancers. (It’s sort of theremin-esque. There was actually a theremin dance troupe, back in the day, and I wonder if their movements looked similar?) The dancers in the video were improvising, not performing choreography, and they found the device easy to improvise with. Entirely wireless operation lets them move freely.
How do sounds map to those movements? How nice are the sounds for the interactors (the dancers)?
Now a video of somebody trying the thing out. (I can say from experience that the device is fun to play with).
He’s showing a picture of a larger version that cannot be packed on an airplane, and he plans to build even bigger versions. He’s also showing a version with knobs and buttons – and he is uncertain whether those features are a good or bad idea.
He also has something where you touch wires, called “wired”. It measures human capacitance, and you have to be grounded for it to work. (Is this connected electrically to a laptop?) (He says, “it’s very simple,” and then SuperCollider crashed at that instant.)
The ultrasound thing is called “gaggle” and he’s showing the SuperCollider code. The maximum range of the sensor is 3 metres. The GUI he wrote allows for calibration of the device: how far away is the user going to be? How dramatic will the response be to a given amount of movement?
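To make the calibration idea concrete, here is a minimal sketch (in Python, not the presenter’s actual SuperCollider code) of mapping a raw distance reading into a normalized control value. The `near`/`far` range and the `drama` exponent are hypothetical names standing in for the two calibration questions above: how far away the user will be, and how dramatic the response should be.

```python
# Illustrative sketch of the calibration described in the talk.
# All parameter names here are my own, not from the presenter's code.

def calibrate_map(distance_m, near=0.2, far=3.0, drama=2.0):
    """Map a distance in metres to a 0..1 control value.

    near/far set the calibrated working range (the sensor maxes out
    around 3 m); drama is an exponent shaping how sharply the output
    responds to a given amount of movement.
    """
    # Clamp the reading into the calibrated range
    d = min(max(distance_m, near), far)
    # Normalize so near -> 1.0 (close = strong response), far -> 0.0
    x = (far - d) / (far - near)
    # Shape the curve: drama > 1 exaggerates motion close to the sensor
    return x ** drama

print(calibrate_map(0.2))  # hand at the near limit -> 1.0
print(calibrate_map(3.0))  # at the sensor's max range -> 0.0
```

Tuning `near`, `far`, and `drama` per performance is essentially what the on-stage GUI calibration does: the same gesture can be made subtle or dramatic without touching the synthesis code.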
You can use it to trigger a process when something is in range, so it doesn’t need to react dumbly. There is a calibration for “sudden”, which responds to fast, dramatic movements. (This is a really great example of how very much data you can get from a single sensor, using deltas and the like.)
Once you get the delta, average it to smooth out jitter.
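A rough sketch of that delta-plus-averaging trick, again in Python rather than SuperCollider, and with hypothetical window and threshold values: take frame-to-frame deltas of the distance readings, smooth them with a short running average, and flag a “sudden” gesture when the smoothed delta crosses a threshold.

```python
# Hedged illustration of getting extra data from one sensor:
# deltas measure movement speed; a running average smooths noise;
# a threshold on the smoothed delta detects "sudden" gestures.
# Window size and threshold are made-up values, not from the talk.

def analyze(readings, window=4, sudden_threshold=0.5):
    # Frame-to-frame change in distance (movement speed, roughly)
    deltas = [abs(b - a) for a, b in zip(readings, readings[1:])]
    smoothed = []
    for i in range(len(deltas)):
        chunk = deltas[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    # True wherever the smoothed movement exceeds the threshold
    sudden = [s > sudden_threshold for s in smoothed]
    return smoothed, sudden

# Slow drift, then a fast sweep toward the sensor:
readings = [2.9, 2.8, 2.7, 2.6, 1.4, 0.5]
smoothed, sudden = analyze(readings)
```

With this one stream you get position, speed, and a gesture trigger — three usable control signals from a single cheap sensor.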
Showing a video of dancers waving around podium things like you see in art museums.
Now a video of contact dancing with the podiums. There’s a guy with a laptop in the corner of the stage. It does seem to work well, although it is not as musically dramatic when the dancers do normal dancey stuff without waving their arms over the devices, which actually looks oddly worshipful in a worrying way.
Question: do dancers become players, like bassoonists or whatever? He thinks not because the interactivity is somewhat opaque. Also, violinists practice for years to control only a very few parameters, so it would take the dancers a long time to become players. He sees this as empowering dancers to further express themselves.
Dan Stowell wants to know what the presenter was doing on stage behind the dancers. He was altering the parameters with the GUI to calibrate to what the dancers were doing. A later version uses proximity sensors to control the calibration of other proximity sensors, instead of using the mouse.
Question: could calibration be automated? Probably, but it’s hard.