He got into computer music in the '80s and was really into George Lewis and the League of Automatic Composers. In 1989 he made PITEL, a machine-listening and improvisation system, in Max.
He collaborated with an artist to make pig-meat sculptures that dance and listen to people. Robots made of pork. They showed this project in food markets. This is horrible and wonderful.
In 1984, his collaborator decided to be in the robot. So they set up a situation where an audience could torture a guy in a robotic exoskeleton. Actuators would poke and pull at him. The part attached to the mouth made him suffer a bit, and the audience always asked for the mouth interface. This taught him things about audience interaction. Lesson 1: it's not a good idea to let the audience torture people.
In 1987-88, they did an opera with robots and another exoskeleton. The system was too big and difficult to control. It looks like it was amazing, but he describes it as a "terrible experience."
In 1995, he did a piece for Disklavier. He built a real-time feedback system: a sequencer into a piano module, into an fx box, to an amp, to a mic, to a pitch-to-MIDI converter, and back into the sequencer. He ran this four times to make a four-track MIDI score. This gave him a great piece that took maybe half an hour to realise. Simple things can have wonderful results. He has a metaphor of a truck vs a moped: a system that knows too much has too much inertia and is difficult to drive. Something smaller, like a moped, is more versatile.
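A rough sketch of that four-pass process as data flow (my illustration, not his actual setup): each pass plays back every existing track, the fx/amp/mic chain colours the result, and the pitch-to-MIDI converter records what it hears as a new track. The `room_and_fx` transformation here is a made-up stand-in.

```python
def room_and_fx(notes):
    """Hypothetical stand-in for the fx box -> amp -> mic colouration:
    here it simply answers everything a fifth higher."""
    return [n + 7 for n in notes]

def feedback_overdub(seed, passes=4):
    """Each pass: play all existing tracks together, record what the
    pitch-to-MIDI converter hears as a new track."""
    tracks = [list(seed)]
    for _ in range(passes - 1):
        heard = [n for t in tracks for n in t]  # all tracks sounding at once
        tracks.append(room_and_fx(heard))       # converter writes a new track
    return tracks

tracks = feedback_overdub([60, 64, 67])  # four MIDI tracks from one seed chord
```

Each new track is derived from everything already recorded, so the material compounds quickly - which is presumably why half an hour was enough.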
In 1996 he made, in one hour, a "Lowtech QWERTYCaster": a keyboard, a mouse and a joystick attached together like a demented keytar.
In 1989, he did some real-time synthesis controllable via the internet. It was an opera with electronic parts composed by internet users. He formed a trio around this instrument, the FMOL Trio, in 2001-2002 (their recordings are downloadable). He started doing workshops with kids and is showing a video of working with 5-year-olds. They all learned the instrument and then did a concert. This is adorable. He put the different sections in different kinds of hats. The concert is full of electronic sounds and kid noises.
He learned that you have to make things that people like.
Then he got a real job in academia.
Why are so many new interfaces being invented that nobody uses?
In a traditional instrument, the performer has to do everything. In laptop music, the computer does everything and the performer only changes things. In live computer music, the control is shared between the performer and the instrument.
Visual feedback becomes very important. Laptop musicians care more about the screen than the mouse. This inspired the reactable, which he began in 2003.
Goal: maximised bandwidth - get the most from the human and the most easily understandable communication from the computer to the human. He decided to go with a modular, tabletop system. He wanted to make instruments that were fun to learn rather than hurty.
A round table has a non-leader position. Many people can play at once. You can become highly skilled at it.
When they started conceiving it, they were not thinking about technology. They developed a lot of technology along the way, like reacTIVision, which is open source.
They posted some videos on YouTube and became very popular. They started selling tables to museums. People like them and the tables don't break down.
They started a company. Three people work for the company and the presenter is still at the uni. They've done some mobile apps.
The team quit going to NIME when the company started. They didn't have anything new to say, and reviewers didn't think small steps were important.
Instruments need to be able to make bad sounds as well as good ones, or else it is just a toy.
The Snyderphonics Manta, a Novel USB Touch Controller
What is the Manta?
A USB touch controller for audio and video. It uses the HID spec and capacitive sensing. 6-8 ms latency (with some jitter). Portable and somewhat tough. Bus-powered. It's slightly like a monome.
Design features
It has a fixed layout because it's a hardware device, and it is discrete: 48 hexagonal pads, each of which outputs how much of its surface area is covered, at slightly less than 8-bit resolution. The sliders at the top have 12-bit resolution and are single-touch.
The hexagon grid is inspired by just-intonation lattices, based on graphs in papers by Erv Wilson and R. H. Bosanquet.
If every sensor is a note, you have 6 neighbours.
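A small sketch of why that matters for just intonation (my illustration; the specific interval assignment is an assumption, not the Manta's): put the pads on axial hex coordinates and let the two axes step by a fifth and a major third, Wilson-lattice style. Every pad then has six neighbours at simple harmonic distances.

```python
from fractions import Fraction

# Illustrative axis choices (assumptions, not the Manta's mapping):
AX_Q = Fraction(3, 2)   # one hex axis steps by a perfect fifth
AX_R = Fraction(5, 4)   # the other by a major third

def ratio(q, r):
    """Pitch ratio at axial hex coordinate (q, r), folded into one octave."""
    x = AX_Q ** q * AX_R ** r
    while x >= 2:
        x /= 2
    while x < 1:
        x *= 2
    return x

# The six neighbours of any hex cell, in axial coordinates
NEIGHBOURS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

print([str(ratio(q, r)) for q, r in NEIGHBOURS])
```

With these axes, the six neighbours of the 1/1 pad come out as 3/2, 6/5, 8/5, 4/3, 5/3 and 5/4 - all close consonances.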
It has LED feedback under the sensors (you can turn this off) inspired by monome.
The touch sensing is inspired by the Buchla 100-series controller.
It has velocity detection, computed from two consecutive samples.
Uses
Microtonal keyboard, live processing, etc
Future
Something called the MantaMate will allow it to control analog synthesisers.
Latency improvement in sensor wireless transmission using IEEE 802.15.4
MO- Interlude Project Motivations
A multipurpose handheld unit with RF capabilities and a network-oriented protocol, using a custom messaging schema to reduce latency in a small size.
He’s showing a video of tiny grabbable objects with accelerometers in them. They have a nice aspect. You could use them like reactable elements that send out data, but the ones he’s showing are way more multipurpose.
The unit can be connected to accessories. It's a radio-controlled wireless device that can stream sensor data and pre-process it on-board to cut down on bandwidth usage. They use ZigBee, which is not as fast as WiFi but is low-power.
They use off-the-shelf modules so they don't need to mess with radio stuff directly, but this requires some middleware, and digitising is surprisingly slow. So they decided to do an all-in-one solution using an embedded modem. This is 54 times faster! Plus it's generic and scalable.
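To make the bandwidth argument concrete, here is a sketch (my assumptions: 16-bit signed fields, little-endian, none of this is the MO firmware) of packing a 3-axis accelerometer reading into a fixed 6-byte payload for a low-rate 802.15.4 link, rather than sending verbose text:

```python
import struct

def pack_sample(ax, ay, az):
    """Pack one 3-axis accelerometer reading into 6 bytes
    (three little-endian signed 16-bit integers)."""
    return struct.pack('<3h', ax, ay, az)

def unpack_sample(payload):
    """Recover the three axis values on the receiving side."""
    return struct.unpack('<3h', payload)

p = pack_sample(512, -13, 300)
```

Pre-processing on the device (filtering, downsampling, feature extraction) then reduces how many of these 6-byte samples need to cross the radio at all.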
Given that this is IRCAM, I suspect that it’s expensive.
The accessories declare themselves to the device and contain their own specs.
The presenter wants to make this open source, but needs to get that through internal IRCAM politics and to "clean the code," a process that seems to sometimes drag on for people.
HIDuino
He’s describing it as wireless MIDI. It’s plug and play across many OSes. Large Open Source community. Very usable language. Good platform for prototyping.
Arduinos are mostly limited to serial over USB (except the Teensy, according to the last guy). Students had major software issues: the hardware was easy, but the middleware was a pain in the arse and added a lot of latency. They tried a MIDI shield added on to the Arduino, which was not quite good enough.
The 2010 Arduino had a programmable USB chip, so it could use different protocols.
There is a LUFA API for doing USB programming.
This means they could use an Arduino directly as a HID. They also have a complete implementation of the MIDI spec.
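For reference, the USB MIDI 1.0 class spec (which is what a class-compliant device like this implements) wraps every MIDI message in a 4-byte event packet. A sketch of building one for a note-on, shown here in Python for clarity rather than firmware C:

```python
def usb_midi_note_on(channel, note, velocity, cable=0):
    """Build the 4-byte USB-MIDI event packet for a note-on message.
    Byte 0 is (cable number << 4) | Code Index Number; 0x9 = note-on.
    Bytes 1-3 are the ordinary MIDI status and data bytes."""
    cin = 0x9
    status = 0x90 | (channel & 0x0F)
    return bytes([(cable << 4) | cin, status, note & 0x7F, velocity & 0x7F])
```

Because the packet format is fixed at 4 bytes per event, the firmware side stays simple: one sensor reading in, one packet out.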
The Arduino still needs to be flashed over serial.
HIDuino is quite good for output, especially musical robotics. It creates a standardised interface for robot control, through MIDI, which is actually pretty cool.
They are working on the USB audio class specification. This will require a chip upgrade, as the current Arduino only does 8-bit audio. They want to make a multichannel audio device.
Fortunately, this guy hates MIDI, so they're looking at ECM (Ethernet Control Model), which would enable OSC over USB.
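OSC over a network link is just datagrams, so the encoding is simple. A sketch per the OSC 1.0 spec (the `/fader/1` address is my example, not from the talk): null-padded address, a type-tag string starting with a comma, then big-endian arguments.

```python
import struct

def osc_pad(b):
    """Null-terminate a byte string and pad it to a multiple of 4 bytes,
    as the OSC 1.0 spec requires."""
    return b + b'\x00' * (4 - len(b) % 4)

def osc_message(address, *floats):
    """Encode an OSC message carrying float32 arguments."""
    tags = ',' + 'f' * len(floats)
    out = osc_pad(address.encode()) + osc_pad(tags.encode())
    for f in floats:
        out += struct.pack('>f', f)   # OSC numbers are big-endian
    return out

msg = osc_message('/fader/1', 0.5)
```

The payoff over MIDI: named addresses and full-resolution floats instead of 7-bit controller values.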
This looks promising, especially for projects that don't require the oomph of a purpose-built computer.
http://code.google.com/p/hiduino/
Live blogging NIME keynote: Adventures in Phy-gital Space (David Rokeby)
He started from a contrarian response to the basic characteristics of the computer. Instead of being logical, he wanted to be intuitive. He wanted things to be bodily engaged. The experience should be intimate.
Put the computer out into physical space. Enter into a space with the computer.
He did a piece called Reflexions in 1983. He made an 8×8-pixel digital camera that ran at 30 fps, which was quite good for the time.
He made a piece that required very jerky movement. The computer could understand those movements, and he made algorithms based on them. This was not accessible to other people; he had internalised the computer's requirements.
In 1987 he made a piece called Very Nervous System. He found it easier to work with amateur dancers because they don’t have a pre-defined agenda of how they want to move. This lets them find a middle space between them and the computer.
The dancer dances to the sound, and the system responds to the dancer's movement, creating a feedback loop. The real delay is not just the framerate but the speed of the mind. It reinforces particular aspects of the person within the system.
The system seemed to anticipate his movements, because consciousness lags movement by about 100 milliseconds. We mentally float behind our actions.
This made him feel like time was sculptable.
He did a piece called Measure in 1991. The only sound source was a ticking clock, but it was transformed based on user movement near the clock and the shape of the gallery space. He felt he was literally working with space and time.
He began to think of analysing the camera data as if it were a sound signal. Time-domain analyses could extract interesting information, and responsive sound behaviours could respond to different parts of the movement spectrum: fast movements were high frequency and applied to one instrument, mid-speed movements were midrange, and slow movements were low frequency.
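A minimal sketch of that idea (my reconstruction, not Rokeby's code): treat the frame-to-frame motion-energy value as a signal and split it into slow/mid/fast bands with one-pole lowpass filters, each band then driving its own instrument.

```python
def lowpass(signal, a):
    """One-pole lowpass: smaller a = slower response."""
    y, out = 0.0, []
    for x in signal:
        y += a * (x - y)
        out.append(y)
    return out

def split_bands(motion, slow_a=0.05, fast_a=0.5):
    """Split a motion-energy signal into slow, mid and fast components.
    The three bands sum back to the original signal."""
    slow = lowpass(motion, slow_a)
    smoothed = lowpass(motion, fast_a)
    fast = [x - m for x, m in zip(motion, smoothed)]   # residual = fast band
    mid = [m - s for m, s in zip(smoothed, slow)]
    return slow, mid, fast

motion = [0.0, 1.0, 0.0, 1.0, 5.0, 5.0, 5.0]
slow, mid, fast = split_bands(motion)
```

Even on an 8×8 camera this works, because the analysis cares about how the total amount of motion changes over time, not about spatial detail.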
With just very blocky 8×8 pixels, he got more responsiveness than a Kinect seems to have now. Of course, this is on the computer's terms rather than tracking each finger, etc.
There is no haptic response. This means that if you throw yourself at a virtual drum, your body has to provide counterforce, using isometric muscular tension. The virtual casts a real shadow into the interacting body.
Proprioception: How the body imagines its place within a virtual context.
This sense changes in response to an interface. Virtual spaces create an artificial state of being.
He did a piece called Dark Matter in 2010, using several cameras to track a large space and define several "interactive zones." He used his iPhone to map a virtual sculpture that people could sonically activate by touching it. He ran it in pitch dark, using IR cameras to track people.
After spending time in the installation, he began to feel a physical imbalance. It felt like he was moving heavy things, but he wasn't. To an outside observer, it can look like a neurological disorder. The performer performs to the interface, navigating an impenetrable internal space. The audience can see it as an esoteric ritual.
This was a lot like building mirrors. He got kind of tired of doing this.
To what degree should an interface be legible? If people understand that something is interactive, they spend a bunch of time trying to figure out how it works. If they can’t see the interaction, they can more directly engage the work.
The audience has expectations around traditional instruments. New interfaces create problems by removing the context in which performers and audiences communicate.
Does the audience need to know a work is interactive?
Interactivity can bring a performer to a new place, even if the audience doesn't see the interaction. He gave an example of a theatre company using this for a soundtrack.
He did a piece from 1993-2000 called Silicon Remembers Carbon. Cameras looked at IR shadows of people, and video was projected onto sand. Sometimes pre-recorded shadows accompanied people, and people walking across the space shadowed the video.
If you project a convincing fake shadow, people will think it’s their shadow, and will follow it if it moves subtly.
He did a piece called Taken in 2002. It shows video of all the people who have been there before, and a camera locks onto a person's head and shows a projection of just that head, right in the middle.
Designing interfaces now will profoundly affect future quality of life, he concludes.
Questions
External feedback loops can make the invisible visible. It helps us see ourselves.
Grid-based laptop orchestras
Lorks use the orchestral metaphor, and sometimes use real instruments as well. This is a growing art form.
Configuration of software for each laptop is a pain in the arse: custom code, middleware (ChucK, etc.), HIDs, system config, and so on. This can be a "nightmare," and it's painful for audiences to watch. Complex setups and larger ensembles have more problems.
GRENDL: grid-enabled deployment for laptop orchestras
These kinds of problems are why grid computing was invented: resource sharing across multiple computers. The groups sharing computers are called organisations. What if a lork was an organisation?
They didn't want to make musicians learn new stuff. They wanted GRENDL to be a librarian, not another source of complexity. It would deliver scores and configurations.
It deploys files; it does not get used while playing. Before the performance, the scores are put on a master computer, which distributes them to the ensemble laptops.
GRENDL executes scripts on the laptops before each piece. Once the piece finishes, the laptop returns to its pre-performance state. The composer writes the scripts for each piece.
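The setup/restore contract can be sketched like this (a hypothetical illustration, not GRENDL's code; the function and parameter names are mine): run the composer's setup script, stay out of the way during the piece, and always return the laptop to its clean state afterwards, even if the piece crashes.

```python
import subprocess

def run_piece(setup_cmd, teardown_cmd, perform=lambda: None):
    """Run a composer-supplied setup command, hand control to the piece,
    and guarantee the teardown command runs afterwards."""
    subprocess.run(setup_cmd, check=True)         # configure this laptop
    try:
        perform()                                  # the piece itself plays here
    finally:
        subprocess.run(teardown_cmd, check=True)   # restore pre-performance state
```

The `finally` is the whole point: a segfaulting piece still leaves the laptop ready for the next one on the programme.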
GRENDL is a wrapper for the SAGA API.
They're trying to make the compositions more portable with tangible control: a human- and computer-readable card with QR codes. This will be simpler to deploy.
They've been using this for a year and it has surpassed expectations. Their todo list includes a server application, rather than specifying everything at the command line with a script. They're going to simplify moving from composition to composition by using OSC commands.
This makes them rethink how to score for a lork, including archiving and metadata.
Grid systems do not account for latency and timing issues, so their role in performance is so far limited. They have run a piece from GRENDL.
How do you recover when things go titsup? How do you debug? Answer: it's the composer's problem. Things going wrong means segfaults.
The server version gives better feedback: each computer will now report back which step borked.
Philosophical: who owns the instrument? The composer? The player? Their goal is to let composers write at the same sort of level as they would for real orchestras.
Live-blogging NIME: MobileMuse: integral music control goes mobile
Music, sensors and emotion
Integral musical control involves state and physical interaction. The performer interacts with other performers, with the audience and with the instrument. Sounds come from emotional states. Performers normally interact by looking and hearing, but these guys have added emotional state.
Audiences also communicate in the same way. This guy wants to measure the audience.
Temperature, heart rate, respiration, EEG and other things you can't really attach to an audience.
Measurements are sent to a pattern-recognition system. The performer wears sensors.
He's showing a graph of emotional states where a performer's state and an audience member's state track almost exactly.
They actually do attach things to the audience, which turns out to be a pain in the arse. They now have a small sensor thing called a "fuzzball" which attaches to a mobile phone.
Despite me blogging this from my phone, I find it hugely problematic that this level of technological and economic privilege would be required to even go to a concert…
They monitor lie-detector sorts of things. The mobile phone demodulates the signals and can plot them. There is a huge mess of licence issues around connecting hardware to the phone, so they encode the data into the audio input.
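The talk didn't specify the encoding scheme, but the standard trick for smuggling sensor data through a headset jack is audio-frequency modulation. A sketch of the simplest version, FSK (my assumption; the rates and tone frequencies are illustrative):

```python
import math

RATE, BAUD = 44100, 300        # audio sample rate; bits per second
F0, F1 = 1200, 2200            # one tone per bit value

def fsk_encode(bits):
    """Turn a bit sequence into an audio-rate sine signal the phone's
    audio input can capture; the app then demodulates it in software."""
    samples = []
    spb = RATE // BAUD          # samples per bit
    for bit in bits:
        f = F1 if bit else F0
        samples += [math.sin(2 * math.pi * f * n / RATE) for n in range(spb)]
    return samples

sig = fsk_encode([1, 0, 1, 1])
```

Since the signal enters as ordinary microphone audio, no hardware-accessory licensing is involved - the same workaround early phone-attached card readers used.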
They did a project where a movie's scenes and their order were set by the audience's state.
The application is open source.