He got into computer music in the '80s and was really into George Lewis and the League of Automatic Composers. In 1989 he made PITEL, a machine listening and improvisation system, in Max.
He collaborated with an artist to make pig-meat sculptures that dance and listen to people. Robots made of pork. They showed this project in food markets. This is horrible and wonderful.
In 1994, his collaborator decided to be in the robot. So they set up a situation where an audience could torture a guy in a robotic exoskeleton. Actuators would poke and pull at him. The part attached to the mouth made him suffer a bit. The audience always asked for the mouth interface. This taught him things about audience interaction. Lesson 1: it's not a good idea to let the audience torture people.
In 1997-98, they did an opera with robots and another exoskeleton. The system was too big and difficult to control. It looks like it was amazing, but he describes it as a "terrible experience."
In 1995, he did a piece for Disklavier. He built a real-time feedback system: a sequencer into a piano module, into an fx box, into an amp, to a mic, to a pitch-to-MIDI converter, and back into the sequencer. He ran this loop four times to make a four-track MIDI score. This gave him a great piece that took maybe half an hour to realise. Simple things can have wonderful results. He has a metaphor of a truck vs a moped: a system that knows too much has too much inertia and is difficult to drive; something smaller, like a moped, is more versatile.
In one hour in 1996 he made the "Lowtech QWERTYCaster": a keyboard, a mouse and a joystick attached together like a demented keytar.
In 1998, he did some real-time synthesis controllable via the internet (FMOL). It was an opera with electronic parts composed by internet users. He formed a trio around this instrument, the FMOL Trio, in 2001-2002 (their recordings are downloadable). He started doing workshops with kids and is showing a video of working with 5-year-olds. They all learned the instrument and then did a concert. This is adorable. He put the different sections in different kinds of hats. The concert is full of electronic sounds and kid noises.
He learned that you have to make things that people like.
Then he got a real job in academia.
Why were so many new interfaces being invented, but nobody uses them?
In a traditional instrument, the performer has to do everything. In laptop music, the computer does everything and the performer only changes things. In live computer music, the control is shared between the performer and the instrument.
Visual feedback becomes very important. Laptop musicians care more about the screen than the mouse. This inspired the reactable, which he began in 2003.
Goal: maximised bandwidth, meaning getting the most from the human, and the most easily understandable communication from the computer to the human. He decided to go with a modular, tabletop system. He wanted to make instruments that are fun to learn rather than painful.
A round table has no leader position. Many people can play at once. You can become highly skilled at it.
When they started conceiving it, they were not thinking about technology. They ended up developing a lot of technology, like reacTIVision, which is open source.
They posted some videos on YouTube and became very popular. They started selling tables to museums. People like them and the tables are not breaking down.
They started a company. Three people work for the company and the presenter is still at the university. They've done some mobile apps.
The team quit going to NIME when the company started. They didn't have new things to say, and reviewers didn't think small steps were important.
Instruments need to be able to make bad sounds as well as good ones, or else it is just a toy.
The Snyderphonics Manta, a Novel USB Touch Controller
What is the Manta?
A USB touch controller for audio and video. It uses the HID spec and capacitive sensing. 6-8 ms latency (with some jitter). Portable and somewhat tough. Bus powered. It's slightly like a monome.
Design features
It has a fixed layout because it's a hardware device, and it is discrete: 48 hexagonal pads, each of which outputs how much of its surface area is covered, at slightly less than 8-bit resolution. The sliders at the top have 12-bit resolution and are single touch.
The hexagon grid is inspired by just-intonation lattices, based on Erv Wilson's and R. H. Bosanquet's papers and graphs.
If every sensor is a note, you have 6 neighbours
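A rough sketch of how a hexagonal just-intonation lattice can work. The six neighbour directions are a property of any hex grid; the interval ratios assigned to the two axes (a perfect fifth and a major third, in the style of Wilson's diagrams) are my own illustrative choice, not necessarily the Manta's actual mapping.

```python
# Hexagonal just-intonation lattice sketch (illustrative axis intervals).
from fractions import Fraction

# Axial hex coordinates: each pad has exactly six neighbours.
NEIGHBOUR_DIRECTIONS = [(+1, 0), (-1, 0), (0, +1), (0, -1), (+1, -1), (-1, +1)]

# Assumed mapping: one axis steps by a perfect fifth (3/2),
# the other by a major third (5/4).
FIFTH = Fraction(3, 2)
THIRD = Fraction(5, 4)

def ratio_at(q, r):
    """Frequency ratio of the pad at axial coordinate (q, r),
    folded into a single octave."""
    ratio = FIFTH ** q * THIRD ** r
    while ratio >= 2:
        ratio /= 2
    while ratio < 1:
        ratio *= 2
    return ratio

def neighbours(q, r):
    """The six pads adjacent to (q, r)."""
    return [(q + dq, r + dr) for dq, dr in NEIGHBOUR_DIRECTIONS]

print(len(neighbours(0, 0)))   # 6
print(ratio_at(1, 0))          # 3/2, one step along the fifths axis
print(ratio_at(2, 0))          # 9/8, two fifths folded back into the octave
```

The payoff of the hex layout is visible here: every pad sits one "consonant interval" away from each of its six neighbours.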
It has LED feedback under the sensors (you can turn this off), inspired by the monome.
The touch sensing is inspired by the Buchla 100-series controller.
It has velocity detection, based on two consecutive samples.
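A minimal sketch of velocity-from-two-samples as the talk describes it: take the first two readings after a pad crosses the touch threshold and use their difference as the strike velocity. The function name, threshold and scaling constants are illustrative, not from the Manta firmware.

```python
# Velocity from two consecutive capacitive samples (illustrative constants).
THRESHOLD = 10     # assumed touch-on threshold (raw sensor units)
MAX_DELTA = 120    # assumed sample-to-sample rise mapping to full velocity

def strike_velocity(samples):
    """Return a 0-127 velocity from a stream of raw pad readings,
    or None if the pad never produced two consecutive touches."""
    prev = None
    for value in samples:
        if value >= THRESHOLD:
            if prev is None:
                prev = value      # first above-threshold sample
                continue
            # Second consecutive above-threshold sample: the rise between
            # the two approximates how fast the finger landed.
            delta = max(0, value - prev)
            return min(127, round(127 * delta / MAX_DELTA))
        prev = None               # touch released; start over
    return None

print(strike_velocity([0, 40, 100]))   # fast rise -> high velocity
print(strike_velocity([0, 30, 35]))    # slow rise -> low velocity
```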
Uses
Microtonal keyboard, live processing, etc.
Future
Something called the MantaMate will allow it to control analog synthesisers.
Latency improvement in sensor wireless transmission using IEEE 802.15.4
MO - Interlude Project: motivations
A multipurpose handheld unit with RF capabilities and a network-oriented protocol, with a custom messaging schema to reduce latency in a small package.
He's showing a video of tiny grabbable objects with accelerometers in them. They have a nice feel to them. You could use them like reactable elements that send out data, but the ones he's showing are much more multipurpose.
The unit can be connected to accessories; it's a radio-controlled wireless device that can stream sensor data and pre-process its own data to cut down on bandwidth usage. They use Zigbee, which is not as fast as wifi but is low power.
They use off-the-shelf modules so they don't need to deal with the radio layer directly, but this requires some middleware, and digitizing is surprisingly slow. So they decided to do an all-in-one solution using an embedded modem. This is 54 times faster! Plus it's generic and scalable.
Given that this is IRCAM, I suspect that it’s expensive.
The accessories declare themselves to the device and contain their own specs.
The presenter wants to make this open source, but needs to get that through internal IRCAM politics and to "clean the code," a process that seems to sometimes drag on for people.
HIDuino
He's describing it as "driverless" MIDI: plug and play across many OSes. The Arduino has a large open source community and a very usable language, and is a good platform for prototyping.
Arduinos are mostly limited to serial over USB (except the Teensy, according to the last guy). Students had major software issues: the hardware was easy, but the middleware was a pain in the arse and added a lot of latency. They tried a MIDI shield added onto the Arduino, which was not quite good enough.
The 2010 Arduino had a programmable USB chip, so it could speak different protocols.
There is a LUFA API for doing USB programming.
This means they can use an Arduino directly as a HID device. They also have a complete implementation of the MIDI spec.
The Arduino still needs to be flashed over serial.
HIDuino is quite good for output, especially musical robotics. It creates a standardised interface for controlling robots through MIDI, which is actually pretty cool.
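The appeal of MIDI as a robot-control interface is that an actuator command is just an ordinary three-byte channel message, so any sequencer can drive the robot. A minimal sketch; the actuator-to-note assignment is hypothetical.

```python
# Building a raw MIDI note-on message; any class-compliant MIDI host
# could send this to a HIDuino-based robot.

def note_on(channel, note, velocity):
    """Build a 3-byte MIDI note-on message (channel 0-15, data 0-127)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Hypothetical mapping: note 60 fires a drum-striking arm; velocity
# could set the strike force.
STRIKE_ARM = 60
msg = note_on(0, STRIKE_ARM, 100)
print(msg.hex())   # 903c64
```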
They are working on the USB audio class specification. This will require a chip upgrade, as the current Arduino only does 8-bit audio. They want to build a multichannel audio device.
Fortunately, this guy hates MIDI, so they're looking at ECM (Ethernet Control Model), which would enable OSC over USB.
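If OSC over USB lands, the messages themselves would be the standard OSC wire format: a null-padded address, a type-tag string, then big-endian arguments, everything aligned to 4 bytes. A minimal encoder sketch; the address `/motor/speed` is made up for the example.

```python
# Minimal OSC message encoder (float32 arguments only).
import struct

def osc_pad(b):
    """Null-pad bytes to the next 4-byte boundary (OSC strings always
    get at least one terminating null)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *floats):
    """Encode an OSC message whose arguments are all float32."""
    typetags = "," + "f" * len(floats)
    out = osc_pad(address.encode()) + osc_pad(typetags.encode())
    for f in floats:
        out += struct.pack(">f", f)   # big-endian float32
    return out

msg = osc_message("/motor/speed", 0.5)
print(len(msg) % 4)   # 0 -- OSC messages are always 4-byte aligned
```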
This looks promising, especially for projects that don't require the oomph of a purpose-built computer.
http://code.google.com/p/hiduino/
Live blogging NIME keynote: Adventures in Phy-gital Space (David Rokeby)
He started from a contrarian response to the basic characteristics of the computer. Instead of being logical, he wanted to be intuitive. He wanted things to be bodily engaged. The experience should be intimate.
Put the computer out into physical space. Enter into a space with the computer.
He did a piece called Reflexions in 1983. He made an 8×8-pixel digital camera that ran at 30 fps, which was quite good for the time.
He made a piece that required very jerky movement. The computer could understand those movements, and he made algorithms based on them. This was not accessible to other people; he had internalised the computer's requirements.
In 1987 he made a piece called Very Nervous System. He found it easier to work with amateur dancers because they don’t have a pre-defined agenda of how they want to move. This lets them find a middle space between them and the computer.
The dancer dances to the sound, and the system responds to the movement. This creates a feedback loop. The real delay is not just the framerate, but the speed of the mind. It reinforces particular aspects of the person within the system.
The system seemed to anticipate his movements, because consciousness lags movement by about 100 milliseconds. We mentally float behind our actions.
This made him feel like time was sculptable.
He did a piece called Measure in 1991. The only sound source was a ticking clock, but it was transformed based on user movement near the clock and the shape of the gallery space. He felt he was literally working with space and time.
He began to think of analysing the camera data as if it were a sound signal. Time-domain analyses could extract interesting information, and responsive sound behaviours could respond to different parts of the movement spectrum: fast movements were high frequency and applied to one instrument; mid-speed movement was midrange; slow was low frequency.
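A rough sketch of that idea: treat per-frame activity (e.g. total changed pixels) as a signal and split it into slow/mid/fast "movement bands" with moving averages. The synthetic signal and window sizes are mine, purely to illustrate the band-splitting.

```python
# Movement-as-signal sketch: band-split a per-frame activity stream.
import random

random.seed(1)
# Fake activity signal: total changed pixels per frame at 30 fps.
activity = [abs(random.gauss(0, 1)) for _ in range(300)]

def moving_average(xs, window):
    """Causal moving average with a shorter window at the start."""
    return [sum(xs[max(0, i - window + 1):i + 1]) / min(i + 1, window)
            for i in range(len(xs))]

slow = moving_average(activity, 30)              # ~1 s trends: "low frequency"
mid = moving_average(activity, 8)                # mid-speed movement
fast = [a - s for a, s in zip(activity, slow)]   # residual jitter: "high frequency"

# Each band could drive its own instrument: fast -> percussive voice,
# mid -> mid-range voice, slow -> drones.
```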
With just very blocky 8×8 pixels, he got more responsiveness than a Kinect seems to have now. Of course, this is on the computer's terms, rather than tracking each finger, etc.
There is no haptic response. This means that if you throw yourself at a virtual drum, your body has to provide counterforce, using isometric muscular tension. The virtual casts a real shadow into the interacting body.
Proprioception: How the body imagines its place within a virtual context.
This sense changes in response to an interface. Virtual spaces create an artificial state of being.
He did a piece called Dark Matter in 2010, using several cameras to track a large space and defining several "interactive zones." He used his iPhone to map a virtual sculpture that people could sonically activate by touching it. He ran it in pitch dark, using IR cameras to track people.
After spending time in the installation, he began to feel a physical imbalance. It felt like he was moving heavy things, but he wasn't. To an outside observer it can look like a neurological disorder. The performer performs to the interface, navigating an impenetrable internal space. The audience can see it as an esoteric ritual.
This was a lot like building mirrors. He got kind of tired of doing this.
To what degree should an interface be legible? If people understand that something is interactive, they spend a bunch of time trying to figure out how it works. If they can't see the interaction, they can engage the work more directly.
The audience has expectations around traditional instruments. New interfaces create problems by removing the context in which performers and audiences communicate.
Does the audience need to know a work is interactive?
Interactivity can bring a performer to a new place, even if the audience doesn't see the interaction. He gave an example of a theatre company using this for a soundtrack.
He did a piece from 1993-2000 called Silicon Remembers Carbon: cameras watching IR shadows of people, with video projected onto sand. Sometimes pre-recorded shadows accompanied people, and people walking across the space cast shadows onto the video.
If you project a convincing fake shadow, people will think it’s their shadow, and will follow it if it moves subtly.
He did a piece called Taken in 2002. It shows video of all the people who have been there before, and a camera locks onto a person's head and shows a projection of just that head, right in the middle.
Designing interfaces now will profoundly affect future quality of life, he concludes.
Questions
External feedback loops can make the invisible visible. It helps us see ourselves.
How musicians create augmented musical instruments
Augmented instruments are easy for performers of the pre-existing instruments. The musicians themselves have the expertise, so let them do the design: come up with a system that lets them easily make augmentations.
The Augmentalist was designed collaboratively.
Gestures go to the instrument, the sensors or both. Sound goes into a DAW, where the processing happens.
Photo of a slider bar taped to a guitar: quick and easy!
Instrument design sessions with 10 pop musicians: the experimenters presented the system and its updates, then ran a testing session; the instrumentalists played around, then made suggestions for changes.
Guitarists put tilt measurement on the headstock and a slider on the guitar body, with sensors mapped to typical pedal fx.
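A sketch of the kind of mapping described: an accelerometer on the headstock, tilt angle scaled onto a pedal-style effect parameter. Mapping tilt to a wah-like filter cutoff, and all the ranges, are my illustrative guesses, not The Augmentalist's actual code.

```python
# Tilt-to-effect mapping sketch (illustrative ranges and target parameter).
import math

def tilt_degrees(ax, az):
    """Tilt of the guitar neck from accelerometer x/z components."""
    return math.degrees(math.atan2(ax, az))

def tilt_to_cutoff(angle, lo_hz=400.0, hi_hz=2200.0):
    """Map a -45..+45 degree tilt range onto a filter cutoff in Hz,
    clamping anything outside that range."""
    t = (max(-45.0, min(45.0, angle)) + 45.0) / 90.0
    return lo_hz + t * (hi_hz - lo_hz)

print(round(tilt_to_cutoff(0)))    # 1300 -- neck level, mid cutoff
print(round(tilt_to_cutoff(45)))   # 2200 -- neck up, wah fully open
```

The clamping matters in practice: raising the neck past the mapped range should pin the effect rather than wrap around.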
An MC stuck things to a mic and to himself: a slider on the mic, an FSR on the mic body, an accelerometer on his hand. His movements went to a pitch shifter.
Interesting results: most performers tried to use similar movements, like moving the head and body, though only one person kept this. All instruments used tilt.
Hundreds of gesture/sound mappings were tried. Most were considered successful, but not all were kept. Guitarists tend to develop the same augmentations as each other, but some unusual things were developed too.
Musicians start with the technology rather than the gesture. Technology is seen as the limitation, so they start with its limitations.
All the musicians believed they could come to master their systems.
Over time, the musicians make the fx more subtle and musical.
Can people swap instruments? Yes; they felt each other's instruments were easy to use.
One guitarist uses the system with his band, and they're gigging with it.
It takes up a lot of brain cycles to use the extensions. It takes a lot of practice.
Every musician reported maximum enjoyment at every session.
This kind of user-led design can create new avenues for research.
Listening to your brain
Multimodal interfaces for musical collaboration.
Physiological sensors sense brainwaves, heart rate and skin response, using EEG, etc. The sensing systems are non-invasive, wearable, portable, and stream their signals wirelessly.
They want to add these to the reactable.
In tests, the emitters were able to tell that it was their own signals being emitted.
Live blogging NIME - IRCAM: assigning gesture to sound
They play a sound, then ask people to represent the sound as a gesture, and then use that gesture to control a new sound. The sound-to-gesture part is an experimental study, which was a very good idea!
In the existing literature: tapping a beat is a well-known gesture; so is body motion to music; and, more recently, mimicking instrumental performances (e.g. air guitar). Is sound tracing a way of sketching a sound?
Gaver says musical listening focuses on acoustic properties, while everyday listening focuses on cause.
Categorisation of sounds involves the sound sources: people will categorise door sounds together, even if they are very different sonically.
Will subjects try to mimic the origins of causal sounds, e.g. mime slamming a door?
Will they trace non-causal sounds?
They played kitchen sounds, and kitchen sounds convolved with white noise.
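Convolving a sound with white noise smears its temporal detail, hiding the causal "event" while keeping its broad spectral character, which is presumably why it works as the non-causal stimulus. A toy sketch with a synthetic click standing in for a sharp kitchen sound:

```python
# Stimulus-transformation sketch: convolve a transient with white noise.
import random

random.seed(0)

def convolve(x, h):
    """Direct-form discrete convolution."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

click = [1.0] + [0.0] * 15                   # stand-in for a sharp sound
noise = [random.uniform(-1, 1) for _ in range(64)]
smeared = convolve(click, noise)

# The single-sample transient is now spread across the noise's full length,
# so the "event" is no longer localisable in time.
print(len(smeared))   # 79
```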
They track the subject's hand position. Each subject gets some time to work out and practice her gesture, then records it three times. Then they ask the subject to watch the video and narrate it.
The transformed sounds are described more metaphorically. For the non-transformed sounds, the object and the action are described, rather than the sound.
Live blogging NIME papers - Gamelan Elektrika
A MIDI gamelan at MIT.
The slide shows the URL Supercollider.ch.
They wanted flexible tuning for the gamelan. Evan Ziporyn worked on this project. They got Alex Rigopulos... something about touring with Kronos. This is going on too long about name dropping and not enough about how it works. I'm not even sure yet what the heck this is, but Media Lab sure is cool.
This must be a really long time slot, because she has not talked about a technical issue yet.
Low latency is important. I think she accidentally let slip that this uses Ableton, thus revealing a technical issue.
MIT people are often very pleased with themselves.
Version 1 used urethane bars with piezo sensors, so they switched to FSRs and a different material. I didn't catch how it senses damping.
The reong has 5 sensors per pot and FSRs for damping: hit the middle, touch something to damp.
The gongs have capacitive disks on one side and piezos on the other, to sense strikes and damping.
SuperCollider and Ableton on the backend handle the tuning and sample playing.
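A sketch of what a flexible-tuning backend has to do: look up incoming MIDI notes in a per-degree ratio table instead of assuming 12-tone equal temperament. The base frequency, the five-degree cycle and the ratios below are generic placeholders, not Elektrika's actual tuning.

```python
# Flexible-tuning lookup sketch (placeholder 5-degree scale and ratios).
BASE_NOTE = 60      # MIDI note mapped to the base frequency
BASE_HZ = 220.0     # assumed base frequency
RATIOS = [1.0, 1.12, 1.20, 1.50, 1.60]   # made-up 5-degree cycle

def note_to_hz(note):
    """Map a MIDI note to a frequency via the ratio table; every
    len(RATIOS) degrees the cycle repeats an octave higher."""
    degree = (note - BASE_NOTE) % len(RATIOS)
    octave = (note - BASE_NOTE) // len(RATIOS)
    return BASE_HZ * RATIOS[degree] * (2.0 ** octave)

print(note_to_hz(60))   # 220.0
print(note_to_hz(65))   # 440.0 -- five degrees up wraps the cycle
```

Swapping the `RATIOS` table per instrument (or per pair of instruments, for paired detuned tuning) is the whole point of going electronic here.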
Samsung laughed at them when approached for sponsorship, but they sure showed them! NIME is thus invited to share in the gloating over the amazing skillz of MIT.
Instrument demo fail.
Question: why not use a normal percussion controller? Answer: so it can be played like a regular gamelan.
The monitoring situation is a problem for them. They use 4 speakers to monitor and the house speakers to play.
Live blogging NIME papers - electromagnetically sustained Rhodes piano
Make a Rhodes piano that doesn't decay. Notes can start from a strike or from the excitation alone. The examples sound like an EBow made for the Rhodes.
The Rhodes has inharmonic overtones, especially on the strike. The pickup gets mostly integer multiples of the fundamental, especially even partials.
The actuator is an electromagnetic coil driven by a sine-tone generator at the fundamental. The pickup also picks up the actuator, so they remove it by adding a phase-inverted copy of the drive signal after the pickup. They can also use feedback to drive the tine, but that causes out-of-control feedback, so they sense just the actuator's contribution and subtract it out, leaving only the tine, thus getting increasingly Rube Goldberg.
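A toy model of that cancellation step: the pickup hears tine plus a scaled copy of the actuator drive, so subtracting the separately sensed drive signal (i.e. adding its phase inverse) leaves just the tine. All signals and the coupling gain here are synthetic; the real system does this in analog circuitry.

```python
# Actuator-bleed cancellation, toy version with synthetic signals.
import math

n = 100
drive = [math.sin(2 * math.pi * 0.05 * i) for i in range(n)]          # actuator sine
tine = [0.3 * math.sin(2 * math.pi * 0.05 * i + 1.0) for i in range(n)]  # tine vibration
bleed_gain = 0.8                                                      # assumed coupling

# Pickup output = tine + actuator bleed.
pickup = [t + bleed_gain * d for t, d in zip(tine, drive)]

# Subtract the separately-sensed actuator signal, scaled by the coupling:
cleaned = [p - bleed_gain * d for p, d in zip(pickup, drive)]

err = max(abs(c - t) for c, t in zip(cleaned, tine))
print(err < 1e-9)   # True -- only the tine signal remains
```

In the analog version the hard part is matching the coupling gain and phase; any mismatch leaves residual drive signal in the output.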
There are pros and cons to each approach. They measure aftertouch for control using a pressure sensor below each key. Appropriately, all the signal processing is analog.