Alas, I’ve missed the auction of the head of St Vitalis of Assisi, which I guess is just as well as it was expected to go for at least £700. Still, I kind of feel like my entire life as an RC might have been heading for that purchase. I’ve gone on saint-head related pilgrimages and generally have a fascination with relics….
As I see it, the major problem with having a first class relic like this one is where to put it. St Vitalis is the patron saint of STIs and it doesn’t seem fair to keep such an obviously useful saint to oneself. The owner of the head really ought to build a chapel for it. As I don’t have any kind of space for such a construction, the head would be doubly beyond my means.
Indeed, as I live in a two room flat that’s already a bit overly full of stuff, storing the head until I could build a chapel would present a major problem.
I really don’t want a holy relic on display in my bedroom. A skull of any saint looking down on my bed would be a bit of a mood killer. I can’t decide if this particular saint would be better or worse than other saints. On the one hand, he is kind of appropriate, if you don’t mind his dead, judging eye sockets. But on the other hand, do I want to send the message to overnight visitors that supernatural help is required in addition to the normal precautions?
I think he could also be distracting in the living room. Alas, I don’t even have room for him in my living room. It’s already stuffed to the gills with rather too much furniture, two tubas, a bass amp and a synthesiser. I have no idea where I could even find space for a head.
He may have died in 1370, but the kitchen seems unhygienic even for a very old and holy skull. And the bathroom is humid, which might lead to corruption of the sort saints are supposed to be spared. A mouldy relic would not be very nice.
This leaves the toilet, which in some ways is the ideal space. I have unoccupied space on top of the cistern, where he could gaze down upon possibly afflicted areas as guests wee. It also gives the faithful a private place where they can take a moment to determine if the saint’s prayers might be helpful before invoking them, and/or possibly calling their local GUM clinic. On the other hand, it does seem somewhat disrespectful to the saint to perch his head in a loo.
(American readers of the linked BBC article should note that in British English, an “outhouse” is a kind of a shed. In American English, an outhouse is a privy. So moving from an outbuilding to a toilet would be a reduction in his circumstances.)
Alas, I’ve been unable to discover who bought the head, how much they paid or what their plans are. Do I want to know? I’m not sure.
Author: Charles Céleste Hutchins
Backstage at a BiLE gig
We played yesterday in Wolverhampton and I thought it went rather well. While we’re playing, we have a chat window open, so we can do some communication with each other. This is what went on in chat during our last piece:
Norah> :(
Les> reme
Les> why is norah sad?
Shelly> :(?
Norah> someone crashed?
Antonio> Antonio crashed
Norah> oh :(
Shelly> ack
Les> bummer
jorge> ohh sheeet
Antonio> next?
Norah> Les note!
chris> my wiimote is boken
chris> ok ill start
Antonio> cool
chris> ready?
Les> i am now
Norah> bang
Shelly> huh? firebell starts?
jorge> yes
chris> im clock
jorge> purrfect
Les> go?
Shelly> ack brb. start without me
Antonio> go go go
Shelly> bk
Shelly> ...test...
Norah> hi
Les> we need a better beater for that bell
Shelly> jorge can i have the spoon?
Les> eye contact!!
Shelly> chirs can u pass the small bell this way?
Les> sounding good, norah
Shelly> sounding GREAT!
Norah> thanks
Antonio> everything is crashing for me :(
Shelly> norah, ur patch sounds really coo1
Norah> it's being very magical today!
Shelly> GRANULATINGGGGGGGGGGGGGGGGGGGG BILE!
Norah> WOW
Norah> excellent transition guys
Shelly> i dont know what time it is by the way
Les> 10
Norah> 10:58
Norah> let's start winding down?
Les> 10:15?
Norah> 11:17
Les> 10:35
Les> nice
Shelly> NIIIIIIIIIIIIIIIIIIIIIIIIIICCCCCCCCCCCCEEEEEEEEEEEEEE!!!!!!!!!!!!!!!!!
Antonio> :)
Norah> that was super!
Antonio> is ther eone more?
chris> sh*t that was amazing!
jorge> super!!
Antonio> !!!
Shelly> nope!
Antonio> super fun times
Shelly> suppersuppersupper
Antonio> what's next?
My Journey through NIME creation, research and industry (by Sergi Jordà)
He got into computer music in the 80s and was really into George Lewis and the League of Automatic Composers. In 1989 he made PITEL, a machine listening and improvisation system, in Max.
He collaborated with an artist to make pig-meat sculptures that dance and listen to people. Robots made of pork. They showed this project in food markets. This is horrible and wonderful.
In 1994, his collaborator decided to be in the robot. So they set up a situation where an audience could torture a guy in a robotic exoskeleton. Actuators would poke at him and pull at him. The part attached to the mouth made him suffer a bit. The audience always asked for the mouth interface. This taught him stuff about audience interaction. Lesson 1: it’s not a good idea to let the audience torture people.
In 1997-98, they did an opera with robots and another exoskeleton. The system was too big and difficult to control. This looks like it was amazing. He’s describing it as a “terrible experience.”
In 1995, he did a piece for Disklavier. He made a real-time feedback system: a sequencer into a piano module, into an FX box, into an amp, into a mic, into a pitch-to-MIDI converter and back to the sequencer. He did this 4 times to make a 4-track MIDI score. This gave him a great piece that took maybe half an hour to realise. Simple things can have wonderful results. He has a metaphor of a truck vs a moped: a system that knows too much has too much inertia and is difficult to drive. Something smaller, like a moped, is more versatile.
In 1996 he made a “Lowtech QWERTYCaster” in one hour: a keyboard, a mouse and a joystick attached together like a demented keytar.
In 1998, he did some real-time synthesis controllable via the internet. It was an opera with electronic parts composed by internet users. He formed a trio around this instrument, the FMOL Trio, in 2001-2002 (their recordings are downloadable). He started doing workshops with kids and is showing a video of working with 5-year-olds. They all learned the instrument and then did a concert. This is adorable. He put the different sections in different kinds of hats. The concert is full of electronic sounds and kid noises.
He learned that you have to make things that people like.
Then he got a real job in academia.
Why were so many new interfaces being invented, but nobody uses them?
In a traditional instrument, the performer has to do everything. In laptop music, the computer does everything and the performer only changes things. In live computer music, the control is shared between the performer and the instrument.
Visual feedback becomes very important. Laptop musicians care more about the screen than the mouse. This inspired the reactable, which he began in 2003.
Goal: maximised bandwidth – get the most from the human and the most easily understandable communication from the computer to the human. He decided to go with a modular approach: a modular, tabletop system. He wanted to make instruments that were fun to learn rather than hurty.
A round table has no leader position. Many people can play at once. You can become highly skilled at it.
When they started conceiving it, they were not thinking about technology. They ended up developing a lot of technology, like reacTIVision, which is open source.
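(Not from the talk: since reacTIVision speaks TUIO, which is OSC over UDP on port 3333 by default, a sketch of what consuming its tracking data looks like might go something like this, using the python-osc package. The handler logic is my own illustration, not Reactable code.)

```python
# Listen for reacTIVision's TUIO "2Dobj" messages, which report the
# position and rotation of tracked fiducial markers on the table.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_2dobj(address, *args):
    # TUIO "set" messages carry: session id, fiducial id, x, y, angle, ...
    if args and args[0] == "set":
        session_id, fiducial_id, x, y, angle = args[1:6]
        print(f"fiducial {fiducial_id}: pos ({x:.2f}, {y:.2f}), angle {angle:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dobj", on_2dobj)
BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()
```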
They posted some videos on YouTube and became very popular. They started selling tables to museums. People liked it and the tables are not breaking down.
They started a company. Three people work for the company; the presenter is still at the uni. They’ve done some mobile iApps.
The team quit going to NIME when the company started. They didn’t have anything new to say, and reviewers didn’t think small steps were important.
An instrument needs to be able to make bad sounds as well as good ones, or else it is just a toy.
The Snyderphonics Manta, a Novel USB Touch Controller
What is the Manta?
A USB touch controller for audio and video. Uses the HID spec. Does capacitive sensing. 6-8 ms latency (with some jitter). Portable and somewhat tough. Bus powered. It’s slightly like a monome….
Design features
It has a fixed layout because it’s a hardware device, and it is discrete: 48 hexagonal pads, each of which outputs how much of its surface area is covered, at slightly less than 8-bit resolution. The sliders at the top have 12-bit resolution and are single touch.
The hexagonal grid is inspired by just intonation lattices, based on graphs in papers by Erv Wilson and R. H. Bosanquet.
If every sensor is a note, each note has six neighbours.
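(A minimal sketch, not the Manta’s actual indexing, of why a hexagonal layout gives each note six neighbours, using axial hex coordinates.)

```python
# The six offsets that surround any cell in an axial hexagon grid.
AXIAL_NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def neighbours(q: int, r: int) -> list[tuple[int, int]]:
    """Return the six pads adjacent to the pad at axial coords (q, r)."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_NEIGHBOURS]

# In a just intonation lattice, one axis might step by a fifth and the
# other by a third, so each pad borders six harmonically related pitches.
print(neighbours(0, 0))
```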
It has LED feedback under the sensors (you can turn this off) inspired by monome.
The touch sensing is inspired by the Buchla 100-series controller.
It has velocity detection, computed from two consecutive samples.
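(A sketch of how two-sample velocity detection might work: a hard strike covers a lot of pad surface between one sample and the next, so the first difference approximates strike speed. The names and scaling below are assumptions, not the actual Manta firmware.)

```python
PAD_MAX = 210   # assumed full-scale pad reading (slightly under 8 bits)

def onset_velocity(prev: int, curr: int) -> int | None:
    """Return a 0-127 velocity if a touch began between two samples."""
    if prev == 0 and curr > 0:                  # new touch this frame
        return min(127, curr * 127 // PAD_MAX)  # more area covered = faster hit
    return None                                 # no onset

print(onset_velocity(0, 30))    # gentle press -> low velocity
print(onset_velocity(0, 180))   # sharp strike -> high velocity
```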
Uses
Microtonal keyboard, live processing, etc
Future
Something called the MantaMate will allow it to control analogue synthesisers.
Latency improvement in sensor wireless transmission using IEEE 802.15.4
MO- Interlude Project Motivations
A multipurpose handheld unit with RF capabilities, a network-oriented protocol and a custom messaging schema to reduce latency, all in a small package.
He’s showing a video of tiny grabbable objects with accelerometers in them. They have a nice aspect. You could use them like reactable elements that send out data, but the ones he’s showing are way more multipurpose.
The unit can be connected to accessories; it’s a radio-controlled wireless device that can stream sensor data and pre-process it on board to cut down on bandwidth usage. They use ZigBee, which is not as fast as wifi but is low power.
They use off-the-shelf modules so they don’t need to mess with radio stuff directly. This does require some middleware, and digitising is surprisingly slow. So they decided to do an all-in-one solution using an embedded modem. This is 54 times faster! Plus it’s generic and scalable.
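(Not from the talk: a sketch of the kind of compact binary framing that cuts radio bandwidth. The actual MO messaging schema isn’t spelled out here, so the field names and sizes below are my assumptions.)

```python
import struct

# One frame: 1-byte device id, 2-byte sample counter, and three
# 16-bit signed accelerometer axes = 9 bytes total, little-endian.
FRAME = struct.Struct("<BHhhh")

def pack_frame(device_id: int, counter: int, ax: int, ay: int, az: int) -> bytes:
    return FRAME.pack(device_id, counter, ax, ay, az)

def unpack_frame(data: bytes) -> tuple:
    return FRAME.unpack(data)

frame = pack_frame(3, 1024, -512, 200, 16000)
print(len(frame), unpack_frame(frame))  # 9 bytes, far smaller than an OSC message
```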
Given that this is IRCAM, I suspect that it’s expensive.
The accessories declare themselves to the device and contain their own specs.
The presenter wants to make this Open Source, but needs to get that through internal IRCAM politics and to “clean the code” which is a process that seems to sometimes drag on for people.
HIDuino
He’s describing it as driverless MIDI. It’s plug and play across many OSes. Large Open Source community. Very usable language. Good platform for prototyping.
Arduinos are mostly limited to serial over USB (except the Teensy, according to the last guy). Students had major software issues: the hardware was easy, but the middleware was a pain in the arse and added a lot of latency. They tried a MIDI shield added on to the Arduino, which was not quite good enough.
The 2010 Arduino had a programmable USB chip, so it could use different protocols.
There is a LUFA API for doing USB programming.
This means they could use an Arduino directly as a HID. They also have a complete implementation of the MIDI spec.
The Arduino still needs to be flashed over serial.
HIDuino is quite good for output, especially musical robotics. It creates a standardised interface for controlling robots through MIDI, which is actually pretty cool.
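(A sketch of the payoff: because a HIDuino board enumerates as a class-compliant USB-MIDI device, any stock MIDI library can drive it with no custom middleware. This uses the mido package; the port name and the note-to-actuator mapping are assumptions.)

```python
import mido

with mido.open_output("HIDUINO") as port:     # hypothetical port name
    # To a musical robot, note-on might mean "fire solenoid 60 at mid force".
    port.send(mido.Message("note_on", note=60, velocity=64))
    port.send(mido.Message("note_off", note=60))
```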
They are working on the USB audio class specification. This will require a chip upgrade, as the current Arduino only does 8-bit audio. They want to work on a multichannel audio device.
Fortunately, this guy hates MIDI, so they’re looking at ECM (Ethernet Control Model), which would enable OSC over USB.
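(Not a demo from the talk: if the board shows up as a USB Ethernet (ECM) network interface, OSC becomes ordinary UDP sent to the device’s address. A sketch with python-osc; the IP, port and address pattern are all assumptions.)

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.7.2", 9000)  # hypothetical device address
client.send_message("/led/ring", [1, 0.5])     # hypothetical OSC address
```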
This looks promising, especially for projects that don’t require the oomph of a purpose-built computer.
http://code.google.com/p/hiduino/
Live blogging NIME keynote: Adventures in Phy-gital Space (David Rokeby)
He started from a contrarian response to the basic characteristics of the computer. Instead of being logical, he wanted to be intuitive. He wanted things to be bodily engaged. The experience should be intimate.
Put the computer out into physical space. Enter into a space with the computer.
He did a piece called Reflexions in 1983. He made an 8×8 pixel digital camera that ran at 30 fps, which was quite good for the time.
He made a piece that required very jerky movement. The computer could understand those movements and he made algorithms based on them. This was not accessible to other people; he had internalised the computer’s requirements.
In 1987 he made a piece called Very Nervous System. He found it easier to work with amateur dancers because they don’t have a pre-defined agenda of how they want to move. This lets them find a middle space between them and the computer.
The dancer dances to the sound, and the system responds to the sound. This creates a feedback loop. The real delay is not just the framerate, but the speed of the mind. It reinforces the particular aspects of the person within the system.
The system seemed to anticipate his movements, because consciousness lags movement by about 100 milliseconds. We mentally float behind our actions.
This made him feel like time was sculptable.
He did a piece called Measure in 1991. The only sound source was a ticking clock, but it was transformed based on user movement near the clock and the shape of the gallery space. He felt he was literally working with space and time.
He began to think of analysing the camera data as if it were a sound signal. Time-domain analyses could extract interesting information, and responsive sound behaviours could respond to different parts of the movement spectrum: fast movements were high frequency and applied to one instrument, mid-speed movement was midrange, and slow movement was low frequency.
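(Not Rokeby’s code: a sketch of the idea of treating camera data as a signal. Per-frame pixel change becomes a time-domain signal that is split into slow/mid/fast bands, one per instrument. The filter design and band edges are my assumptions.)

```python
import numpy as np
from scipy.signal import butter, lfilter

def movement_energy(frames: np.ndarray) -> np.ndarray:
    """Total pixel change per frame for a (time, height, width) array."""
    return np.abs(np.diff(frames, axis=0)).sum(axis=(1, 2))

def band(signal: np.ndarray, lo: float, hi: float, fps: int = 30) -> np.ndarray:
    """Band-pass the movement signal between lo and hi Hz."""
    b, a = butter(2, [lo / (fps / 2), hi / (fps / 2)], btype="band")
    return lfilter(b, a, signal)

frames = np.random.rand(300, 8, 8)    # stand-in for 10 s of 8x8 video
energy = movement_energy(frames)
slow = band(energy, 0.1, 1.0)         # slow movement -> low-frequency instrument
mid = band(energy, 1.0, 4.0)          # mid-speed movement -> midrange instrument
fast = band(energy, 4.0, 12.0)        # fast movement -> high-frequency instrument
```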
With just very blocky 8×8 pixels, he’s got more responsiveness than a Kinect seems to have now. Of course, this is on the computer’s terms rather than tracking each finger, etc.
There is no haptic response. This means that if you throw yourself at a virtual drum, your body has to provide counterforce, using isometric muscular tension. The virtual casts a real shadow into the interacting body.
Proprioception: How the body imagines its place within a virtual context.
This sense changes in response to an interface. Virtual spaces create an artificial state of being.
He did a piece called Dark Matter in 2010, using several cameras to track a large space, defining several “interactive zones.” He used his iPhone to map a virtual sculpture that people could sonically activate by touching it. He ran it in pitch dark, using IR cameras to track people.
After spending time in the installation, he began to feel a physical imbalance. It felt like he was moving heavy things, but he wasn’t. To an outside observer, it can look like a neurological disorder. The performer performs to the interface, navigating an impenetrable internal space. The audience can see it as an esoteric ritual.
This was a lot like building mirrors. He got kind of tired of doing this.
To what degree should an interface be legible? If people understand that something is interactive, they spend a bunch of time trying to figure out how it works. If they can’t see the interaction, they can more directly engage the work.
The audience has expectations around traditional instruments. New interfaces create problems by removing the context in which performers and audiences communicate.
Does the audience need to know a work is interactive?
Interactivity can bring a performer to a new place, even if the audience doesn’t see the interaction. He gave an example of a theatre company using this for a soundtrack.
He did a piece from 1993-2000 called Silicon Remembers Carbon. Cameras looking at IR shadows of people. Video is projected onto sand. Sometimes pre-recorded shadows accompanied people. And people walking across the space shadowed the video.
If you project a convincing fake shadow, people will think it’s their shadow, and will follow it if it moves subtly.
He did a piece called Taken in 2002. It shows video of all the people that have been there before. And a camera locks on to a person’s head and shows a projection of just the head, right in the middle.
Designing interfaces now will profoundly affect future quality of life, he concludes.
Questions
External feedback loops can make the invisible visible. It helps us see ourselves.
Grid-based laptop orchestras
Lorks (laptop orchestras) use an orchestral metaphor. They sometimes use real instruments as well. This is a growing art form.
Configuration of software for each laptop is a pain in the arse: custom code, middleware (ChucK, etc.), HIDs, system config, etc. This can be a “nightmare,” and it’s painful for audiences to watch. Complex setups and larger ensembles have more problems.
GRENDL: grid-enabled deployment for laptop orchestras
These kinds of problems are why grid computing was invented: resource sharing across multiple computers. The groups sharing computers are called organisations. What if a lork was an organisation?
They didn’t want to make musicians learn new stuff. They wanted GRENDL to be a librarian, not another source of complexity. It would deliver scores and configurations.
It deploys files; it does not get used while playing. Before a performance, the scores are put on a master computer, which distributes them to the ensemble laptops.
GRENDL executes scripts on the laptops before each piece. Once the piece finishes, the laptop returns to its pre-performance state. The composer writes the scripts for each piece.
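(Not GRENDL itself: a toy sketch of the deploy/run/restore cycle just described. The paths and the “setup.sh” hook are purely illustrative assumptions.)

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def run_piece(piece_dir: Path) -> None:
    """Stage a piece's files, run its setup script, then restore."""
    stage = Path(tempfile.mkdtemp(prefix="grendl_"))
    try:
        for f in piece_dir.glob("*"):            # deliver score + config files
            if f.is_file():
                shutil.copy(f, stage)
        subprocess.run(["sh", str(stage / "setup.sh")], check=True)
        input("piece running; press return when it finishes")
    finally:
        shutil.rmtree(stage)                     # back to pre-performance state

run_piece(Path("pieces/granular_study"))
```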
GRENDL is a wrapper for the SAGA API.
They’re trying to make the compositions more portable with tangible control. They have a human- and computer-readable card with QR codes, which will be simpler to deploy.
They’ve been using this for a year and it has surpassed expectations. Their todo list includes a server application, rather than specifying everything at the command line with a script. They’re going to simplify going from composition to composition by using OSC commands.
This makes them rethink how to score for a lork, including archiving and metadata.
Grid systems do not account for latency and timing issues, so their role in performance is so far limited. They have run a piece from GRENDL.
How do you recover when things go titsup? How do you debug? Answer: it’s the composer’s problem. Things going wrong means segfaults.
The server version gives better feedback. Each computer will now report back which step borked.
Philosophical question: who owns the instrument? The composer? The player? Their goal is to let composers write at the same sort of level as they would for real orchestras.
Live-blogging NIME: MobileMuse: integral music control goes mobile
Music sensors and emotion
Integral musical control involves state and physical interaction. The performer interacts with other performers, with the audience and with the instrument. Sounds come from emotional states. Performers normally interact by looking and hearing, but these guys have added emotional state.
Audiences also communicate in the same way. This guy wants to measure the audience.
Temperature, heart rate, respiration, EEG and other things you can’t really attach to an audience.
Measurements are sent to a pattern recognition system. The performer wears sensors.
He’s showing a graph of emotional states where a performer’s state and an audience member’s state track almost exactly.
They actually do attach things to the audience. This turns out to be a pain in the arse. They now have a small sensor thing called a “fuzzball” which attaches to a mobile phone.
Despite me blogging this from my phone, I find it hugely problematic that this level of technology and economic privilege would be required to even go to a concert….
They monitor lie-detector sorts of things. The mobile phone demodulates the signals, and the phone can plot them. There is a huge mess of licence issues around connecting hardware to the phone, so they encode the data into the audio in.
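(The fuzzball’s encoding isn’t described, so this is only a toy illustration of the general trick of sending sensor data through the phone’s audio input: a value modulates a tone’s frequency, and the phone estimates that frequency from zero crossings. The 1000 Hz + value scheme is entirely my invention.)

```python
import numpy as np

def tone_frequency(block: np.ndarray, sample_rate: int = 44100) -> float:
    """Estimate the dominant tone frequency from zero crossings."""
    crossings = np.count_nonzero(np.diff(np.sign(block)))
    return crossings * sample_rate / (2 * len(block))

t = np.arange(4410) / 44100               # one 0.1 s audio block
block = np.sin(2 * np.pi * 1072 * t)      # encodes a heart rate of 72
print(tone_frequency(block))              # roughly 1072 -> a value of about 72
```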
They did a project where a movie’s scenes and their order were set by the audience’s state.
The application is open source.
How musicians create augmented musical instruments
Augmented instruments are easy for performers of the pre-existing instruments. Musicians themselves have the expertise, so let them do the design. Come up with a system to let them easily do augmentations.
The Augmentalist was designed collaboratively.
Gestures go to the instrument, to sensors, or to both. Sound goes into a DAW, where processing happens.
Photo of a slider bar taped to a guitar: quick and easy!
Instrument design sessions with 10 pop musicians: the experimenters presented the system and updates, then there was a testing session, then the instrumentalists played around, then the instrumentalists made suggestions for changes.
Guitarists put tilt measurement on the head and a slider on the guitar body, with sensors mapped to typical pedal FX.
An MC stuck things to a mic and to himself: a slider on the mic, an FSR on the mic body and an accelerometer on his hand. Movements went to a pitch shifter.
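(Not the Augmentalist itself: a sketch of the commonest mapping from these sessions, with instrument tilt, derived from an accelerometer, driving a pedal-style effect as a MIDI controller. The port name and CC number are assumptions.)

```python
import math
import mido

def tilt_to_cc(ax: float, ay: float, az: float) -> int:
    """Map tilt angle (0-90 degrees) to a 0-127 MIDI controller value."""
    tilt = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    return max(0, min(127, int(abs(tilt) / 90 * 127)))

with mido.open_output("DAW Input") as port:   # hypothetical port name
    cc = tilt_to_cc(0.4, 0.0, 0.9)            # guitar neck tipped up a bit
    port.send(mido.Message("control_change", control=1, value=cc))
```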
Interesting results: most performers tried to use similar movements, like moving the head and body, but only one person kept this. All instruments used tilt.
Hundreds of gesture/sound mappings were tried. Most were considered successful, but not all were kept. Guitarists tend to develop the same augmentations as each other, but some unusual things were developed also.
Musicians start with the technology rather than the gesture. Technology is seen as the limitation, so they start with its limitations.
All musicians believed they could come to master the systems.
Over time, the musicians make the FX more subtle and musical.
Can people swap instruments? Yes, and they felt each other’s instruments were easy to use.
One guitarist uses the system with his band and they’re gigging with it.
It takes up a lot of brain cycles to use the extensions, and it takes a lot of practice.
Every musician had maximum enjoyment at every session.
This kind of user-led thing can create new avenues for research.