Script to modify Wacom tablet settings to reflect a changed screen geometry

Update

This script has now been replaced with a better version: http://www.celesteh.com/blog/2013/05/26/a-better-script-for-wacom-with-screen/. You will want to use the newer script instead.

Why you would need this and how to use it

Ok, let’s say you have a tablet computer, you want to mirror your display to a projector, and you want to use your stylus. Your screen geometry may change, so that instead of using your whole 16:9 screen, you’re only using a 4:3 box in the middle. The problem you may run into is that the stylus mapping does not adjust, so where you point the stylus may differ significantly from where the pointer appears on the screen. This script fixes that.
There are some dependencies. Make sure you have xsetwacom by typing:

which xsetwacom

If it doesn’t give you a path, you need to install the wacom driver package (which provides xsetwacom):

sudo apt-get install xserver-xorg-input-wacom

You also need wcalc, which is not installed by default. Type:

sudo apt-get install wcalc
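wcalc is a command line calculator; the script relies on it for the floating point division that bash’s built-in arithmetic can’t do. A small example:

wcalc -q 1366/768

This prints the screen ratio as a decimal.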

The script is below. To save it, copy it to the clipboard starting from just below the ‘Script’ subheader and ending just above the ‘Commentary’ subheader. Then type the following in the terminal:

cd
mkdir -p bin
cd bin
touch stylus_script.sh
chmod +x stylus_script.sh
gedit stylus_script.sh

The -p flag means mkdir won’t complain if bin already exists. Just carry on.
Gedit should open with a blank file. Paste the script into that file. Before you save and quit, you will need to make a small change to the top of the script to reflect your own default screen geometry. To find this out, unplug all projectors, etc., open your system settings and look at Displays. Set your display to however you normally have it. The resolution should have two numbers; in my case they are 1366 x 768. The first is the width and the second is the height. Therefore, at the top of my script, I have:

# change these lines to match your normal screen geometry (note: we assume this normally takes up your whole screen)
NormalWidth=1366
NormalHeight=768

If your display is 1280 x 720, then you would change that to:

NormalWidth=1280
NormalHeight=720
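If you’d rather find these numbers from the terminal, here is a minimal sketch (assuming xrandr, which ships with Ubuntu); the current mode is the one marked with an asterisk:

xrandr -q | awk '/\*/ { print $1 }'

That prints something like 1366x768: the first number is your NormalWidth and the second is your NormalHeight.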

Once you have the correct values in there, save the file; you won’t need to change it again. You only have to do all of the above once.
You’ll want to run this script when you plug into a projector (after you set the resolution you’re going to use) and then again when you unplug from it. Do this in the terminal by typing:

source stylus_script.sh

Do it in the same terminal both times. (Further optional commentary about that and other issues is at the bottom of the post.) When you run it the second time, after unplugging the projector, it should put your stylus settings back to normal. Your settings should also reset to normal if you log out and log back in again. If you can’t run this twice from the same terminal, or if that doesn’t work, then you will need to log out and back in.
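Because the script remembers its state in environment variables, you can check what it thinks it has done; a small sketch:

echo $ALREADYRECALIBRATED

If it prints 1, the adjusted stylus settings are in effect; if it prints 0 or nothing, you’re at your defaults.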

Script

#!/bin/bash

# check the screen geometry

# change these lines to match your normal screen geometry
# (note: we assume this normally takes up your whole screen)
NormalWidth=1366
NormalHeight=768

# the script

NormalRatio=`wcalc -q ${NormalWidth}/${NormalHeight}`

# parse the current virtual screen size out of xrandr
LINE=`xrandr -q | grep Screen`
WIDTH=`echo ${LINE} | awk '{ print $8 }'`
HEIGHT=`echo ${LINE} | awk '{ print $10 }' | awk -F"," '{ print $1 }'`
RATIO=`wcalc -q ${WIDTH}/${HEIGHT}`

if [[ ${NormalRatio} != ${RATIO} ]] # screen is not in normal configuration
  then
    if [[ (! ${ALREADYRECALIBRATED}) || (${ALREADYRECALIBRATED} == 0) ]] # we haven't already adjusted
      then
        # change the device name if yours differs; find it with: xsetwacom --list devices
        LINE=`xsetwacom get "Wacom ISDv4 E6 Pen stylus" area`

        # save the default tablet area in environment variables so we can restore it later
        export TabTx=`echo ${LINE} | awk '{ print $1 }'`
        export TabTy=`echo ${LINE} | awk '{ print $2 }'`
        export TabBx=`echo ${LINE} | awk '{ print $3 }'`
        export TabBy=`echo ${LINE} | awk '{ print $4 }'`

        OldTabHeight=$((${TabBy} - ${TabTy}))
        OldTabWidth=$((${TabBx} - ${TabTx}))

        if [[ ${NormalRatio} > ${RATIO} ]] # width has shrunk
          then
            # use old height values
            TYOFFSET=${TabTy}
            BYOFFSET=${TabBy}

            # calculate new width values
            # width = ratio * height
            NewTabWidth=`wcalc -q -P0 ${RATIO}*${OldTabHeight}`
            TabDiff=$(( (${OldTabWidth} - ${NewTabWidth}) / 2 ))

            TXOFFSET=$((${TabTx} + ${TabDiff}))
            BXOFFSET=$((${TabBx} - ${TabDiff}))

          else # height has shrunk
            # use old width values
            TXOFFSET=${TabTx}
            BXOFFSET=${TabBx}

            # calculate new height values
            # height = width / ratio
            NewTabHeight=`wcalc -q -P0 ${OldTabWidth}/${RATIO}`
            TabDiff=$(( (${OldTabHeight} - ${NewTabHeight}) / 2 ))

            TYOFFSET=$((${TabTy} + ${TabDiff}))
            BYOFFSET=$((${TabBy} - ${TabDiff}))
          fi

        #echo ${TXOFFSET} ${TYOFFSET} ${BXOFFSET} ${BYOFFSET}
        xsetwacom set "Wacom ISDv4 E6 Pen stylus" Area ${TXOFFSET} ${TYOFFSET} ${BXOFFSET} ${BYOFFSET}
        xsetwacom set "Wacom ISDv4 E6 Pen eraser" Area ${TXOFFSET} ${TYOFFSET} ${BXOFFSET} ${BYOFFSET}
        xsetwacom set "Wacom ISDv4 E6 Finger touch" Area ${TXOFFSET} ${TYOFFSET} ${BXOFFSET} ${BYOFFSET}

        export ALREADYRECALIBRATED=1
      fi
    #gnome-control-center wacom
  else
    echo normal resolution
    # check if we've done some past calibration
    if [[ ${ALREADYRECALIBRATED} && ${ALREADYRECALIBRATED} == 1 ]]
      then
        # restore the saved defaults for all three devices
        xsetwacom set "Wacom ISDv4 E6 Pen stylus" Area ${TabTx} ${TabTy} ${TabBx} ${TabBy}
        xsetwacom set "Wacom ISDv4 E6 Pen eraser" Area ${TabTx} ${TabTy} ${TabBx} ${TabBy}
        xsetwacom set "Wacom ISDv4 E6 Finger touch" Area ${TabTx} ${TabTy} ${TabBx} ${TabBy}
        echo previous calibration restored
        export ALREADYRECALIBRATED=0
      fi
fi

Commentary

If you run this some other way than typing ‘source stylus_script.sh’ in the terminal when plugging in a projector, and then running it the same way again in the same terminal when unplugging the projector, it will not be able to restore your normal settings. You can try running the calibration tool to fix your settings, or else log out and log back in again. The reason it needs to be run twice from the same terminal is that it stores your default settings in environment variables. There are undoubtedly better ways of doing this, so please leave a comment if you’ve got code that does it better.
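For example, here is a minimal sketch of one alternative, saving the defaults to a file instead of environment variables so that the restore step would work from any terminal (the file path is an arbitrary choice, and the device name is my hardware’s):

# save the default area once, the first time we run
STATE=/tmp/wacom_default_area
if [ ! -f "${STATE}" ]
  then
    xsetwacom get "Wacom ISDv4 E6 Pen stylus" area > "${STATE}"
fi
# later, to restore the saved defaults:
xsetwacom set "Wacom ISDv4 E6 Pen stylus" Area `cat "${STATE}"`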
If you normally run your screen display in a different ratio than the physical device, I’m not 100% sure the maths for this script will be correct for you. Let me know in the comments if you need help in this case.
The tablet height as seen by the wacom device is very different from the height in pixels. This makes sense, because the pixel size can change dramatically while the physical size of the screen in use stays the same. I assume that any change in display size will be letter-boxed either on the top and bottom or on the sides, but will always be centred and will never have blank space on both the sides and the top and bottom at the same time. If your setup doesn’t follow these assumptions, this script will require some modifications to work for you.

A useful script

The best way to remember to do something when you’re going to run some important program is to put it in the program itself. Or at the very least, put it in a script that you use to invoke the program.
I have a few things I need to remember for the performance I’m preparing for. One has to do with a projector. I’m using a stylus to draw cloud shapes on my screen, and one way I can do this is to mirror my screen to a projector so the audience can see my GUI. However, doing this usually changes the geometry of my laptop screen, so that instead of extending all the way to the edges, there are empty black bars on either side of the used portion of my display. That’s fine, except the stylus doesn’t know and doesn’t adjust. So to reach the far right edge of the drawn portion of the screen, I need to touch the far right edge of the physical screen, which puts over a centimetre between the stylus tip and the arrow pointer. Suboptimal!
Ideally, I’d like to have any change in screen geometry trigger a script that changes the settings for the stylus (and I have ideas about how that may or may not work, using upstart, xrandr and xsetwacom), but in the absence of that, I just want to launch a manual calibration program. If I launch the settings panel, there’s a button on it that launches one. So the top part of my script checks if the geometry is different from normal and launches settings if it is.
The next things I need to remember are audio related. I need to kill pulseaudio. If my soundcard (a Fast Track Ultra) is attached, I need to change the amplitude settings internally so it doesn’t send the input straight to the output. Then I need to start jack using it. Or if it’s not attached, I need to start jack using a default device. Then, because it’s useful, I should start up Jack Control, so I can do some routing, should I need it. (Note: in 12.04 if you start qjackctl after starting jackd, it doesn’t work properly. This is fixed by 13.04.) Finally, I should see if SuperCollider is already running and if not, I should start it.
That’s a bit too much to remember for a performance, so I wrote a script. The one thing I need to remember with this script is that if I want to kill jack, it won’t die from Jack Control, so I’ll need to do a kill -9 from the prompt. Hopefully, this will not be an issue on stage.
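For reference, that kill is just the following one-liner, assuming the process is named jackd:

kill -9 `pgrep jackd`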
This is my script:

#!/bin/bash

# first check the screen

LINE=`xrandr -q | grep Screen`
WIDTH=`echo ${LINE} | awk '{ print $8 }'`
HEIGHT=`echo ${LINE} | awk '{ print $10 }' | awk -F"," '{ print $1 }'`

# 1366x768 is my laptop's normal resolution; change these to match yours
if [[ ${WIDTH} != 1366 || ${HEIGHT} != 768 ]]
  then
    gnome-control-center wacom
  else
    echo normal resolution
fi

# now set up the audio

pulseaudio --kill

# is the ultra attached?
if aplay -l | grep -qi ultra
  then
    echo ultra

    # adjust amplitude: each digital input goes only to its own output,
    # and no analogue input is routed straight to an output
    for i in $(seq 8); do
        for j in $(seq 8); do
            if [ "$i" != "$j" ]; then
                amixer -c Ultra set "DIn$i - Out$j" 0% > /dev/null
            else
                amixer -c Ultra set "DIn$i - Out$j" 100% > /dev/null
            fi
            amixer -c Ultra set "AIn$i - Out$j" 0% > /dev/null
        done
    done

    #for i in $(seq 4); do
    # amixer -c Ultra set "Effects return $i" 0% > /dev/null
    #done

    # start jack using the Ultra
    jackd -d alsa -d hw:Ultra &
  else
    # start jack with default hardware
    jackd -d alsa -d hw:0 &
fi

sleep 2

# jack control
qjackctl &

sleep 1

# is supercollider running?
if ps aux | grep -vi grep | grep -q scide
  then
    echo already running
  else
    scide test.scd &
fi

Compiling SuperCollider on Ubuntu Studio 13.04 beta 2 (and otherwise setting up audio stuff)

The list of required libraries has changed somewhat between versions. This is what I did:

sudo apt-get install git cmake libsndfile1-dev libfftw3-dev  build-essential  libqt4-dev libqtwebkit-dev libasound2-dev libavahi-client-dev libicu-dev libreadline6-dev libxt-dev pkg-config subversion libcwiid1 libjack-jackd2-dev emacs gnome-alsamixer  libbluetooth-dev libcwiid-dev netatalk

git clone --recursive https://github.com/supercollider/supercollider.git

cd supercollider

mkdir build

cd build

cmake ..

make

If all that worked, then you should install it:

sudo make install

scide

If it starts, you’re all good!
Users may note that this version of Ubuntu Studio can compile in Supernova support, so that’s very exciting.
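If you want to try that, here is a minimal sketch of how I understand the build option works (SUPERNOVA is the relevant cmake flag; run this from the build directory):

cmake -DSUPERNOVA=ON ..
make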
I’ve gone to a beta version of Ubuntu Studio because Jack was giving me a bit of trouble on my previous install, so we’ll see if this sorts it out.
Note in the apt-get part that emacs is extremely optional, and netatalk allows me to mount Apple Mac file systems that are shared via AppleTalk, something I need to do with my laptop ensemble, but which not everyone will need. Gnome-alsamixer is also optional and is a gnome app. It’s a GUI mixer application which lets you set levels on your sound card. Mine was sending the ins straight to the outs, which is not what I wanted, so I could fix it this way or by writing and running a script. Being lazy, I thought the GUI would be a bit easier. There’s also a command line application called alsamixer, if you like that retro ’80s computing feeling.
It can also be handy to sometimes kill pulseaudio without it respawning over and over. Fortunately, it’s possible to turn the respawning off:

sudo gedit /etc/pulse/client.conf

Add in these two lines:

autospawn = no
daemon-binary = /bin/true

I still want pulse to start by default when I log in, though, so I’ve set it to start automatically. I found the application called Startup Applications and clicked Add. For the name, I put pulseaudio. For the command, I put:

pulseaudio --start

Then I clicked the add button on that dialog screen and it’s added. When I want to kill pulseaudio, I will open a terminal and type:

pulseaudio --kill

and when I want it back again, I’ll type:

pulseaudio --start

(I have not yet had a killing and a restarting throw any of my applications into silent confusion, but I’m sure it will happen at some point.)
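If you find yourself toggling a lot, here is a small optional sketch of a shell function you could put in your .bashrc (pa_toggle is just a name I made up; pulseaudio --check is a real flag that reports whether the daemon is running):

# kill pulseaudio if it's running, start it if it isn't
pa_toggle() {
    if pulseaudio --check
      then
        pulseaudio --kill
      else
        pulseaudio --start
    fi
}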

Live code, code based interfaces and live patching – theory and practice

Some theory

Not every use of code interaction on stage is an instance of live coding. When I first started working with SuperCollider a decade ago, I didn’t create GUIs. I started and stopped code on stage. I had comments in the code giving me instructions on how to do this. One piece instructed me to count to ten between evaluating code blocks. Another told me to take a deep breath.
Part of this was because I hadn’t yet learned how to invoke callback methods or use tasks. Some of it was to create a musical timing – a deep breath is not the same as a two second pause. This was undoubtedly using code interactions to make music. But in no sense were these programs examples of live coding. Once a block was started, there was no further intervention possible aside from halting execution or turning down a fader to slowly mute the output. These pieces were live realisations of generative music, which means they had virtually no interactivity once started, whether by code or by other means.
There is no bright line separating code based interfaces from live coding, but instead a continuum between pieces like the ones I used to write and blank slate live coding. The more interactive the code, the farther along this continuum a piece falls. Levels of interaction could be thought to include starting and stopping, changing parameters on the fly, and changing the logic or signal graph on the fly. Changing the logic or signal graph puts one closer to blank slate coding than just changing numbers on something while it plays.
This argument does imply a value judgement about authenticity; however, that is not my purpose. Different types of code interaction are better suited to different circumstances. A piece that is more live coded isn’t necessarily sonically or objectively better. However, this kind of value judgement is useful in applying the metaphor of live coding to other interactions.
I have been pondering for a while whether or not live synthesiser patching is an analogue form of live coding, a question first posed by Julian Rohrhuber (2011) on the live coding email list. On the one hand, the kind of analogue modules used for modular synthesisers were originally developed for analogue computers. The synthesiser itself is a general purpose tool for sound, although obviously limited to whatever modules are available. (Thus putting it some place between a chainsaw and an idea (TopLap 2010).) Both computer programs and live patches can quickly grow in complexity to where the performer can no longer comprehend exactly what’s happening (Collins 2007).
On the other hand, there is no code. However, I’m not sure how much that matters. A Pd or Max patch created on the fly that creates a signal graph is clearly an example of live coding. If for some reason the patching language had hard limits on what unit generators were available and in what quantity, this would still count. Therefore the transition from virtual to physical seems small. Instead of focussing on the code itself, then, let’s look at the metaphor.
Knob twirling is an example of changing numbers on the fly. Modular synthesisers do contain logic in the forms of gates and switches. This logic and the musical signal routing can be changed on the fly via re-patching. Therefore, a live patching performance that contained all of these elements would be an example of analogue live coding.

Gig Report

I very recently did some live patching at the Live Code Festival in Karlsruhe. Alas, this reasoning about what is or is not live coding did not become clear to me until I was reflecting on my performance afterwards. This was the first time I patched with goals beyond making nice sounds, which meant I was pushing against places the sounds wanted to settle, and I realised on stage that I was inadequately prepared. Both live coding and live patching share the problem of how to prepare for a show, something they have in common with other forms of improvised or partially improvised music.
I had a conversation with Scott Wilson about how to practise improvised music that has an agenda. I should have spent the few days before the show building patches that use gates to control timbral or graph changes. I should have also practised making graph changes in the middle of playing. Instead, I spent the days ahead wrestling with problems with Jack on Linux. I use SuperCollider to manage some panning and recording for me and was having tech problems with it. Mixing analogue and digital systems in this way exposes one to the greater inherent instability of computers. I could make my own stereo autopanner with some envelope followers, a comparator and a panner, so I’ll be looking into putting something together out of guitar pedals or seeing if there is an off-the-shelf solution available.
For this performance, I decided to colour code my cables, following the colour conventions of Sonology in The Hague: black for audio, blue for control voltages and red for triggers. This was so audience members with synthesiser knowledge might be able to at least partly decode my patches. However, it caused a few problems. Normally, I play with cables around my neck, and I’d never before done anything live with cable colours. This time, every time I looked down, I only saw red cables but never actually wanted to use them. For the audience, I tried to make up for the difficulty of seeing a distant synth by using a webcam to project live images of it, but the colour was lost in the low resolution of the webcam. People who tried to just look at the synth directly would have had trouble perceiving black cables on a black synth. If I do colour coding again, I need to swap the colours I use and not wear them around my neck. A better webcam might also help.
Aside from the low resolution, the webcam part was successful. I also set the program up so that if I pressed certain buttons on my MIDI device, slides of modules would be displayed. So when I patched an oscillator, I pushed the oscillator button on the MIDI controller and a labelled picture of an oscillator appeared in the upper left corner. I didn’t always remember to push the buttons, but the audience appreciated the slides, and I may extend this in future with more of my modules (I forgot to do the ring mod module) and also extend it to more combinations, so I would have a slide of two oscillators showing FM and one of three showing chaos.
Sonically, the patching seems to have been a success, although it was not fun to do, because I had an agenda I was trying to push towards but had not rehearsed adequately. I want to spend a lot of time now working this out, getting it right and doing another show, but that was it. My next presentation will be all SuperCollider and I need to work on that. I am thinking a lot, though, about what I might do for the Other Minds festival next spring. I wonder if live patching would be adequately ambitious for such a high profile gig…

Citations

Collins, Nick. “Live Coding Practice,” 2007. The Proceedings of NIME 2007. [e-journal] Available at: <http://www.nime.org/2007/proceedings.php> [Accessed 3 March 2012].
Rohrhuber, Julian. “[livecode] analogue live coding?” 19 February 2011. [Email to livecode list]. Available at: <http://lists.lurk.org/mailman/private/livecode/2011-February/001176.html> [Accessed 1 March 2012].
TopLap. “ManifestoDraft.” 14 November 2010. TopLap. [online] Available at: <http://toplap.org/index.php/ManifestoDraft> [Accessed 12 September 2011].

Republic

It’s time for everybody’s favourite collaborative real time network live coding tool for SuperCollider.
Invented by PowerBooks UnPlugged – granular synthesis playing across a bunch of unplugged laptops.
Then some of them started Republic111, which is named for the room number of the workshop where they taught.
Code reading is interesting in network music, partly because of stealing, but also to understand somebody else’s code quickly, or to actively understand it by changing it. Live coding is a public or collective thinking action.
If you evaluate code, it shows up in a history file and gets sent to everybody else in the Republic. You can stop the sound of everybody on the network. All the SynthDefs are saved. People play ‘really equally’ on everybody’s computer. Users don’t feel obligated to act, but rather to respond. Participants spend most of their time listening.
Republic is mainly one big class, which is a weakness; it should be broken up into smaller classes that can be used separately. Scott Wilson is working on a newer version, which is on GitHub. Look up ‘The Way Things May Go’ on Vimeo.
Graham and Jonas have done a system which allows you to see a map of who is emitting what sound and you can click on it and get the Tdef that made it.
Scott is putting out a call for participation and discussion about how it should be.

Cyber-physical programming in Extempore – Andrew Sorensen

Cybernetics: a cyber-physical system is any closed-loop control system.
Cyber-physical programming is one of these, but with a programmer stuck in the loop. The programmer is above the loop and in the loop. It’s a coupling of human, machine and environment.
By changing the world, we understand the world better.
Real-time, real-time. Writing in real-time on a real-time system.
A giant radio telescope is in the works in Australia. They will have huge amounts of data to deal with in real time.

Extempore

This is a high performance computing thing for real time. It tries to analyse code to figure out how long it’s going to run. Supports embedded computing.
xtlang is a sub-language of this? Everything is hot-swappable.
It’s a member of the Lisp family, sort of. Static typing. No garbage collection. It’s fast and determinate – it will always run at the same speed every time you run it.
This is a systems language that feels like a dynamic language. You think you’re doing Lisp, but it’s all cyber-physical.

Questions

How old is this language? 2.5 years.
What did the code look like for the example of its use in the video he showed? There are online examples.
… this talk was somewhat over my head, as are many of the questions….
Is the compiler written in Extempore? Not yet. It’s in scheme right now.

Benjamin Graf – mblght

Lighting guys sit behind lighting desks and hit buttons for the duration of concerts, so the lights in shows are usually boring, despite the valuable and variable equipment involved.
Wouldn’t it be great if you could do stochastic lights with envelope controls?
SuperCollider does solid timing and has support for different methods of dispersing stuff and has flexible signal routing.
He’s got an object that holds descriptions of the capabilities of any lighting fixture – moving, colour, on, off, etc.
He uses events in the pattern system as one way of changing stuff.
He’s added light support to the Server, so you can do SinOsc control of light changes, sending to control busses. He’s also made light UGens.
He ended up live coding the lights for a festival.

Questions

What about machine listening? It would be easy to do in this system.
The code is on github.

David Ogborn: EspGrid

In Canada, laptop orchestras get tons of gigs.
Naive sync methods: do redundant packet transmission – so send the same value several times in a row. This actually increases the chance of collision, but probably one will get through. Or you can schedule further in advance and schedule larger chunks – so send a measure instead of just sending a beat.
Download it from esp.mcmasters.ca. Mac only.
Five design principles:

  • Immediacy – launch it and you’ve got stuff going right away
  • Decentralisation – everything is peer to peer
  • Neutrality – works with chuck, supercollider, whatever
  • Hybridity – they can even use different software on the same computer at the same time
  • Extensibility – it can schedule arbitrary stuff

The grid has public and private parts. EspGrid communicates with other apps via localhost OSC. Your copy of SuperCollider does not talk to the larger network; EspGrid handles all that.
The “private protocol” is not OSC. It’s going to use a binary format for transmission. Interoperability is thus based only on client software, not on the middleware.
Because the grid runs OSC to clients, it can run on a neighbour’s computer and send the OSC messages to Linux users or other unsupported OSes.
The program is largely meant to be run in the background. You can turn a beat on or off, and this is shared across the network. You can chat. You can share clipboards. Also, Chuck will dump stuff directly.
Arbitrary OSC messages will be echoed out, with a time stamp. You can schedule them for the future.
You can publish papers on this stuff or use it to test shit for papers. Like swap sync methods and test which works best.
Reference Beacon does triangulation to figure out latencies.
He wants to add WAN stuff, but not change the UI, so the users won’t notice.

Questions

Have they considered client/server topology for time sync? No. A server is a point of failure.
Security implications? He has not considered the possibility of sending naughty messages or how to stop them.
Licence? Some open source one… maybe GPL2. It’s on Google Code.