Music Technology & Special Needs: Part 2

Assist & Adapt
By Pete Thomas
Published January 2013

We explore the innovative hardware and software that is helping disabled musicians take their work to the stage.

Freelance adaptive music technology specialist Doug Briggs (above left) and his team.

As we saw in Part 1 last month (/sos/dec12/articles/assistive-tech-1.htm), technology designed for musicians with special needs falls into two categories, albeit with some overlap. On the one hand, we might be dealing with the very basic early stages of learning or therapy; on the other, we could be dealing with an already expert musician, producer or engineer who just needs help accessing software parameters. The overlap arises with technology that is useful across the board, allowing someone with learning difficulties to gain experience at an entry level, yet also helping an experienced professional to do something more quickly in a more intuitive way.

Some mainstream applications, such as Apple's Logic, have the potential to be customised for use by disabled musicians or engineers who are already familiar with music technology. Accessibility can simply be a matter of the user finding the appropriate switches and sensors to suit their abilities, making them output MIDI notes or controller data, and having Logic's Environment configured to convert this to the appropriate information to control the mixer or virtual instruments. Alternatively, musicians may be able to access conventional DAW software with the aid of overlays, such as The Grid (PC) or Ke:nx (Mac), which were developed to make existing software accessible to disabled computer users. When working with this category of musician, the main objective is to allow them to be as independent as possible, and for them to be able to use industry-standard software and hardware without feeling patronised by the “big red and yellow buttons” Mark Hildred referred to last month.
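
For the technically curious, the sketch below illustrates the general principle rather than anything Logic itself requires: a switch arrives as a MIDI note, and a small program re-emits it as controller data that a DAW can 'learn' and assign to a fader or plug-in parameter. It assumes the freely available Python mido library, and the port names, note and CC numbers are placeholders to be adjusted for your own setup; in practice, Logic's Environment can perform this kind of transformation internally.

```python
# Minimal sketch (not Logic's Environment itself): turn the note sent by an
# accessibility switch into MIDI CC7 (channel volume) messages that a DAW can
# learn and map. Assumes the python-mido library with a working MIDI backend;
# port names, note and CC numbers below are placeholders for your own setup.
import mido

SWITCH_NOTE = 60      # the note number the switch interface sends (assumption)
VOLUME_CC = 7         # CC7 = channel volume
STEP = 16             # how much each press nudges the virtual fader

level = 0

with mido.open_input('Switch Interface') as inport, \
     mido.open_output('To DAW') as outport:
    for msg in inport:
        # each press of the switch bumps the fader up, wrapping around at the top
        if msg.type == 'note_on' and msg.note == SWITCH_NOTE and msg.velocity > 0:
            level = (level + STEP) % 128
            outport.send(mido.Message('control_change',
                                      control=VOLUME_CC, value=level))
```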

Doug Briggs, whom we also met last month, has been extending this idea and building switching into real instruments, enabling musicians to connect with the technology in a more creative way rather than just pressing uninspiring switches. For instance, he has adapted a bespoke cornet by making the three valves act as switches. You can see this in the video of a Rawchestra rehearsal at http://mediamusicforum.com/fundraising.html.

Mark Hildred of Apollo Creative.

In learning environments, it is important to know and understand the student. Rather than looking at someone's disabilities, we look at his or her abilities, to find the best solution for what are often unique situations. It may be appropriate to use the “big red and yellow buttons”, but it could also be totally counter-productive, depending on the individual. Likewise, it can be useful to have simple, uncluttered, “intuitive” software that doesn't need a lot of mouse movements or keystrokes to operate — but at the same time it is often more useful to have software that is complex enough to be highly customisable.

In the early stages of learning, or where a musician has profound disabilities or learning problems, it can be difficult or impossible for the musician to perform every aspect of a piece of music — the notes and any additional expressive features — simultaneously. Performers in this situation often benefit from technology we are already used to in many forms of mainstream performance, where, for instance, a keyboard player or drummer would be triggering samples of previous performances or programmed material. Mark Hildred explains: “The approach which has evolved over the years is that you 'pre-compose' your performance, or some elements of it. You might program all of the notes in the order they need to be, but the performance is adding another element such as tempo or expression, or making decisions about which of two possible loops to trigger at a certain time, so you do get a real-time performance aspect.”
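
To make that idea concrete, here is a rough sketch of one way a single switch can 'conduct' the tempo: the player taps on the beat, the program measures the gap between taps and sends MIDI clock at the corresponding rate, so a sequencer follows the performer rather than a fixed grid. This is an illustration only, not code from any of the systems described here; it assumes the Python mido library, and the port names and note number are placeholders.

```python
# Rough sketch: one switch (arriving as a MIDI note) taps the beat, and the
# program sends MIDI clock at the tapped tempo so a sequencer can follow the
# performer. Assumes python-mido; port names and note number are placeholders.
import time
import mido

TAP_NOTE = 60                    # note number sent by the switch (assumption)
interval = 60.0 / 120 / 24       # start at 120bpm; MIDI clock is 24 pulses per beat
last_tap = None
next_pulse = time.time()

with mido.open_input('Switch In') as inport, mido.open_output('To Sequencer') as outport:
    while True:
        msg = inport.poll()      # non-blocking check for a new tap
        if msg and msg.type == 'note_on' and msg.note == TAP_NOTE and msg.velocity > 0:
            now = time.time()
            if last_tap is not None and 0.2 < now - last_tap < 2.0:
                interval = (now - last_tap) / 24   # re-derive the clock rate
            last_tap = now
        if time.time() >= next_pulse:
            outport.send(mido.Message('clock'))
            next_pulse += interval
        time.sleep(0.001)
```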

Rawchestra

Adaptive and assistive technology is helping Melland High School pupil Liam Steenson realise his ambition of playing rock guitar.

Having learned about the ways in which technology can be developed and adapted for use by disabled musicians, I was keen to see and hear it used in performance. The next part of my journey took me to Melland High School in Manchester where, along with Musical Director Joe Botham, Doug Briggs was rehearsing Rawchestra for their summer show, a production of The Wall. Melland is a special needs school situated in Gorton Education Village alongside Cedar Mount High School. The proximity of the special needs school to a mainstream school means that the disabled kids are not stuck in a special needs ghetto, but can participate in activities along with the non-disabled children. First of all I spoke to Director of Specialism Julie Barnett, and asked her to what extent she has a strategy as to who should do what in her music classes.

"Sometimes we just let the students 'have a go', and we find some very surprising things, like seeing Mohammed's face suddenly light up when he strums a guitar. We know them all so well, we can tell when they are feeling joy. It might just be a flutter of the eyelid, but we know.”

Putting Talent On Display

One problem that can arise with this sort of performance is getting the audience to appreciate just how much skill and hard work has gone into it. It's easy for the audience to think that they're listening to pre-recorded backing tracks, rather than to a rehearsed and skilled live performance. “Often they don't have a clue about the magnitude of the achievement,” agrees Doug Briggs. “We might have it filmed, so when somebody does a solo, you have the live camera zoomed in on the switch so it becomes obvious what's going on. It's the equivalent of having someone in a big band stand up when they're playing their solo. We have to find a way to replicate that for people who can't move, let alone stand up.”

Mark Hildred: "Or we might have a setup in the foyer that the audience can play around with as they come in, so they can see that something is quite difficult, and then there's more appreciation of the skill and effort that went into the performance.”

I was struck by how true this can be when watching the Paralympic closing ceremony: millions of people worldwide would not have been able to tell what the musicians playing electronic instruments were doing, and how they were contributing to the performance.

The Paralympics also bring home how few opportunities there are for people with disabilities or special needs in the field of music and entertainment. Take Melland High School pupil Liam Steenson, for example. Like many young kids, he has one big dream: to be a rock guitarist. In most respects he has what it takes: enthusiasm, dedication and a willingness to learn. The drawback is that he has cerebral palsy, hydrocephalus and epilepsy, and is registered blind. He finds it profoundly difficult to move any part of his body, including his fingers. A hardware sensor allows Liam to play through small movements of his elbow, either when he feels the time is right in the music, or when cued. The software triggers guitar licks that fit in with the music the rest of the orchestra are playing.

Technophonia

The Technophonia ensemble brings together skilled users of adaptive and assistive music technology with conventional musicians.

My next port of call was the South Bank, for a performance by Technophonia. The ensemble was playing Microscopic Dances, a specially commissioned piece by composer Oliver Searle of the Royal Conservatoire of Scotland, for which Drake Music Scotland had received funding from PRS For Music. As at Melland, this performance showed how musicians with disabilities can perform on an equal basis with their peers. I managed to grab a moment with Rick Bamford from Drake Music, another practitioner with a great deal of experience of adapting technology for special needs performances. “There have been many advances recently, but there are still some areas where we struggle,” he admits. “For instance, there are plenty of switches available, as we know, but I always felt it would be extremely useful to have a bite switch. However, there seemed to be nothing readily available for special needs use; presumably due to health and safety, they'd be worried somebody would swallow a switch you put inside your mouth. Then I discovered a bite switch used by skydiving cameramen, who obviously couldn't be clicking away on a conventional camera while using their arms and legs to stabilise themselves or steer through the sky.”

Stephanie Forrest plays the Skoog in a Technophonia performance, following the 'skore' on the music stand at right. Photo: Stuart Barrett

In this performance, there were three disabled musicians in the ensemble, showcasing three different technologies and approaches to performance. Stephanie Forrest was playing a Skoog, which was interfaced via the Skoog software to Logic, triggering a clarinet sound in its EXS24 sampler. She was reading a 'skore', which consisted of coloured bars corresponding to the colours of the switches on the Skoog. Anthony was playing a Soundbeam, moving his hand along the beam to improvise on a pre-programmed scale which fitted the appropriate section of the composition. Chris Jacquin was using Brainfingers, triggered by his jaw movements, to control the tempo of a sequenced section in Notion 4, a notation program designed from the ground up to be used as a live score playback system. Rather than purely triggering a sequence, Chris had written his own conductor part. His jaw movements did not simply tap the tempo in quarter notes: instead he had programmed rhythms which allowed him much greater control over the expression and feel of the sequence in real time.

On this occasion, the problem of the audience potentially not realising the enormous achievement of the performers was overcome by the structure of the programme: the piece was actually played twice. After the first recital, composer/conductor Oliver Searle gave a short explanation of what the piece was about — with demonstrations of Soundbeam, Skoog and Brainfingers — and what was going on behind the scenes. This was followed by a second performance, so that rather than watching Anthony apparently waving his arm around or Chris seemingly chewing gum, the audience could appreciate that they were seeing creative musicians playing instruments. This change in perception was reflected in the standing ovation following the second performance.

Further Assistance

Chris Jacquin demonstrates the capabilities of Brainfingers, a system through which minute movements of facial muscles can control anything from music technology to fighter aircraft. Photo: David McNiven, Drake Music Scotland

Talking to both Drake Music Scotland and Gorton, I discovered that funding seems to be in decline. Prime Minister David Cameron proudly visited Gorton Education Village in 2011, yet government funding was discontinued a few months later. But there are still ways in which the music technology sector could move things forward even without government funding. For instance, Doug Briggs has highlighted the difficulty of rhythmic playing with single-variable controllers or Soundbeam, and this could be a rich area of research and experimentation, while Mark Hildred says that Apollo are currently looking at some new wireless sensors and plug-ins for iOS devices. Tim Anderson points out that although applications such as E-Scape can do much or all of what you might want from a mainstream sequencer, there is still a psychological barrier: a competent disabled musician would most likely prefer to be using industry-standard technology.

So what can Sound On Sound readers do? Volunteer work is always appreciated in schools and hospitals. Very often, these establishments are donated hardware and software which they don't have the technical knowledge to use adequately. However, volunteering can be a bit of a double-edged sword: an oversupply of voluntary work can mean fewer actual jobs for employed or freelance staff. A good strategy might be to volunteer an evaluation of what needs to be done, and then offer to provide help on a freelance basis.

If you are involved in Music Technology courses, especially where research is going on, think how this research can be applied or adapted to special-needs situations. If you think it could be useful, contact the AMTRG (Adaptive Music Technology Research Group) at Huddersfield University. They are very keen to get ideas and prototypes out of the university setting and to the people who might be able to use them.

Web Links

Drake Music Scotland

www.drakemusicscotland.org

Tim Anderson's articles

https://sites.google.com/site/inclusivemusicescape/home/articles

Rawchestra video

www.youtube.com/watch?v=nF6HQuiWi9E

Pete Thomas' web site, where sales of samples for Logic and GarageBand (Sample Aid) have raised over £30,000 for disabled musicians to date:

http://mediamusicforum.com/fundraising.html

Beam Me Up

The hardware that gets used in assistive technology contexts ranges from very simple to very sophisticated. Arduino boards, as mentioned last month, are especially useful for anyone on a tight budget who doesn't mind getting out a soldering iron. These are very adaptable boards, available from electronics outlets such as Maplin, which can be used to make any switch interface with MIDI. MIDICreator is a hardware box widely used to integrate any AAC switch into a MIDI system; it is also available with various MIDICreator sensors and 'theme cards': pre-programmed notes, sounds, chords, scales and so on.
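
If you do go down the Arduino route, the board itself can be as simple as a sketch that prints a character whenever a switch closes, with a few lines on the computer turning that into MIDI. The example below is only an illustration of that division of labour: it assumes the pyserial and mido Python libraries, and the 'one letter per switch' protocol, serial port path and MIDI port name are invented for the purpose, so you would need to change them for your own hardware.

```python
# Host-side sketch of the Arduino approach: the board just prints one character
# per switch closure over USB serial, and this script turns each character into
# a MIDI note-on/off pair. Assumes pyserial and python-mido; the device path,
# baud rate and the 'one letter per switch' protocol are illustrative only.
import time
import serial
import mido

NOTE_FOR_CHAR = {b'a': 60, b'b': 64, b'c': 67}   # switch character -> note (assumption)

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=0.1)
with mido.open_output('To Synth') as outport:
    while True:
        ch = ser.read(1)
        if ch in NOTE_FOR_CHAR:
            note = NOTE_FOR_CHAR[ch]
            outport.send(mido.Message('note_on', note=note, velocity=100))
            time.sleep(0.2)                       # short fixed note length
            outport.send(mido.Message('note_off', note=note))
```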

EnsemblePress is an example of the type of specialist sensor available. It is based on the keyboard aftertouch technology also used in some drum equipment such as the Drumkat. Changes in pad pressure over a tiny physical range allow a whole series of notes or expression messages to be triggered. This means that somebody who may not be able to move their hand or finger more than a few millimetres can still access quite a considerable range of musical data.
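
The sketch below shows the 'tiny movement, many notes' principle in general terms. It is not code for the EnsemblePress itself: it assumes the pad's pressure arrives as MIDI channel aftertouch and uses the Python mido library, with placeholder port names and an arbitrary note series.

```python
# Sketch of the 'tiny movement, many notes' idea: incoming pad pressure (assumed
# here to arrive as MIDI channel aftertouch, value 0-127) is divided into bands,
# and a new note sounds whenever the pressure crosses into a different band.
# Assumes python-mido; the port names and note series are placeholders.
import mido

NOTE_SERIES = [48, 50, 52, 53, 55, 57, 59, 60]   # one octave of C major
current = None                                   # index of the note now sounding

with mido.open_input('Pressure Pad') as inport, mido.open_output('To Synth') as outport:
    for msg in inport:
        if msg.type == 'aftertouch':
            band = min(msg.value * len(NOTE_SERIES) // 128, len(NOTE_SERIES) - 1)
            if band != current:
                if current is not None:
                    outport.send(mido.Message('note_off', note=NOTE_SERIES[current]))
                outport.send(mido.Message('note_on', note=NOTE_SERIES[band], velocity=100))
                current = band
```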

Soundbeam is the granddaddy of adapted switch/sensor technology. It is 'adapted' in that it was originally invented for use with dance, but it is ideal for assistive music technology (AMT), and was made for many years by Electronic Music Studios (EMS), builders of the legendary VCS3 and Synthi A synthesizers. Soundbeam is an ultrasonic transducer which looks rather like a microphone: it sends out pulses of ultrasound, then 'listens' for any interruptions in the returning sound and maps the distance of those interruptions to MIDI. Within the hardware, the MIDI can address a series of sounds and different scales, which can be programmed to suit each piece of music. This is an ideal tool for somebody to improvise on a scale using whatever parts of the body they can, or by moving a wheelchair along or in and out of the beam. Other switches can also be plugged in to the Soundbeam's switchbox hardware for more versatility.
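
The arithmetic behind this is straightforward, as the short sketch below shows. This is an illustration of the principle only, not Soundbeam's own firmware: the echo-timing, beam length and scale are arbitrary example values.

```python
# Illustration of the principle only (not Soundbeam's own firmware): the sensor
# times how long an ultrasound pulse takes to bounce back, converts that time to
# a distance, and picks a note from whichever scale has been programmed for the
# current piece. The scale and beam length here are arbitrary examples.
SPEED_OF_SOUND = 343.0            # metres per second, at room temperature
BEAM_LENGTH = 3.0                 # usable beam range in metres (example value)
C_MINOR_PENT = [48, 51, 53, 55, 58, 60, 63, 65]   # one programmed scale

def note_for_echo(round_trip_seconds):
    """Return the MIDI note for an echo time, or None if nothing is in the beam."""
    distance = SPEED_OF_SOUND * round_trip_seconds / 2.0   # sound travels there and back
    if distance > BEAM_LENGTH:
        return None                                        # beam uninterrupted: silence
    zone = int(distance / BEAM_LENGTH * len(C_MINOR_PENT))
    return C_MINOR_PENT[min(zone, len(C_MINOR_PENT) - 1)]

# e.g. a hand one metre away gives a round trip of roughly 5.8ms:
print(note_for_echo(2 * 1.0 / SPEED_OF_SOUND))   # -> 53
```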

Brainfingers is another great example of adaptive technology: a special headband fitted with sensors which respond to minute facial movements and convert them to computer key commands and/or MIDI. It is rumoured that the originator, Dr Andrew Junker, conceived the idea while wondering whether he could control his boat while sitting at the bar. It was subsequently used by the US military to give pilots more ways to control aircraft. Current development of Brainfingers is looking at picking up alpha and beta brainwaves, so that musical events could be triggered just by thinking. Obviously, quite sophisticated gating technology is required to filter out unwanted triggering due to slight accidental movements.
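
A much-simplified version of that gating idea is sketched below: a trigger only fires if the control signal rises above a threshold and stays there for a minimum hold time, and it must fall back below a lower level before it can fire again. This is not Brainfingers' own algorithm; it assumes the signal arrives as a MIDI CC and uses the Python mido library, with placeholder thresholds, timings and port names.

```python
# Simplified gating sketch: a continuous control signal (assumed to arrive as a
# MIDI CC) only counts as a deliberate trigger if it rises above a threshold and
# stays there for a minimum hold time, and it must drop back below a lower level
# before it can fire again. Assumes python-mido; all numbers are placeholders.
import time
import mido

CC_NUMBER = 1                  # controller carrying the facial-muscle signal (assumption)
ON_LEVEL, OFF_LEVEL = 90, 40   # hysteresis: fire above 90, re-arm below 40
HOLD_TIME = 0.15               # seconds the level must be sustained to count

armed, above_since = True, None

with mido.open_input('Sensor In') as inport, mido.open_output('To Synth') as outport:
    for msg in inport:
        if msg.type != 'control_change' or msg.control != CC_NUMBER:
            continue
        if msg.value >= ON_LEVEL:
            above_since = above_since or time.time()
            if armed and time.time() - above_since >= HOLD_TIME:
                outport.send(mido.Message('note_on', note=60, velocity=100))
                armed = False                      # fired: wait for a clear release
        elif msg.value <= OFF_LEVEL:
            above_since = None
            if not armed:
                outport.send(mido.Message('note_off', note=60))
                armed = True                       # re-armed for the next gesture
```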

Apollo Ensemble is an integrated hardware and software system, which can make use of third-party switches as well as its own controllers, such as the Dice.

Apollo Ensemble is a completely integrated AMT system developed by Mark Hildred at Apollo Creative. It comprises a wide range of switches and sensors which you can choose to suit people's abilities, plus a wireless PC interface and software which interprets the messages from those switches, converting them into sounds (samples, loops and so on) and MIDI, which can be used to trigger other software on the PC, such as Ableton Live or Cubase. The software has a drag-and-drop interface with graphical representations of the various stages, which can be connected by the good old 'virtual cable' we are all used to. More complex setups can be linked to lighting, images and other media, making Apollo Ensemble not just a musical instrument, but an integrated staging production tool. As Mark points out, wireless systems can be extremely important in this area, as cables trailing over a stage are particularly hazardous with wheelchairs zooming around. Any AAC switch can be used with the wireless interface, and Apollo have also developed a plug-in to allow the system to use the data from games controllers such as the standard Xbox wireless controller for PC.
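
Reading a standard games controller really does take very little code, as the sketch below shows. It is not Apollo's plug-in, just an illustration of the underlying idea, and it assumes the pygame and mido Python libraries; the note mapping and output port name are placeholders for whatever you want to control.

```python
# Not Apollo's plug-in, just an illustration of the underlying idea: read a
# standard games controller and turn its buttons into MIDI notes. Assumes the
# pygame and python-mido libraries; note mapping and port name are placeholders.
import pygame
import mido

NOTE_FOR_BUTTON = {0: 60, 1: 62, 2: 64, 3: 65}    # first four buttons -> notes (assumption)

pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)                 # first controller found
pad.init()

with mido.open_output('To Synth') as outport:
    while True:
        for event in pygame.event.get():
            if event.type == pygame.JOYBUTTONDOWN and event.button in NOTE_FOR_BUTTON:
                outport.send(mido.Message('note_on',
                                          note=NOTE_FOR_BUTTON[event.button], velocity=100))
            elif event.type == pygame.JOYBUTTONUP and event.button in NOTE_FOR_BUTTON:
                outport.send(mido.Message('note_off', note=NOTE_FOR_BUTTON[event.button]))
        pygame.time.wait(5)                        # don't hog the CPU
```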

Specialist Software

Today's mainstream music software can be too complex to be accessed effectively by severely disabled users, even with the benefit of overlays. Combining overlays with standard software can also create navigation problems, as applications cannot communicate information back to the overlay, such as the 'now' position within a score. The E-Scape sequencing software was developed by Tim Anderson in the early 1990s, in conjunction with the Drake Research Project, to enable people with severe disabilities or injuries to do creative work with music even if they could make only one small movement. Later developments added live performance features. E-Scape was the first sequencer designed specially for disabled musicians, with integrated facilities enabling total control from switches. It provides automated auditioning for editing 'by ear', and guidance through processes, and has enabled many people who could not possibly use standard music software with a switch overlay to learn about and create music without help.

E-Scape is a sequencer designed with special needs applications in mind.

There are a couple of specific terms within E-Scape to describe how music creation can be made easier for the user. Conducting is the process of stepping through a prepared set of single notes or phrases in order. The more switches a player can access, the more control and expressivity can be achieved. With one switch, you can play each phrase in order. With two, you can also step backwards freely, allowing improvisation. The second switch could also be used to swap to a different set of phrases or a different sound, or to play the same notes but louder, or with a key change. With as few as two switches, you can effectively play in a band. Three or four switches bring even more flexibility, allowing access to a range of functions; for instance, a pair of switches could be assigned to stepping through phrases and selecting which to trigger. In E-Scape, Conducting is done on the same screen as composition. Each track can be split into segments and you can move between tracks, allowing a switch user to also construct and edit their own material before performing it.
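
The two-switch version of Conducting can be sketched in a few lines, as below. This is an illustration of the concept rather than E-Scape's own code: it assumes the Python mido library, and the port names, switch note numbers and phrases are placeholders.

```python
# Minimal sketch of two-switch Conducting (an illustration of the concept, not
# E-Scape itself): one switch plays the next pre-prepared phrase, the other
# steps back so a phrase can be repeated or varied. Assumes python-mido; port
# names, switch note numbers and the phrases themselves are placeholders.
import time
import mido

PHRASES = [                        # each phrase is a list of (note, seconds) pairs
    [(60, 0.25), (62, 0.25), (64, 0.5)],
    [(64, 0.25), (62, 0.25), (60, 0.5)],
    [(67, 0.5), (65, 0.5)],
]
FWD_SWITCH, BACK_SWITCH = 60, 61   # notes sent by the two switches (assumption)
pos = 0

def play(phrase, outport):
    for note, length in phrase:
        outport.send(mido.Message('note_on', note=note, velocity=100))
        time.sleep(length)
        outport.send(mido.Message('note_off', note=note))

with mido.open_input('Switches') as inport, mido.open_output('To Synth') as outport:
    for msg in inport:
        if msg.type != 'note_on' or msg.velocity == 0:
            continue
        if msg.note == FWD_SWITCH:
            play(PHRASES[pos], outport)
            pos = (pos + 1) % len(PHRASES)      # move on to the next phrase
        elif msg.note == BACK_SWITCH:
            pos = (pos - 1) % len(PHRASES)      # step back to repeat or improvise
```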

Scrolling is the E-Scape term for the process of converting bodily movement into performance data, with each body position mapped to a phrase (or note) within the set. Position is detected by a sensor, or by the sideways movement of a trackball manipulated by hand or foot, or mounted under the chin. The more phrases in the set, the more accurate the user's positioning has to be. For example, using an Apollo Ensemble Tilt sensor, a beginner could have the angle of the head mapped to three bass guitar notes, so that nodding the head up and down will play the riff. In a more complicated setup, a Soundbeam system could be used to detect a performer's position on a 10-metre stage. A motorised wheelchair could then be used to 'play the space': if 10 notes were mapped, then each metre of movement would play the next note. Skilled wheelchair drivers could also move out of the beam and cut into it again further up, playing individual notes from the set rather than just running up and down a scale. Again, if you can also use one or more switches, you can select which set of phrases to use, introducing the potential for more variety. Note that a 'switch' can also be activated by moving to a specific position within the scrolling range — for example, at the far end of the movement.
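
As a rough sketch of Scrolling (again an illustration, not E-Scape itself), the example below splits a continuous position value into zones, retriggers only when a new zone is entered, and treats the very end of the travel as the 'switch' that swaps to another set of notes. It assumes the position arrives as a MIDI CC and uses the Python mido library; the port, CC number and note sets are placeholders.

```python
# Sketch of Scrolling (an illustration, not E-Scape itself): a continuous
# position value, assumed to arrive as a MIDI CC from a beam or tilt sensor, is
# split into zones, each zone triggering one note of the set, and the far end of
# the travel acts as a 'switch' that swaps to another set. Assumes python-mido;
# the port name, CC number and note sets are placeholders.
import mido

NOTE_SETS = [[40, 43, 45], [41, 44, 46]]   # e.g. two three-note bass riffs
POSITION_CC = 20                           # CC carrying the position (assumption)
active_set, zone = 0, None

with mido.open_input('Position Sensor') as inport, mido.open_output('To Synth') as outport:
    for msg in inport:
        if msg.type != 'control_change' or msg.control != POSITION_CC:
            continue
        if msg.value > 120:                # far end of the travel acts as a switch
            outport.send(mido.Message('control_change', control=123, value=0))  # all notes off
            active_set = (active_set + 1) % len(NOTE_SETS)
            zone = None
            continue
        notes = NOTE_SETS[active_set]
        new_zone = min(msg.value * len(notes) // 121, len(notes) - 1)
        if new_zone != zone:               # only retrigger when entering a new zone
            if zone is not None:
                outport.send(mido.Message('note_off', note=notes[zone]))
            outport.send(mido.Message('note_on', note=notes[new_zone], velocity=100))
            zone = new_zone
```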

Skoog is an integrated software and hardware system with a distinctive cube-shaped controller. Photo: Simon Meadows / Drake Music Scotland

Skoog is another integrated system, this time with its own very distinctive controller which talks to the Skoog software. Although not as versatile as the Apollo Ensemble, the Skoog's advantage lies in its very attractive look and feel, as well as the simplicity of the internal software, which makes it ideal for use in the classroom, as teachers need no specialist knowledge. The controller is a roughly hand-sized cube, each face having a spongy, coloured button which functions as a momentary switch when pressed up or down, but will also send variable data such as pitch-bend or MIDI continuous controller information when moved side-to-side. Not only does Skoog have a built-in sampler so you can record your own sounds, but the Mac and Windows software can interface with other programs through a virtual MIDI port. You could, for instance, use it with a DAW's virtual instruments if you wanted more options than the built-in sounds. Alternatively, key presses on the Skoog can be assigned to start and stop the transport to play pre-programmed sequences, or to change instruments. Because the switches are colour-coded, music can be written out in the form of graphic 'skores', using coloured beams to show the user which button to press and for how long.
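
To give a flavour of what that virtual MIDI port makes possible, here is a small sketch of an external helper (not part of the Skoog software itself) that listens to the Skoog's output and toggles a sequencer's transport with MIDI start/stop whenever one chosen face is pressed, passing everything else straight through. It assumes the Python mido library, and the port names and note number are placeholders.

```python
# Not part of the Skoog software, just a sketch of the virtual-MIDI-port idea:
# listen to the port the Skoog software exposes (name assumed) and toggle a
# sequencer's transport with MIDI start/stop whenever one chosen face/note is
# pressed. Assumes python-mido; the port names and note number are placeholders.
import mido

TRANSPORT_NOTE = 60      # the note the chosen Skoog face sends (assumption)
running = False

with mido.open_input('Skoog MIDI Out') as inport, mido.open_output('To DAW') as outport:
    for msg in inport:
        if msg.type == 'note_on' and msg.note == TRANSPORT_NOTE and msg.velocity > 0:
            outport.send(mido.Message('stop' if running else 'start'))
            running = not running
        else:
            outport.send(msg)    # pass everything else straight through to the DAW
```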