Everyman at the National Theatre (photo: Richard Hubert Smith)
UK - What colour is the voice of a deity? This was the synaesthetic question faced by the design team at London's National Theatre in their production of Everyman, a new adaptation of the classic 15th-century morality play.

This isn't your medieval ancestors' staging of Everyman. Carol Ann Duffy's adaptation recasts the play as a critique of the prodigal present, as a modern Everyman (Chiwetel Ejiofor) is forced to confront his solipsism, materialism and wasteful ways in the face of imminent death. The National's production, directed by Rufus Norris with lighting design by Paul Anderson, set design by Ian MacNeil, video design by Tal Rosner and sound design by Paul Arditti, uses a synthesis of up-to-the-moment technology to present its new interpretation.

At the beginning of the play, God - who takes the form of a cleaning woman - becomes fed up with the self-centred wastefulness of humanity after a run-in with a drunken, revelling Everyman. To convey the divine omnipotence of this seemingly unassuming character, the creative team decided that the parameters of the animated video projections should respond to the sound of the performer's voice.

"The pitch of the voice might change, say the colour of the video, or the volume might change how fast it was moving," explains Dan Murfin, the lighting control supervisor at The National. "Every time they said a word, you would get a flash of video and change colour in time with the speech. So that was the start of it, and then we thought - well how do we actually do that?"

The National is a beta test site for ETC's Eos software, and Murfin was able to use features in the new v2.3 software and an Eos Ti desk to solve the team's interdisciplinary dilemma. The software update enables data to be sent and received via OSC (Open Sound Control), a networking protocol developed in the late 1990s for electronic music performance. The protocol is increasingly used across show control disciplines, because - unlike MIDI - it allows for bidirectional communication and control between devices.
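
An OSC message is simply an address pattern plus typed arguments sent over the network. As a rough illustration of how one might be sent from Python using the open-source python-osc library (the address and port below are invented, not the production's):

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("192.168.1.10", 8000)  # target device and OSC port
    # An OSC message: an address pattern plus typed arguments
    client.send_message("/show/video/layer/1/opacity", 0.75)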

"MIDI is quite a blunt instrument," remarks Murfin. "All you're able to do is to say 'go now'. Whereas the advantage of OSC is that it can get feedback. You're able to query things - 'if this, then go to cue'."

The team uses a program called Ableton Live to turn the parameters of the actor's voice into OSC commands in real time. Those commands are then sent into Eos, which applies them to parameters in the Catalyst video server. "Anything you can analyse within the sound - you can map that to anything. You could map the volume to the colour," says Murfin.
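
In the production that analysis happens inside Ableton Live; the Python sketch below shows the same idea in miniature, with a synthesised audio block standing in for the microphone and an invented video-server address:

    import numpy as np
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("192.168.1.10", 8000)

    samplerate = 48000
    t = np.linspace(0, 0.05, int(samplerate * 0.05), endpoint=False)
    block = 0.4 * np.sin(2 * np.pi * 220 * t)   # stand-in for a 50ms mic buffer

    rms = float(np.sqrt(np.mean(block ** 2)))   # loudness of this block
    hue = min(rms * 2.0, 1.0)                   # "map the volume to the colour"

    client.send_message("/video/layer/1/hue", hue)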

The addition of OSC capability in Eos has allowed the team to centralise their control and content creation workflow. "In the past," Murfin explains, "you would set everything up in Catalyst, and then trigger it using Eos. Now, Catalyst is managing getting the content onto the projectors, but all of the artistic side - the cueing, what content is displayed on what layer, the intensity, the colour - all those things are directly manipulated through Eos."
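
The console side of that workflow can also be reached over OSC. As a sketch, assuming ETC's documented /eos/... command patterns (channel and cue numbers here are invented; consult the Eos show-control guide for the exact scheme):

    from pythonosc.udp_client import SimpleUDPClient

    eos = SimpleUDPClient("192.168.1.10", 8000)

    # Set a hypothetical Catalyst layer, patched as channel 101, to 80%:
    eos.send_message("/eos/newcmd", "Chan 101 At 80#")

    # Fire the cue that brings up the next piece of video content:
    eos.send_message("/eos/cue/1/12/fire", [])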

As technology develops, the tech process is also changing. "You've got the ability to control each other's systems, which is something we've never really had before," says Murfin. "You have to talk a lot more when you're in production, because the sound is now directly impacting the creativity of the video. If the sound turns the volume up on an output, it could change the colour of the video." With the video and the lighting integrated, Murfin adds, a single operator can run both elements of the show.

Murfin is eager to explore more applications of Eos' new OSC capability, from the integration of lights, video and sound on upcoming shows to custom applications that use OSC to interact with the lighting system. "Anyone can develop an app that can do anything. We can now have a bit of software that documents your rig which can talk directly to Eos."
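
A rig-documentation tool of that kind might start as little more than a query-and-reply loop. The sketch below assumes ETC's published /eos/get/... request and /eos/out/get/... reply conventions, which should be verified against the console:

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    def on_reply(address: str, *args) -> None:
        print(f"{address} -> {args}")            # e.g. the patch count

    dispatcher = Dispatcher()
    dispatcher.map("/eos/out/get/*", on_reply)
    server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)  # reply port (assumed)

    eos = SimpleUDPClient("192.168.1.10", 8000)  # console receive port (assumed)
    eos.send_message("/eos/get/patch/count", []) # ask how many channels are patched

    server.handle_request()                      # wait for one reply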

