From acquisition through to play-out and everything in between, Kevin Hilton explores how modern broadcasters are dealing with audio (and its associated data) through the entire production process.
Workflow is one of those annoying, buzzword-type terms, all the more irritating because it does sum up the process of getting material from one end of the broadcast production chain to the other. Another annoyance is that workflows have always existed in television broadcasting and post production; today’s usage of the phrase implies that it is something new, when a methodology for moving the building blocks of TV productions around – film, videotape and audio tape as it was – had long been in place.
The main difference now, however, is that the entire process is moving towards tapeless operation, based on data files containing sound, pictures and, just as important, information on the content of each file. This is covered by another techno-term: metadata, meaning data about data. It is packaged together with the audio and video in file ‘wrappers’ so that a programme or its constituent parts can be identified easily as it travels along the workflow.
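The idea of a wrapper can be sketched in code. This is an illustrative model only – real broadcast containers such as MXF are binary formats with far richer structure – but the principle is the same: the audio and video essence and the metadata describing them travel together as one object.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of a file wrapper: essence plus
# descriptive metadata carried in a single container.
@dataclass
class Wrapper:
    video_essence: bytes
    audio_essence: bytes
    metadata: dict = field(default_factory=dict)

clip = Wrapper(
    video_essence=b"...",
    audio_essence=b"...",
    metadata={"title": "Evening News", "reel": "A001", "timecode": "10:00:00:00"},
)

# Anywhere along the workflow, the content can be identified
# without opening the essence itself.
print(clip.metadata["title"])  # Evening News
```

Because identification data rides inside the same container as the programme material, it cannot be separated from it in transit – the point of wrapping in the first place.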
Audio is a standalone but connected part of the overall process: it goes through its own recording/acquisition and post-production stages, as well as being a component of the video stream through the increasing use of embedding (though sound for live broadcast and recording for later post-production should not be considered separate points in the chain).
Checking it twice
Sound supervisor Julian Gough, formerly with BBC OBs and then SIS LIVE and now running his own company, Noises Off, advises clients that if they make a back-up multi-track recording at the same time as the live transmission, at no extra cost, it can be post-produced at a later date should the need arise for further distribution.
“My main work is the live broadcast of basic stereo with the pictures,” he explains, “but in addition to that I can make a multi-track of everything at the time. This means there is the ability to go back and revisit any recording afterwards.” He adds that having at least two copies of something is crucial.
Gough relies on the Merging Technologies Pyramix audio editing-recording workstation, although he admits that initially he used it only for its audio mixing capability: “I never used it just as a multi-track recorder but for creating a live mix. There is the benefit of the multi-track recorder as well, which, nine times out of ten, will now be running in the background. That gives the option to redo something if it was not good enough on the day.”
Pyramix offers four independent background recorders that can take four separate streams. Johan Wadsten, software products manager at Merging, explains that any of these can be taken and put into a timeline for editing while the record process is still underway without affecting the recordings. “An EDL [edit decision list] is being created on the media file while the edit is being done on the timeline,” he says.
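The key point – that edits are a list of references to regions of a media file rather than copies of the audio – can be sketched as follows. The structure below is a hypothetical simplification, not Pyramix’s actual EDL format: each entry points at a span of a source file, which the background recorder can keep appending to undisturbed.

```python
from dataclasses import dataclass

@dataclass
class EdlEntry:
    """One edit: a region of a source file placed on the timeline."""
    source_file: str    # media file, which may still be growing
    source_in: float    # seconds into the source recording
    source_out: float   # end of the region in the source
    timeline_in: float  # where the region lands on the edit timeline

def total_duration(edl):
    """Length of material referenced by the edit list, in seconds."""
    return sum(e.source_out - e.source_in for e in edl)

# The editor builds this list while take1.wav is still being written;
# nothing here touches the recording itself.
edl = [
    EdlEntry("take1.wav", 0.0, 12.5, 0.0),
    EdlEntry("take1.wav", 30.0, 45.0, 12.5),
]
print(total_duration(edl))  # 27.5
```

Because the list only ever reads from the source file, editing and recording can proceed in parallel without conflict.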
In analogue and digital tape workflows, written cue sheets and notes were vital not only in identifying what a recording was of but also which takes were good. Just like the recording, editing, and mixing functions, this job has moved into the virtual, database-driven world. In the case of Pyramix, says Wadsten, this began with media-based markers that could be used to flag a good take within a file. “Now metadata is written into all files,” he says, “which allows the production notes to flow down the line to the editors. The idea is to make the process as simple as possible because losing the information is almost as bad as losing the recording itself.”
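The marker idea can be illustrated with a minimal sketch – these field names and the JSON sidecar shape are assumptions for the example, not any product’s actual format – standing in for the cue sheets once written by hand:

```python
import json

# Hypothetical marker records: each flags a point in a recording,
# carrying the production notes that used to live on paper.
markers = [
    {"file": "take1.wav", "offset_s": 30.0, "label": "good take", "note": "use this one"},
    {"file": "take1.wav", "offset_s": 92.0, "label": "retake", "note": "plane overhead"},
]

def good_takes(markers):
    """Return only the markers flagged as good takes."""
    return [m for m in markers if m["label"] == "good take"]

# Serialised alongside (or inside) the media file, the notes
# travel down the line with the recording itself.
sidecar = json.dumps(markers, indent=2)

print([m["offset_s"] for m in good_takes(markers)])  # [30.0]
```

Keeping the notes machine-readable and attached to the media is what lets an editor downstream find the good take without phoning the recordist.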
Read the rest in our February digital edition.