Imagine watching television coverage of a football game in which none of the cameras could zoom in. It would be pretty frustrating, cutting from one wide shot to another without ever getting a close look at any of the players. That’s pretty much how things currently stand with audio: unless someone has their own microphone, or is within line of sight of a parabolic mic, you’re not going to hear them very well. That may soon change, however. Norway’s Squarehead Technology has developed AudioScope, a system that lets users acoustically “zoom in” on individual people in a large area, and follow them as they move around.
AudioScope works much like sonar does underwater. A centrally located dish contains an array of up to 315 small microphones, each one pointed in a slightly different direction. It also contains a wide-angle camera, which displays its area of coverage. Using the image from that camera on a control station, users can select the area they wish to zoom in on, and signal processing algorithms will concentrate on obtaining audio from that area. Because sound from the chosen spot reaches the different mics at slightly different times, their signals are automatically delayed so that they all line up – when the aligned channels are summed, audio from that spot reinforces itself while sound from everywhere else tends to cancel out.
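The technique described here – delaying each microphone’s signal so that sound from the chosen spot lines up across all channels, then summing – is classic delay-and-sum beamforming. The following is a minimal sketch of that general idea, not Squarehead’s actual implementation; the sample rate, geometry and whole-sample delays are illustrative assumptions (a real system would use fractional delays and far more sophisticated processing).

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air
SAMPLE_RATE = 48_000    # Hz (assumed for illustration)

def delay_and_sum(signals, mic_positions, target, fs=SAMPLE_RATE, c=SPEED_OF_SOUND):
    """Acoustically "zoom in" on `target` via delay-and-sum beamforming.

    signals:       (n_mics, n_samples) array of recorded audio, one row per mic
    mic_positions: (n_mics, 3) coordinates of each microphone, in metres
    target:        (3,) coordinates of the point to focus on
    """
    # Sound from the target reaches each mic after dist/c seconds.
    dists = np.linalg.norm(mic_positions - target, axis=1)
    # Delay each channel so every arrival lines up with the farthest mic
    # (rounded to whole samples here; real systems use fractional delays).
    delays = np.round((dists.max() - dists) / c * fs).astype(int)
    n = signals.shape[1]
    out = np.zeros(n)
    for sig, d in zip(signals, delays):
        out[d:] += sig[:n - d]
    # Averaging keeps the aligned target signal at full strength while
    # uncorrelated sound from other directions is attenuated.
    return out / len(signals)
```

Pointing the beam at a different spot is just a matter of recomputing the delays for a new `target` – no physical steering is involved, which is why the same principle works on recorded footage too.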
Using a trackball, users can also select a moving target on their screen and instruct the system to track it. Even if a certain person or area wasn’t targeted in the initial coverage, it still can be when that footage is replayed, as audio from all 315 mics is stored in synchronization with the video.
While the system was initially designed with television production in mind, there is also a version designed for conferences. In automatic mode, it will zoom in on whoever is speaking, amplifying their voice for everyone else to hear – this could be particularly useful for picking up people asking questions, or lecturers who don’t want to be bound to a podium.
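The article doesn’t say how automatic mode finds whoever is speaking. One plausible approach – offered purely as a hedged sketch, not as Squarehead’s confirmed method – is a steered-response-power search: beamform toward each candidate position (say, each seat in the room) and pick whichever beam carries the most energy. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def steered_power(signals, mic_positions, point, fs=48_000, c=343.0):
    """Energy of the delay-and-sum beam steered at `point`."""
    dists = np.linalg.norm(mic_positions - point, axis=1)
    delays = np.round((dists.max() - dists) / c * fs).astype(int)
    n = signals.shape[1]
    out = np.zeros(n)
    for sig, d in zip(signals, delays):
        out[d:] += sig[:n - d]
    # A beam aimed at the true source adds its channels coherently,
    # so it carries more energy than beams aimed elsewhere.
    return float(np.sum(out ** 2))

def locate_speaker(signals, mic_positions, candidates, fs=48_000, c=343.0):
    """Return the candidate point whose steered beam has the most energy."""
    powers = [steered_power(signals, mic_positions, p, fs, c) for p in candidates]
    return candidates[int(np.argmax(powers))]
```

In a conference setting, `candidates` could simply be the known seat positions; the system would then steer its “zoom” at whichever one `locate_speaker` returns.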