Pilot ended 27th April 2017
Philharmonic Venue Explorer
Control the sights and sounds of a stellar performance from the BBC Philharmonic Orchestra. Explore the stage, the orchestra and the music in ways you can't on TV or radio.

The Inside Story

Venue Explorer lets you interactively explore an ultra-high-definition panoramic video of an event.

What we're doing

Current broadcast coverage of live events provides an experience very different to actually being there: the views are controlled by the camera operators and the director, rather than chosen by the viewer. For events where many different things are happening at the same time, we would like to give viewers the ability to look around freely. Furthermore, we would like to provide an audio mix corresponding to the part of the scene they have chosen to look at, and details relating to what they are seeing. The Venue Explorer project is looking at one way of offering this kind of service, building on our previous work on panoramic imaging in the FascinatE project and our work on navigating around live events in the VSAR project.

How it works

An ultra-high-definition video of a live scene is captured from a fixed wide-angle camera overlooking the whole event. In theory, the video could be delivered to viewers over a high-bandwidth link and displayed on a large ultra-high-definition screen, but such networks and displays are expensive and not yet commonly available. Instead, we have developed a way of displaying the image in a conventional tablet or PC web browser, allowing the user to pan and zoom around the scene to explore the areas of most interest to them, much as they would in a map application. This means we only have to transmit the portion of the scene they are currently looking at, significantly reducing the bandwidth requirements. This could work as either a stand-alone or a second-screen experience; a tablet is an obvious starting point for the latter.
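
As a rough sketch of the viewport-to-tile mapping this implies (the tile size, panorama dimensions and function names here are illustrative assumptions, not details of the pilot's implementation), a client could work out which tiles to request like this:

# A minimal sketch: given the user's pan/zoom state, compute which tiles
# of the ultra-HD panorama intersect the viewport. Only these need to be
# transmitted. All dimensions and names below are assumptions.

TILE_W, TILE_H = 640, 360       # assumed tile size in source pixels
PANO_W, PANO_H = 7680, 2160     # assumed panorama size

def visible_tiles(cx, cy, zoom, screen_w=1280, screen_h=720):
    """Return the (col, row) indices of tiles covering the viewport.

    cx, cy -- viewport centre in panorama pixel coordinates
    zoom   -- scale factor: 1.0 shows source pixels 1:1, <1.0 zooms out
    """
    # Size of the viewport measured in panorama pixels at this zoom.
    view_w = screen_w / zoom
    view_h = screen_h / zoom

    # Clamp the viewport to the panorama bounds.
    left = max(0.0, cx - view_w / 2)
    top = max(0.0, cy - view_h / 2)
    right = min(PANO_W, cx + view_w / 2)
    bottom = min(PANO_H, cy + view_h / 2)

    # Only tiles intersecting the viewport need to be fetched.
    cols = range(int(left // TILE_W), int((right - 1) // TILE_W) + 1)
    rows = range(int(top // TILE_H), int((bottom - 1) // TILE_H) + 1)
    return [(c, r) for r in rows for c in cols]

print(len(visible_tiles(3840, 1080, zoom=0.25)))  # wide shot: 48 tiles
print(len(visible_tiles(3840, 1080, zoom=2.0)))   # close-up: 4 tiles

Zoomed out, many coarse tiles are needed to cover the view; zoomed in, only a handful of tiles around the point of interest are fetched, which is where the bandwidth saving comes from.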

To provide an audio feed matching the area that the user is currently looking at, we create audio feeds relating to individual areas of the scene, plus an overall mix suitable for a wide view, and mix between these as the view changes. When viewing a wide shot, the audio conveys the overall ambience, similar to what would be heard by someone in the audience. As the viewer zooms in to an area, the audio is re-mixed to suit the selected region. For an application at an athletics stadium, the audio feeds for the different events could be obtained from the existing outside broadcast operation, and the ambience feed from a microphone near the camera. At a music or arts event, different audio mixes could be created for different areas, using feeds from many microphones. The work on audio is being carried out by R&D’s Audio team, and forms part of our work on the ICoSOLE project.
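
To illustrate what such a view-dependent mix might look like, the sketch below fades per-region feeds up as the viewer zooms towards them while fading the overall ambience down. The gain laws, distances and names are assumptions for illustration, not the project's actual mixing rules.

# A hypothetical sketch of view-dependent audio gains: region feeds are
# weighted by how zoomed-in the view is and how close each region sits
# to the viewport centre; ambience dominates the wide shot.

import math

def region_gains(cx, cy, zoom, regions, zoom_wide=0.5, zoom_close=2.0):
    """Compute a gain (0..1) per named region plus the ambience feed.

    regions -- dict mapping name -> (x, y) centre in panorama pixels
    """
    # Map the zoom level to 0 (wide view) .. 1 (tight close-up).
    t = (zoom - zoom_wide) / (zoom_close - zoom_wide)
    t = min(1.0, max(0.0, t))

    gains = {}
    for name, (rx, ry) in regions.items():
        # Regions nearer the viewport centre get more weight.
        dist = math.hypot(cx - rx, cy - ry)
        proximity = 1.0 / (1.0 + dist / 1000.0)
        gains[name] = t * proximity

    # Crossfade: ambience is full in the wide shot, fades out close up.
    gains["ambience"] = math.cos(t * math.pi / 2)
    return gains

regions = {"strings": (2000, 1200), "brass": (5500, 1100)}
print(region_gains(2100, 1150, zoom=0.4, regions=regions))  # mostly ambience
print(region_gains(2100, 1150, zoom=2.0, regions=regions))  # strings forward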

We also acquire data relating to the scene. At an athletics event, this could include background information on each of the disciplines being contested, and live information giving the latest results; at an arts event, it might include the names and biographies of actors. We use a version of the authoring tool we developed for the Augmented Video Player, modified to receive a live video input, to specify the location in the image associated with each live data feed, and also to create additional overlays manually. The user can choose to overlay this information on the image, aligned with the corresponding location, providing an ‘augmented reality’ display. This approach could in future be automated using techniques such as object and face recognition, potentially allowing details of every athlete visible in a stadium to be made available as an overlay.
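
One plausible way of anchoring such data to a position in the panorama and projecting it into the current view is sketched below; the Overlay fields and the simple projection maths are illustrative assumptions, not the format used by the authoring tool.

# A hypothetical sketch of position-anchored overlay data and its
# projection into the user's current view for an AR-style display.

from dataclasses import dataclass

@dataclass
class Overlay:
    x: float        # anchor position in panorama pixel coordinates
    y: float
    kind: str       # e.g. "results", "schedule", "biography"
    text: str       # the data to display at that position

def project(overlay, cx, cy, zoom, screen_w=1280, screen_h=720):
    """Map a panorama-anchored overlay to screen coordinates.

    Returns (sx, sy), or None if the anchor falls outside the viewport.
    """
    sx = (overlay.x - cx) * zoom + screen_w / 2
    sy = (overlay.y - cy) * zoom + screen_h / 2
    if 0 <= sx < screen_w and 0 <= sy < screen_h:
        return sx, sy
    return None

conductor = Overlay(3900, 1000, "biography", "Conductor bio")
print(project(conductor, cx=3840, cy=1080, zoom=2.0))  # -> (760.0, 200.0)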

Venue Explorer is an example of one way in which broadcasting could move towards what is known as an object-based approach. Current TV systems send everyone the same audio and video, mixed by the broadcaster; in an object-based system, the content is divided into separate ‘objects’: the video is divided into tiles, the audio is sent as a number of separate streams relating to particular areas of the picture, and overlay data is sent separately, with information about where in the image it belongs and what kind of data it is (results, schedule, etc.). The user’s application then assembles these objects according to the view the user has selected.
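
As a final illustration of the object-based idea, the sketch below shows how the separate objects might be described to the client, and how an application could select just those needed for one view. The schema is an invented example, not the actual Venue Explorer wire format.

# A hypothetical object manifest: video tiles, audio stems and data
# overlays are separate objects, each carrying enough metadata for the
# client to decide whether it is needed for the current view.

manifest = {
    "video": {
        "tile_size": [640, 360],            # tiles of the UHD panorama
        "url_template": "tiles/{col}_{row}.m4s",
    },
    "audio": [
        {"name": "ambience", "region": None},               # whole scene
        {"name": "strings",  "region": [1600, 800, 2400, 1600]},
        {"name": "brass",    "region": [5100, 700, 5900, 1500]},
    ],
    "data": [
        {"pos": [3900, 1000], "kind": "biography", "text": "Conductor bio"},
        {"pos": [2000, 1200], "kind": "schedule",  "text": "Programme"},
    ],
}

def objects_for_view(manifest, view_box):
    """Select the audio and data objects intersecting a viewport.

    view_box -- (left, top, right, bottom) in panorama pixel coordinates
    """
    l, t, r, b = view_box
    audio = [a["name"] for a in manifest["audio"]
             if a["region"] is None
             or not (a["region"][2] < l or a["region"][0] > r
                     or a["region"][3] < t or a["region"][1] > b)]
    data = [d for d in manifest["data"]
            if l <= d["pos"][0] <= r and t <= d["pos"][1] <= b]
    return audio, data

# Zoomed in on the strings: ambience plus the strings stem, one overlay.
print(objects_for_view(manifest, (1500, 700, 2600, 1700)))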
