Agenda

Introduction to the SMFoLD Workshop

Chris Chinnock, President and Founder, Insight Media

The workshop on Streaming Media for Field of Light Displays (SMFoLD) is designed to profile the status of light field acquisition, display, streaming and interface technology, as well as standards activities in these areas. The workshop will also explore the idea of creating a consortium to help advance the related discussion, education and standards efforts.

Light Field Capture Systems and Workflows for Virtual Reality and Cinema

Jon Karafin, Head of Light Field Video, Lytro and Tim Milliron, VP of Engineering, Lytro

Lytro is building the world’s most powerful Light Field imaging platform, enabling artists, scientists and innovators to pursue their goals with unprecedented levels of freedom and flexibility. With the announcement of Lytro Immerge, the world’s first Light Field solution for Virtual Reality (VR), in late 2015, followed by the April 2016 launch of Lytro Cinema, the world’s first Light Field capture system for cinematic content, Lytro is firmly at the forefront of the generational shift from legacy 2D imaging to a 3D volumetric Light Field world. The presentation will discuss both Light Field solutions and will follow the image capture workflow through file formats, metadata handling, data rates and rendering across the computational pipeline.

Light Field Data Management and Transmission for Media Production

Siegfried Foessel, Joachim Keinert, Frederik Zilly – Fraunhofer IIS

Compared to traditional 2D video, light field video offers much greater flexibility in the postproduction and visualization of captured content. Because light fields inherently capture different viewpoints of the set, the selection of which aspect of the scene is finally presented to the viewer does not need to be made during acquisition; it can be made later in post-production, or even left to the user. Based on this fundamental concept, many new special effects become possible, such as variable depth of field, depth-based overlay of different scenes, dynamic scene browsing, or virtual camera movements.
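To illustrate how one such effect, variable depth of field, can be realized after capture, here is a minimal sketch of shift-and-add refocusing over an array of sub-aperture views. It assumes pre-aligned NumPy views and a hypothetical disparity parameter; it is an illustration of the general technique, not the workflow described in the talk.

    # Illustrative sketch only: synthetic refocusing of a camera-array light field
    # by shift-and-add. Assumes views[r][c] are aligned HxWx3 NumPy arrays; the
    # 'disparity' parameter (hypothetical) selects the plane that appears in focus.
    import numpy as np

    def refocus(views, disparity):
        rows, cols = len(views), len(views[0])
        cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
        acc = np.zeros_like(views[0][0], dtype=np.float64)
        for r in range(rows):
            for c in range(cols):
                # Shift each view in proportion to its offset from the central view,
                # then accumulate; averaging blurs everything off the chosen plane.
                dy = int(round((r - cy) * disparity))
                dx = int(round((c - cx) * disparity))
                acc += np.roll(views[r][c], shift=(dy, dx), axis=(0, 1))
        return acc / (rows * cols)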

This flexibility, however, comes with a tremendous increase in the data volume to capture, process and display. Compared to traditional video, a factor of nine is easily reached and can grow to several hundred. The proposed presentation analyses the resulting challenges for capture, post-production and display. Based on a Nuke workflow, it investigates the different data representations needed during acquisition, editing and display in order to enable novel media experiences for both traditional media delivery chains and interactive devices such as head-mounted displays. By these means, the presentation contributes to the collection of requirements for future standards in the domain of storage and compression.
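For a rough sense of scale, the back-of-the-envelope sketch below estimates the raw data rate of a 3x3 camera rig, the source of the "factor of nine." All figures are assumed for illustration and are not taken from the presentation.

    # Back-of-the-envelope data-rate estimate for a light-field capture rig.
    # All figures are assumed for illustration only.
    views = 3 * 3                    # a 3x3 camera array -> the "factor of nine"
    width, height, fps = 3840, 2160, 30
    bits_per_pixel = 30              # e.g. 10-bit RGB
    raw_gbps = views * width * height * fps * bits_per_pixel / 1e9
    print(f"~{raw_gbps:.0f} Gbit/s uncompressed")   # roughly 67 Gbit/s for this rig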

Holographic Video Display System

Lloyd J. LaComb, Jr.[1], V. Michael Bove[2], Daniel Smalley[3]

[1] TIPD, LLC and College of Optical Sciences, University of Arizona; [2] Media Lab, Massachusetts Institute of Technology; [3] College of Electrical Engineering, Brigham Young University

Command and control battle maps and mission planning can be greatly enhanced by representing the battlefield in three dimensions (3D). Showing perspective views, depth and occlusions allows for a better understanding of the situation and helps avoid errors when interpreting the terrain. Digital Terrain Elevation Data (DTED) maps already exist and are in use, but they are projected on two-dimensional display devices, which limits their effectiveness. Streaming 3D geometric data from LIDAR and SAR is not currently available to operators, but it is expected to become a critical component of battlefield planning within the next decade. As the DoD modernizes its facilities, display systems capable of combining streaming data with existing DTED data will be required. The ultimate goal of this research effort is to develop a full-parallax, solid-state 3D display with no mechanically moving parts. The HVD-GWSS system should be capable of achieving video-rate display speeds with electronically generated holograms, require no special viewing equipment, and be able to fuse streaming video and streaming geometric data with existing static content. The presentation will review the horizontal-parallax-only and full-parallax photorefractive holographic display systems, the multi-camera capture and integral imaging display system, and a new full-parallax system based on lithium niobate acousto-optic and electro-optic modulators capable of producing video-rate light field displays. The bandwidth and streaming requirements for these systems will be reviewed against existing industry standards.

Holographic Angular Slice 3D Display System and SMFoLD

C. E. (Tommy) Thomas, Jr., S. L. Kelley, P. G. Jones; Third Dimension Technologies

Third Dimension Technologies (TDT) has developed a horizontal-parallax-only (HPO) Holographic Angular Slice 3D (HAS3D) display. The display is an electronic version of holographic stereography. The HAS3D display technology continuously blends multiple angular viewing perspectives from an array of projectors to create bright, full-color 3D imagery at video frame rates. The display is vergence and accommodation matched and does not suffer from the display sickness common with stereo 3D, lenticular, and parallax barrier displays. An overview will be given of the display technology, the data types used or usable by the display, and the data rates required to feed it. The presentation will also discuss the problems of reformatting 3D scenes originally intended for a 2D display so that they present well on a true 3D display, along with “just in time” viewpoint injection: feedback from the 3D display, through the SMFoLD streaming standard, to the source application so that acceptable 3D scenes can be obtained from it.

Light-field Display Architecture and the Complexities of Light-field Rendering

Thomas Burnett, CTO, FoVI3D

The ability to resolve depth within a scene, whether natural or artificial, improves our spatial understanding of the scene and, as a result, reduces the cognitive load of analyzing and collaborating on complex tasks. A light-field display projects 3D imagery that is visible to the unaided eye (without glasses or head tracking) and allows perspective-correct visualization within the display’s projection volume. Binocular disparity, occlusion, specular highlights, gradient shading, and other expected depth cues are correct from the viewer’s perspective, as in the natural real-world light-field.

A light-field can be described as the set of rays that pass through every point in space. The quality/fidelity of a light-field projection depends on the density of rays that can be angularly distributed over a projection frustum and on the ability of the optical system to preserve spatial/angular detail.
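Stated as a simple product, the ray budget of such a projection is the number of spatial emitters (hogels) times the number of directions each one resolves; the figures below are assumed for illustration only, not specifications of any particular display:

    N_{\text{rays}} = N_{\text{hogels}} \times N_{\text{directions}},
    \quad \text{e.g. } (512 \times 512)\ \text{hogels} \times (90 \times 90)\ \text{directions} \approx 2.1 \times 10^{9}\ \text{rays per frame.}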

Under the DARPA Urban Photonic Sand-table Display (UPSD) program, Zebra Imaging developed a light-field display architecture and built four large light-field display prototypes for human factors evaluation and testing. The display architecture consisted of an array of off-the-shelf CPUs/GPUs to synthetically generate a light-field from a 3D model, followed by an array of spatial light modulators (SLMs) to convert the pixels into light rays, and finally an array of micro-lenses to angularly distribute the light over a 90° projection frustum. The result of the UPSD program was a full-parallax light-field architecture/design that was application agnostic and could be used for a variety of purposes, from battlespace visualization (as depicted in most science fiction movies) and medical simulation and training to, eventually, home entertainment and gaming.

The computational magnitude of light-field generation is often overlooked in the development of novel display technologies. It depends on a number of factors, including model/scene complexity in terms of triangle count and organization, the number of materials and render-state changes, and the capabilities of the render cluster. The choice of light-field display application interface can have a profound effect on the host application and the light-field visualization experience. This talk will review the light-field display architecture developed under the UPSD project, describe the light-field computation challenge and discuss the desire for extreme multi-viewpoint rendering.
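A naive estimate (assumed figures, not taken from the UPSD program) shows why re-rendering the full scene once per hogel quickly becomes intractable and why extreme multi-viewpoint rendering needs to share work across views:

    # Rough sketch of why naive multi-viewpoint rendering does not scale.
    # Figures are assumed for illustration; a real renderer would share geometry
    # processing across views rather than re-submitting the scene per viewpoint.
    hogels = 512 * 512           # one render pass per hogel in the naive approach
    triangles = 1_000_000        # assumed scene complexity
    fps = 30
    tri_per_sec = hogels * triangles * fps
    print(f"{tri_per_sec:.1e} triangle submissions per second")  # ~7.9e+12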

Overview of JPEG PLENO Activities

Walt Husak – Requirements Co-Chair of WG1 (JPEG)

The JPEG committee has been investigating next-generation image representations that move beyond simple planar representations. JPEG purposely named the effort “PLENO” as a reference to “plenoptics,” but also as a tribute to the Latin word for “complete.” The general concept of the effort is the creation of a mathematical representation that provides information not only about any point within a scene, but also about how that point changes when observed from different positions. This presentation will provide an overview of the project and discuss the current status and expected deliverables of the JPEG PLENO program.
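Such a representation is commonly formalized as the plenoptic function, which records the radiance arriving at every position from every direction, optionally over wavelength and time. This is the standard textbook formalization, not necessarily the exact representation JPEG PLENO will adopt:

    L = P(x, y, z, \theta, \phi, \lambda, t)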

SMPTE CompCine Standards

Howard Lukk, Director of Engineering and Standards, SMPTE

An overview of Standards and the Standards Process as they apply to Light Field Computational Cinematography (CompCine). Starting with stereoscopic standards and moving through to depth maps, this presentation will follow the recent evolution of CompCine up to its present state and consider what could be envisioned for upcoming areas of standardization.

Toward MPEG’s Vision of Immersive Experiences

Arianne T. Hinds, Ph.D., Video & Standards Strategy, CableLabs; Chair, INCITS L3.1 MPEG Development Activity for the United States

In keeping with its tradition of addressing industry’s needs for standards that compress, deliver, and signal visual and auditory multimedia, MPEG has launched activities to address both short-term and longer-term virtual reality applications, including light fields. This talk will provide a general overview of MPEG’s vision for developing standards that facilitate immersive experiences, in particular for light fields.

The Light Field Streaming Ecosystem

Jules Urbach, CEO, OTOY

Most light field capture cameras today output to a 2D monitor. This doesn’t work if we want light field content, both volumetric and holographic, to be displayed in true 3D formats as well. This talk will address some of the issues associated with formatting, compression, distribution and processing for a wide range of input formats to a range of output displays, and the possibility that one or more of these formats will need to be standardized.

Introduction to the VR Industry Forum

David Price, Facilitator, VR Industry Forum

The VR Interest Group is changing to the VR Industry Forum. This presentation will describe efforts to support VR industry standardization, communication and commercialization. The overlap with light field activities and SMFoLD will be explained.

Time           Session           Speaker
9:00 - 9:20    Introduction      Chris Chinnock, Insight Media
9:20 - 9:45    Acquisition       Jon Karafin, Lytro
9:45 - 10:10   Acquisition       Siegfried Foessel, Fraunhofer IIS
10:10 - 10:30  Coffee break
10:30 - 10:55  Displays          Lloyd LaComb, University of Arizona/TIPD
10:55 - 11:20  Displays          Tommy Thomas, Third Dimension Technologies
11:20 - 11:45  Displays          Thomas Burnett, FoVI3D
11:45 - 1:15   Lunch
1:15 - 1:40    Standards         Walt Husak, Dolby (JPEG PLENO standardization)
1:40 - 2:05    Standards         Howard Lukk, SMPTE (standardization)
2:05 - 2:30    Standards         Arianne Hinds, CableLabs (MPEG standardization)
2:30 - 2:45    Coffee break
2:45 - 3:10    More              Jules Urbach, OTOY
3:10 - 3:35    More              David Price, VR Industry Forum
3:35 - 4:30    Panel Discussion