SMFoLD Workshop Very Well Received

We had a great workshop. Over 90 industry leaders signed up for the SMFoLD workshop and attendance was high. The discussion was lively and many ideas were floated. We heard back from a number of participants that they were very pleased with the workshop and learned a lot about the status of the light field ecosystem. The only complaint we heard was that the event was not well enough publicized, as many others would have attended. In fact, the modest promotion was deliberate: our goal was a smaller forum that would foster better information interchange – which we accomplished. Below is a brief synopsis of the presentations delivered at the workshop.

Chris Chinnock of Insight Media gave an introduction and served as moderator. The introduction presented an overview of the purpose of the workshop and the issues identified by the primary sponsor, the Air Force Research Laboratory (AFRL). The attendees were asked to consider how to address these issues, such as the lack of a light field streaming media standard, and how best to facilitate its development.

Following Chris’ introduction was a presentation by Jon Karafin of Lytro. The Lytro light field camera Jon described clearly piqued the interest of many of the attendees. It captures a large light field that allows directors to change a scene’s characteristics in post-production. The camera is a form of plenoptic camera and captures all of the 3D data within its field of view. He described a cloud-based post-production workflow that allows content creators to change the focal plane, depth of field, and eye point without having to reshoot the scene. The workflow also builds a complete 3D model of the scene, allowing easy interaction with special effects to completely change the look of the acquired scene, if needed. The data set for a single shot is much too large to stream in near real time with current technology, however. Jon’s presentation seemed to be the most anticipated of the workshop.

Siegfried Foessel of Fraunhofer IIS then discussed the advantages of 3D capture and the challenges created by the large data set required to record video in 3D. His team is working with an array of cameras, and he described their findings on using different data representations for recording, manipulating, editing, and ultimately viewing 3D video.

After a brief break for coffee, Lloyd LaComb gave a very entertaining presentation showing a couple of methods for creating light field displays. He and his team have created a full parallax holographic display system that uses a special photorefractive polymer. Another approach he described uses a technique developed at MIT to produce a horizontal parallax only (HPO) display using acousto-optic modulators; his team achieved full parallax with the addition of electro-optic modulators for the vertical parallax. He also reiterated the problem identified in the first two presentations: the amount of data required for a true 3D visual experience is far more than can be streamed today – 115 Tbps!
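For context on where a number like that comes from, here is a rough back-of-envelope sketch in Python. Every parameter below is an illustrative assumption, not a figure from the talk; the point is simply that an uncompressed full parallax light field lands in the tens of terabits per second.

    # Back-of-envelope raw data rate for a full parallax light field.
    # Every parameter is a hypothetical assumption for illustration only.
    spatial_x, spatial_y = 1920, 1080   # assumed spatial resolution (hogels)
    views_x, views_y = 256, 256         # assumed angular views per hogel
    bits_per_pixel = 24                 # assumed uncompressed 8-bit RGB
    frames_per_second = 30              # assumed refresh rate

    bits_per_frame = spatial_x * spatial_y * views_x * views_y * bits_per_pixel
    bits_per_second = bits_per_frame * frames_per_second
    print(f"raw rate: {bits_per_second / 1e12:.0f} Tbps")  # prints: raw rate: 98 Tbps

Even with these modest assumptions, the raw rate lands in the same order of magnitude as the figure quoted in the talk, which is why the data problem was a recurring theme throughout the day.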

Another 3D display technology was presented by C. E. (Tommy) Thomas of Third Dimension Technologies (TDT). TDT’s system is described as a Holographic Angular Slice 3D (HAS3D) display. The technology is based on multiple projectors and an asymmetric diffuser that directs light to the viewer in such a way that each eye receives images from a different projector, with each projector showing the scene from a different angle. Since this display is HPO, the data requirements are much smaller. Providing full parallax requires the square of the number of views needed for an HPO system; for example, an HPO display with 20 projectors would have to scale to 400 projectors to provide full parallax. TDT will be showing their display at I/ITSEC (booth 2848) November 28 through December 1, 2016.
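That square-law scaling is easy to sanity-check with a couple of lines of Python (the 20-projector HPO system is the example above; treating each projector’s output as equal-sized is a simplifying assumption of mine):

    # Square-law scaling from HPO to full parallax, per the rule of thumb above.
    hpo_views = 20                  # one horizontal view per projector
    full_views = hpo_views ** 2     # 20 horizontal x 20 vertical views
    print(full_views)               # 400 projectors
    # With equal-sized views, raw bandwidth grows by the same factor:
    print(full_views // hpo_views)  # 20x the data of the HPO system

This quadratic growth the moment vertical parallax is added is precisely the appeal of HPO designs like TDT’s.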

The last of the display technologies previewed was the full parallax system from FoVI3D, presented by Thomas Burnett. Thomas led the audience through the history of FoVI3D and the development stages of their display. He also explained in some detail the need for 3D in understanding the world around us. He went on to describe FoVI3D’s suggestion for a first draft solution based on a simplified, graphics-primitive-level API similar to OpenGL, much like what TDT would like to see. The FoVI3D display is a significant technological breakthrough, and they have some very interesting ideas for the future.

After lunch there were three presentations from existing standards bodies: Walt Husak with the JPEG Pleno group, Howard Lukk of SMPTE, and Arianne Hinds representing the MPEG group. All three reported on the current state of standards development in areas related to light fields.

Walt provided an overview of some of the work the JPEG Pleno group has accomplished and emphasized the need to start small. His suggestion is that anybody attempting to develop a streaming standard should start with the simplest use cases and build on that foundation.

The SMPTE perspective was provided by Howard Lukk. Howard gave an overview of the SMPTE organization and the procedures for adopting a new standard. He introduced a new name for the light field standards being developed, Light Field Computational Cinematography or “CompCine”. He also provided a vision for what future standards might look like.

MPEG reports making good progress toward standards to support virtual reality, with a short-term focus on 360° video with three degrees of freedom (DoF). In her presentation, Arianne Hinds described MPEG’s long-term goal of supporting video with six DoF. Currently, the group is working on point cloud distribution.

Jules Urbach, CEO of OTOY, described a complete system for generating data in multiple formats and converting it to formats that can be transmitted and displayed using a variety of techniques. He emphasized the use of mesh data along with textures and shader technologies to provide enough realism for a very convincing immersive experience. He is also proposing to offer this technology to JPEG and MPEG for consideration as a light field streaming media standard.

The last presentation of the workshop was given by David Price of the VR Interest Group. He described the mission and activities of this organization, which will soon morph into a more formal VR Industry Forum. The group has many ongoing tasks, and there is clear overlap with light field activities as well.

After the presentations and a final coffee break, the presenters sat down for a Q&A session and a discussion of the principal obstacles to achieving the goal of live streaming of 3D video. The general consensus seemed to be that the amount of data required to reproduce 3D scenery is the biggest hurdle. Much work has yet to be done to find ways of storing and transmitting the data needed to represent the real world with sufficient accuracy. Online gaming systems depend on downloading models and storing the data locally for reuse, but that paradigm is not suitable for live video. Also, the needs of the broadcast industry do not necessarily align with those of the military or other big data interests. More discussion and collaboration is needed.

The general conclusion is that a group should be formed to coordinate the efforts of the various stakeholders and existing standards bodies to ultimately bring the dream of streaming true 3D visualization to the marketplace. We hope to have more news on this front soon.
