Introduction to 360, VR, AR and MR.

(a quick primer on terminology and technology)

With the excitement around the upcoming FCPX update, people have started to talk about the need for new ‘VR’ functionality.
To make things short: FCPX is an NLE, not a VR authoring tool – at least for now, and that includes the next update.

Final Cut Pro 10.4 is going 360 – what is the difference?

In order to make sense of what is to come and give some insight, I have to introduce you to some possibly new terminology that is being thrown around.

Virtual Reality – VR – the V stands for virtual, logically. This term generally refers to interactive content akin to gaming. You can move around the scene and objects react to you, but VR is not limited to this; the boundaries are very fluid and change from application to application.

VR can also relate to the viewing experience, where IR – immersive reality – would be a better-suited term: a world you experience inside a viewing device – an HMD for short (more about this in a sec.) –
or a spatial structure where content is projected around you. Whether this is a spherical, cylindrical or cubic installation does not matter for the term.

Back to Final Cut Pro for a moment. The upcoming 10.4 version will open the door to 360 content.

Single-point, omnidirectional content.

Content is captured by a single 360 camera, a camera array, or generated as CG; again, the boundaries are fluid from project to project. Others like Alex4D have written good articles in the past days outlining the significance of 360 / equirectangular post-processing and its quirks.
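
To make ‘equirectangular’ a bit more concrete: the projection simply spreads longitude across the width of the frame and latitude across the height. Here is a minimal sketch of that mapping (the helper name and axis convention are my own illustration, not anything from FCPX or a specific camera):

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a unit viewing direction (x, y, z) to pixel coordinates in an
    equirectangular frame. Assumed convention: +z forward, +x right, +y up."""
    lon = math.atan2(x, z)                     # horizontal angle, -pi..pi
    lat = math.asin(max(-1.0, min(1.0, y)))    # vertical angle, -pi/2..pi/2
    u = (lon / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / math.pi) * (height - 1)
    return u, v

# Looking straight ahead lands in the centre of a 4096 x 2048 frame:
print(direction_to_equirect(0.0, 0.0, 1.0, 4096, 2048))  # ~(2047.5, 1023.5)
```

This is also why straight lines look bent in a flat view of the footage: equal angles, not equal distances, get equal numbers of pixels.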

Single point refers to an immobile camera that captures an omnidirectional image. Now again, there are no hard boundaries, since cameras can move – but!

With camera movement, time moves; in 360 content you can’t separate the two (yet). You scrub forwards and backwards in a clip and seem to gain spatial control, but time is locked to the position – people walk backwards or forwards. Some clever concepts in compositing can of course overcome some of this.

This is where the dividing line between VR and 360 becomes apparent. VR content is in general dynamically generated by a real-time engine like Unreal, Unity or others. Spatial movement is detached from time.

Now a bit of technical terminology to pave the way for more interesting stuff.

HMD – Head-Mounted Display. Goggles, glasses… contraptions worn on the head. The HTC Vive, Oculus Rift and Microsoft HoloLens are all HMDs.

Now the latter is an AR/MR HMD, whereas the first two examples are per se VR HMDs. That statement can be false, though, because one could inject real-time captured video and overlay it with VR content; that would make it an MR – mixed reality – application. Confusing?

IMU – now this is a handful – Inertial Measurement Unit. Why do we need to know this? Because it’s the device that translates movement (head movement) into POV (point of view). Most of today’s devices use accelerometers and gyroscopes for this, akin to the sensors used in phones.

IMUs are not absolute tracking devices, meaning they can only give an offset from the starting point to the end point. Therefore it is not possible to match content to an exact position or overlay it with reality.
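
A tiny sketch of what ‘only an offset’ means in practice: the headset integrates rate samples from the IMU, so the orientation it reports is always relative to wherever you started, and any sensor bias keeps accumulating as drift. The values and the single-axis simplification are purely illustrative:

```python
def integrate_yaw(rates_deg_per_s, dt, bias_deg_per_s=0.0):
    """Integrate gyro yaw-rate samples into a heading.
    The result is an offset from the starting pose, never an absolute position."""
    yaw = 0.0  # 'zero' is simply wherever the head happened to be at start-up
    for rate in rates_deg_per_s:
        yaw += (rate + bias_deg_per_s) * dt
    return yaw % 360.0

samples = [10.0] * 100  # head turning at 10 deg/s, sampled at 100 Hz for one second
print(integrate_yaw(samples, dt=0.01))                      # ~10 degrees of offset
print(integrate_yaw(samples, dt=0.01, bias_deg_per_s=0.5))  # the same motion plus drift
```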

Smarter sensor stacks solve this problem. They are based on a combination of IMUs and other technology, such as the new Apple TrueDepth camera – which is not new per se, it’s a micro Microsoft Kinect. These devices capture images and 3D data of the environment, which can lead to extremely accurate locking of the surroundings to the content. This is achieved by sending out a ‘cloud’ of invisible infrared dots into the world. A special camera (TOF – time of flight) then measures the time it took each dot to travel out and reflect back, giving the device an accurate point cloud of 3D data. Smart algorithms then calculate surfaces, edges and movements in order to place content – AR / MR / XR content.
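
The time-of-flight part is plain arithmetic: light travels out and back, so the distance to a dot is the speed of light times half the round-trip time, and scaling the dot’s ray direction by that distance gives one point of the cloud. A rough sketch with illustrative numbers (not the actual sensor pipeline):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Distance to the reflecting surface for one infrared dot."""
    return C * round_trip_seconds / 2.0

def point_from_dot(direction, round_trip_seconds):
    """Turn one measured dot into a 3D point: unit ray direction scaled by distance."""
    d = tof_distance(round_trip_seconds)
    return tuple(c * d for c in direction)

# A dot straight ahead that returns after roughly 6.67 nanoseconds is about 1 m away.
print(tof_distance(6.67e-9))                     # ~1.0 m
print(point_from_dot((0.0, 0.0, 1.0), 6.67e-9))  # ~(0.0, 0.0, 1.0)
```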

AR – augmented reality – content is superimposed.
MR – mixed reality – parts of the image are replaced or altered; skies become green.
XR – extended reality – a hybrid of all.

Now back to 10.4. At first glance it only covers the first step. But this is not the full story; it forms a vital link in the future production chain. Sky domes, the spherical boundaries of worlds, can be mapped with spherical content – a scene under water, in the desert, on Mars. Later a real-time engine can use this content and map it into a CG VR scene.

And again, the boundaries are up to your imagination. A chroma-keyed character is captured in 360 space, then used in a virtual VR / AR / MR / XR scene. With the help of IMUs they are tracked and accurately mapped.

Final Cut Pro 10.4 will be a key tool in these processes. Not only does it have all the tools to manipulate and process the content, it has built-in HMD preview support. You can hook an HMD up to your station and experience content as you edit. The IMU sends data back to FCPX for aligning the position and spatial orientation of the content.
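
Conceptually, that alignment loop is small: the IMU’s yaw and pitch define a forward vector, and that vector tells the preview which region of the equirectangular frame the viewer is facing. A minimal sketch under the same assumed conventions as the earlier mapping example (this is not FCPX’s actual API):

```python
import math

def head_pose_to_direction(yaw_deg, pitch_deg):
    """Convert IMU yaw/pitch (degrees) into a unit forward vector.
    Assumed convention: yaw 0 / pitch 0 looks straight ahead along +z."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return x, y, z

# Feed the result into direction_to_equirect() from the earlier sketch to find
# which part of the 360 frame the HMD should currently show.
print(head_pose_to_direction(90.0, 0.0))  # looking to the right: ~(1, 0, 0)
```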

The story does not end here, and it won’t. I could write another couple of dozen pages on where this will lead and how it meshes with other technologies: occlusion maps; real-time color transforms that analyze live scenes and match the content’s color correction to them – an important step in blending content seamlessly; analysis of lighting conditions, not only color temperature but also the angle, quality and type of lights.

I hope this helps you understand a bit of where we are heading.
