Realtime 3D 360-degree Telepresence with Deep-learning-based Head-Motion Prediction
Tamay Aykut, Jingyi Xu, Eckehard Steinbach
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2019
The acceptance and dissemination of 360° telepresence systems are severely restricted by the appearance of motion sickness. Such systems consist of a client side, where the user wears a Head-Mounted Display (HMD), a server side, which provides a 3D 360° visual representation of the remote scene, and a communication network in between. Due to the physically unavoidable latency, there is often a noticeable lag between head motion and visual response. If the sensory information from the visual system is not consistent with the perceived ego-motion of the user, the emergence of visual discomfort is inevitable. In this paper, we present a delay-compensating 3D 360° vision system, which provides omnistereoscopic vision and a significant reduction of the perceived latency. We formally describe the underlying problem and provide an algebraic description of the amount of achievable delay compensation for both perspective and fisheye camera systems, considering all three degrees of freedom. We further propose a generic approach, which is agnostic to the underlying camera system. Additionally, a novel deep-learning-based head-motion prediction algorithm is presented to further improve the compensation rate. Using the naive approach, where neither compensation nor prediction is applied, we obtain a mean compensation rate of 72.8% for investigated latencies between 0.1-1.0s. Our proposed generic delay-compensation approach, combined with our novel deep-learning-based head-motion prediction approach, achieves a mean compensation rate of 97.3%. The proposed technique also substantially outperforms prior head-motion prediction techniques, both traditional and deep-learning-based.
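The compensation rate reported in the abstract can be read as the fraction of the latency-induced viewpoint error that the system removes. The following is a minimal sketch of one plausible way to quantify this for a single yaw axis; the function names and the exact metric definition are assumptions for illustration, not the paper's formulation:

```python
import math

def angular_error(yaw_a, yaw_b):
    """Smallest absolute difference between two yaw angles, in degrees."""
    d = abs(yaw_a - yaw_b) % 360.0
    return min(d, 360.0 - d)

def compensation_rate(true_yaw, naive_yaw, compensated_yaw):
    """Fraction of the naive latency-induced error removed by compensation.

    true_yaw:        the user's actual head orientation at display time
    naive_yaw:       orientation of the delayed, uncompensated frame
    compensated_yaw: orientation after delay compensation / prediction
    """
    err_naive = angular_error(true_yaw, naive_yaw)
    err_comp = angular_error(true_yaw, compensated_yaw)
    if err_naive == 0.0:
        return 1.0  # no error to compensate
    return 1.0 - err_comp / err_naive

# Hypothetical example: the user has turned to 30 deg, the delayed frame was
# rendered for 10 deg, and prediction re-rendered the view at 28 deg.
rate = compensation_rate(30.0, 10.0, 28.0)
print(f"{rate:.0%}")  # 90% of the latency-induced error is removed
```

Under this reading, the reported jump from 72.8% to 97.3% corresponds to the residual orientation error shrinking by roughly a factor of ten once prediction and generic compensation are combined.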