Seminar day, December 2017

On Tuesday 19 December we welcome four speakers for a day of seminars devoted to immersive video.
Date: 19/12/2017
Location: room B567, Barrault site

Agenda:
10h00-11h15: Andrei Purica (LTCI): View synthesis and reconstruction from compressed video

11h15-12h30: J. Li (Polytech’Nantes): Quality of Experience in Immersive Multimedia: Challenges, Methods and Perspectives

13h30-14h45: C. Ozcinar (Trinity College Dublin): Immersive Virtual Reality Media Communication

14h45-16h00: A. Fiandrotti (Politecnico di Torino): Immersive video communications: a perspective from the network edge

Abstracts:
1) View synthesis and reconstruction from compressed video

Following the recent “boom” in connectivity, video has become the most in-demand form of multimedia: recent studies from Cisco show that it accounted for 64% of all Internet traffic in 2014, with a predicted 80% by 2020. This high demand has also fueled a continuous evolution of display, transmission and compression technologies, creating a situation where the end user can easily access video content from a plethora of devices with various resolutions. Furthermore, the same video sequence can usually be found at different resolutions and compression levels across various cloud multimedia service providers. In addition to evolving existing technologies, there is also a lot of interest in finding the best way to provide a so-called immersive multimedia experience. Several solutions have been investigated over the past years, and the Multi-View video plus Depth format, combined with view synthesis algorithms, has emerged as a promising one. In this presentation I will discuss several new approaches to view synthesis and view reconstruction and show how they are used in immersive and 2D video compression and transmission systems. First, I explore the use of temporal correlations in combination with traditional Depth-Image-Based Rendering (DIBR) techniques and propose several approaches to common problems of DIBR-type algorithms, which are shown to improve synthesis quality. I also investigate the problem of multi-source video reconstruction and propose a model-based framework that uses primal-dual splitting proximal convex optimization algorithms to enhance the quality and resolution of videos from multiple sources with possibly different resolutions and compression levels. A discussion of the emerging 3D 360° video formats and the relevance of synthesis methods in this scenario concludes the presentation.
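The DIBR technique mentioned in the abstract can be sketched in a few lines: each reference pixel is back-projected to 3D using its depth value and re-projected into the virtual camera, with a z-buffer so nearer surfaces win. This is a minimal, illustrative sketch only (all names, a shared intrinsics matrix and a simple pinhole model are assumptions, not the speaker's actual method); real DIBR pipelines add sub-pixel splatting and hole inpainting for disocclusions.

```python
import numpy as np

def dibr_forward_warp(color, depth, K, R, t):
    """Toy forward DIBR warp of a reference view into a virtual view.

    Illustrative inputs: color (H, W, 3) reference image, depth (H, W) in the
    reference camera frame, K shared 3x3 intrinsics, R/t pose of the virtual
    camera relative to the reference one.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])     # 3 x N homogeneous
    pts = np.linalg.inv(K) @ pix * depth.ravel()               # back-project to 3D
    proj = K @ (R @ pts + t.reshape(3, 1))                     # re-project
    z = proj[2]
    keep = z > 1e-6                                            # in front of camera
    uu = np.round(proj[0, keep] / z[keep]).astype(int)
    vv = np.round(proj[1, keep] / z[keep]).astype(int)
    inb = (0 <= uu) & (uu < W) & (0 <= vv) & (vv < H)          # inside the frame
    uu, vv, zz = uu[inb], vv[inb], z[keep][inb]
    src = color.reshape(-1, 3)[keep][inb]
    # Z-buffer by splatting far-to-near: nearer pixels overwrite farther ones.
    order = np.argsort(-zz)
    out = np.zeros_like(color)
    out[vv[order], uu[order]] = src[order]
    return out  # black pixels are disocclusion holes, left for inpainting
```

With the identity pose and unit depth the warp reproduces the input, which is a quick sanity check before experimenting with real camera poses.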

2) Quality of Experience in Immersive Multimedia: Challenges, Methods and Perspectives

Immersive multimedia aims to improve people’s viewing experience, seeking greater immersiveness and naturalness. The development of 3DTV, Virtual Reality (VR) and Augmented Reality (AR) are recent illustrative examples of this trend. Quality of Experience (QoE) in immersive multimedia encompasses multiple perceptual dimensions.
For instance, in 3DTV, three primary dimensions have been identified: image quality, depth quality and visual comfort. In VR/AR, dynamic viewing and interaction with the virtual and real worlds are new ingredients. This talk focuses on the most advanced immersive multimedia technologies and studies one basic question about QoE: how can QoE be assessed subjectively and reliably while accounting for its multidimensional nature? The talk will show why the traditional standardised 2D subjective quality assessment methods no longer work, and what the possible solutions are. Applications of the proposed methodology to QoE of immersive multimedia are introduced, and some interesting research directions are discussed.
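For context, the traditional 2D subjective methods the abstract refers to typically collect ratings on a 5-point scale and report a Mean Opinion Score (MOS) with a confidence interval per stimulus. A minimal sketch, with illustrative rating data and a normal-approximation interval (standardised protocols such as ITU-R BT.500 use the Student-t distribution for small panels):

```python
import math

def mos_with_ci(ratings, z=1.96):
    """Mean Opinion Score and ~95% confidence interval for one stimulus.

    ratings: integer scores on a 5-point ACR scale (1 = bad .. 5 = excellent).
    Returns (mos, half_width); the interval is mos +/- half_width.
    """
    n = len(ratings)
    mos = sum(ratings) / n
    var = sum((r - mos) ** 2 for r in ratings) / (n - 1)   # sample variance
    half_width = z * math.sqrt(var / n)
    return mos, half_width

# Hypothetical ratings from eight observers for one test sequence.
scores = [4, 5, 3, 4, 4, 5, 4, 3]
mos, ci = mos_with_ci(scores)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")   # MOS = 4.00 +/- 0.52
```

A single MOS collapses quality to one number, which is precisely what becomes problematic once QoE has several perceptual dimensions, as the talk argues.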

3) Immersive Virtual Reality Media Communication

4) Immersive video communications: a perspective from the network edge

Recent advances in image acquisition, communication and display are fostering renewed interest in immersive video communications. On the source side, multicamera-equipped handheld devices and omnidirectional camcorders may be the key to many-degrees-of-freedom image acquisition. On the receiver side, HDR and lightfield technologies may enable high-quality, eyestrain-free, realistic image display. Existing xDSL and the future 5G standards will provide the required bandwidth-rich, low-latency channels to residential and roaming users.
Regardless of the specific technologies that will fuel the immersive communications of tomorrow, the practical implementation of such a scenario over the cloud-centric, centralized Internet of today poses a number of issues. Increased bandwidth requirements for multiview video delivery will stretch the capabilities of core distribution networks. The computational complexity of, for example, generating virtual perspectives may exceed the resources available at the receiver, besides being suboptimal from a power-efficiency perspective.
In order to tackle these issues, an appealing solution is to relocate storage and computational capabilities from the network core and from user terminals towards the network edge. Future 5G networks will likely boast unprecedented cell density, offering the opportunity to pervasively cache bandwidth-intensive content at the base stations. At the same time, recent advances in embedded parallel computing will enable offloading a number of image processing tasks to the network edge.
This presentation will briefly outline some of the potential of network-coding-based video delivery and deep-learning-based image processing for video caching and processing at the network edge, respectively, discussing some of the issues to be addressed and presenting example applications.
