From 966a2945383f35a0f5198b2edd553a88bfc800bb Mon Sep 17 00:00:00 2001 From: Thomas Forgione Date: Thu, 17 Oct 2019 10:24:18 +0200 Subject: [PATCH] Some fixes --- src/state-of-the-art/3d-interaction.tex | 2 +- src/state-of-the-art/3d-streaming.tex | 2 +- src/state-of-the-art/intro.tex | 7 ++++--- src/state-of-the-art/video.tex | 2 +- 4 files changed, 7 insertions(+), 6 deletions(-) diff --git a/src/state-of-the-art/3d-interaction.tex b/src/state-of-the-art/3d-interaction.tex index dfa984c..fb0ad7b 100644 --- a/src/state-of-the-art/3d-interaction.tex +++ b/src/state-of-the-art/3d-interaction.tex @@ -3,7 +3,7 @@ \section{3D Bookmarks and Navigation Aids} Devising an ergonomic technique for browsing 3D environments through a 2D interface is difficult. -Controlling the viewpoint in 3D (6 DOFs) with 2D devices is not only inherently challenging but also strongly task-dependent. In their recent review,~\citep{interaction-3d-environment} distinguish between several types of camera movements: general movements for exploration (e.g., navigation with no explicit target), targeted movements (e.g., searching and/or examining a model in detail), specified trajectory (e.g., a cinematographic camera path), etc. +Controlling the viewpoint in 3D (6 DOFs) with 2D devices is not only inherently challenging but also strongly task-dependent. In their review, \citet{interaction-3d-environment} distinguish between several types of camera movements: general movements for exploration (e.g., navigation with no explicit target), targeted movements (e.g., searching and/or examining a model in detail), specified trajectory (e.g., a cinematographic camera path), etc. For each type of movement, specialized 3D interaction techniques can be designed. In most cases, rotating, panning, and zooming movements are required, and users are consequently forced to switch back and forth among several navigation modes, leading to interactions that are too complicated overall for a layperson. 
Navigation aids and smart widgets are required and subject to research efforts both in 3D companies (see \url{sketchfab.com}, \url{cl3ver.com} among others) and in academia, as reported below. diff --git a/src/state-of-the-art/3d-streaming.tex b/src/state-of-the-art/3d-streaming.tex index 95d2499..65ad84b 100644 --- a/src/state-of-the-art/3d-streaming.tex +++ b/src/state-of-the-art/3d-streaming.tex @@ -115,7 +115,7 @@ Once the set of objects that are likely to be accessed by the user is determined A simple approach is to retrieve the objects based on distance: the spatial distance from the user's virtual location and rotational distance from the user's view. More recently, Google integrated Google Earth 3D module into Google Maps. -Users are now able to go to Google Maps, and click the 3D button which shifts the camera from the vertical point of view. +Users are now able to go to Google Maps and click the 3D button, which shifts the camera away from the top-down view. Even though there are no associated publications, it seems that the interface does view dependent streaming: low resolution from the center of the point of view gets downloaded right away, and then, data farther away or higher resolution data gets downloaded. In the same vein, \citep{3d-tiles} developed 3D Tiles, is a specification for visualizing massive 3D geospatial data developed by Cesium and built on top of glTF\@. diff --git a/src/state-of-the-art/intro.tex b/src/state-of-the-art/intro.tex index df7808a..9fc4d12 100644 --- a/src/state-of-the-art/intro.tex +++ b/src/state-of-the-art/intro.tex @@ -1,4 +1,5 @@ \fresh{} -In this chapter, we present the related work on topics similar to ours. -As discussed in the previous chapter, video and 3D share many similarities and that is why this chapter will start by a review on video streaming. -Then, we proceed with presenting 3D streaming, and we end with 3D navigation. +In this chapter, we present the related work on video and 3D. 
+As discussed in the previous chapter, video and 3D share many similarities, which is why this chapter starts with a review of video streaming. +Then, we present various 3D streaming techniques, including compression, trade-offs between geometry and texture, and viewpoint-dependent streaming. +We end this chapter by reviewing the related work on 3D navigation and interfaces. diff --git a/src/state-of-the-art/video.tex b/src/state-of-the-art/video.tex index 6353d71..c96dcb1 100644 --- a/src/state-of-the-art/video.tex +++ b/src/state-of-the-art/video.tex @@ -50,7 +50,7 @@ This is one of the DASH strengths: no powerful server is required, and since sta A client typically starts by downloading the MPD file, and then proceeds on downloading segments of the different adaptation sets that he needs, estimating itself its downloading speed and choosing itself whether it needs to change representation or not. \subsection{DASH-SRD} -DASH has already been adapted in the setting of video streaming. +DASH has already been adopted in the setting of video streaming. DASH-SRD (Spatial Relationship Description,~\citep{dash-srd}) is a feature that extends the DASH standard to allow streaming only a spatial subpart of a video to a device. It works by encoding a video at multiple resolutions, and tiling the highest resolutions as shown in Figure~\ref{sota:srd-png}. That way, a client can choose to download either the low resolution of the whole video or higher resolutions of a subpart of the video.