- pass on chapter 2
This commit is contained in:
parent
8433ee1b76
commit
8441e91398
src/plan.tex | 12
@@ -1,11 +1,11 @@
 \frontmatter{}
-\input{introduction/main}
+%\input{introduction/main}
 \mainmatter{}
-\input{foreword/main}
+%\input{foreword/main}
 \input{state-of-the-art/main}
-\input{preliminary-work/main}
-\input{dash-3d/main}
-\input{system-bookmarks/main}
+%\input{preliminary-work/main}
+%\input{dash-3d/main}
+%\input{system-bookmarks/main}
 \backmatter{}
-\input{conclusion/main}
+%\input{conclusion/main}
 
@@ -12,7 +12,7 @@ This is often known as point-of-interest (POI) movement (or \textit{go-to}, \tex
 Given such a point, the camera automatically moves from its current position to a new position that looks at the POI\@.
 One key issue of these techniques is to correctly orient the camera at destination.
 In Unicam \citep{two-pointer-input}, the so-called click-to-focus strategy automatically chooses the destination viewpoint depending on 3D orientations around the contact point.
-The recent Drag'n Go interaction \citep{drag-n-go} also hits a destination point while offering control on speed and position along the camera path.
+The more recent Drag'n Go interaction \citep{drag-n-go} also hits a destination point while offering control on speed and position along the camera path.
 This 3D interaction is designed in the screen space (it is typically a mouse-based camera control), where cursor's movements are mapped to camera movements following the same direction as the on-screen optical-flow.
 
 \begin{figure}[ht]
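The screen-space control described in this hunk can be sketched as follows. This is a hypothetical, minimal illustration of a Drag'n Go-style mapping, where the fraction of cursor travel toward the on-screen POI drives the camera along its path toward the destination viewpoint; the linear mapping and all names are assumptions for illustration, not the technique's actual implementation.

```python
# Hypothetical Drag'n Go-style mapping: cursor progress toward the POI
# is mapped linearly onto the camera's path from its start position to
# the destination viewpoint.

def interpolate_camera(start_pos, dest_pos, drag_progress):
    """drag_progress in [0, 1]: fraction of cursor travel toward the POI."""
    t = max(0.0, min(1.0, drag_progress))  # clamp to the valid range
    return tuple(s + t * (d - s) for s, d in zip(start_pos, dest_pos))
```

Dragging halfway toward the POI thus places the camera halfway along its path, which matches the paper's stated goal of offering control on speed and position.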
@@ -1,7 +1,7 @@
 \section{3D Streaming\label{sote:3d-streaming}}
 
-In this thesis, our objective is to stream 3D scenes.
-Even though 3D streaming is not the most researched field, special attention has been brought to 3D content compression, in particular progressive compression which is a premise for 3D streaming.
+In this thesis, we focus on the objective of delivering massive 3D scenes over the network.
+While 3D streaming is not the most active field of research, special attention has been paid to 3D content compression, in particular progressive compression, which can be considered a premise for 3D streaming.
 In the next sections, we review the 3D streaming related work, from 3D compression and structuring to 3D interaction.
 
 \subsection{Compression and structuring}
@@ -10,7 +10,7 @@ According to \citep{maglo20153d}, mesh compression can be divided into four cate
 \begin{itemize}
 \item single-rate mesh compression, seeking to reduce the size of a mesh;
 \item progressive mesh compression, encoding meshes in many levels of resolution that can be downloaded and rendered one after the other;
-\item random accessible mesh compression, where parts of the models can be decoded in an arbitrary order;
+\item random accessible mesh compression, where different parts of the models can be decoded in an arbitrary order;
 \item mesh sequence compression, compressing mesh animations.
 \end{itemize}
 
@@ -18,7 +18,7 @@ Since our objective is to stream 3D static scenes, single-rate mesh and mesh seq
 This section thus focuses on progressive meshes and random accessible mesh compression.
 
 Progressive meshes were introduced in~\citep{progressive-meshes} and allow a progressive transmission of a mesh by sending a low resolution mesh first, called \emph{base mesh}, and then transmitting detail information that a client can use to increase the resolution.
-To do so, an algorithm, called \emph{decimation algorithm}, starts from the original full resolution mesh and iteratively removes vertices and faces by merging vertices through the so called \emph{edge collapse} operation (Figure~\ref{sote:progressive-scheme}).
+To do so, an algorithm, called \emph{decimation algorithm}, starts from the original full resolution mesh and iteratively removes vertices and faces by merging vertices through the so-called \emph{edge collapse} operation (Figure~\ref{sote:progressive-scheme}).
 
 \begin{figure}[ht]
 \centering
@@ -75,8 +75,8 @@ To do so, an algorithm, called \emph{decimation algorithm}, starts from the orig
 \caption{Vertex split and edge collapse\label{sote:progressive-scheme}}
 \end{figure}
 
-Every time two vertices are merged, a vertex and two faces are removed from the original mesh, decreasing the resolution of the model.
-After content preparation, the mesh consists in a base mesh and a sequence of partially ordered edge split operations.
+Every time two vertices are merged, a vertex and two faces are removed from the original mesh, decreasing the model resolution.
+At the end of this content preparation phase, the mesh has been reorganized into a base mesh and a sequence of partially ordered vertex split operations.
 Thus, a client can start by downloading the base mesh, display it to the user, and keep downloading refinement operations (vertex splits) and display details as time goes by.
 This process reduces the time a user has to wait before seeing a downloaded 3D object, thus increases the quality of experience.
 
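The base-mesh-plus-splits pipeline described in this hunk can be sketched as follows. This is a minimal client-side illustration assuming a toy record layout for vertex splits; it is not the actual encoding of progressive meshes, only the counting logic the text describes (each split re-adds one vertex and two faces).

```python
# Minimal sketch of client-side progressive refinement: the client
# renders the base mesh immediately, then applies vertex splits as they
# arrive. Each split undoes one edge collapse, re-adding one vertex and
# two faces. The split record layout here is an illustrative assumption.

class ProgressiveMesh:
    def __init__(self, base_vertices, base_faces):
        self.vertices = list(base_vertices)   # [(x, y, z), ...]
        self.faces = list(base_faces)         # [(i, j, k), ...]

    def apply_vertex_split(self, split):
        """Re-insert one vertex and the two faces removed with it."""
        new_index = len(self.vertices)
        self.vertices.append(split["position"])
        for a, b in split["wing_edges"]:      # the two edges flanking the split
            self.faces.append((a, b, new_index))
        return new_index
```

A client would call `apply_vertex_split` once per refinement record received, so resolution increases as the download progresses.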
@@ -86,7 +86,7 @@ This process reduces the time a user has to wait before seeing a downloaded 3D o
 \caption{Four levels of resolution of a mesh~\citep{progressive-meshes}}
 \end{figure}
 
-These methods have been vastly researched \citep{bayazit20093,mamou2010shape}, but very few of these methods can handle meshes with attributes, such as texture coordinates.
+%These methods have been vastly researched \citep{bayazit20093,mamou2010shape}, but very few of these methods can handle meshes with attributes, such as texture coordinates.
 
 \citep{streaming-compressed-webgl} develop a dedicated progressive compression algorithm based on iterative decimation, for efficient decoding, in order to be usable on web clients.
 With the same objective, \citep{pop-buffer} proposes pop buffer, a progressive compression method based on quantization that allows efficient decoding.
@@ -96,7 +96,7 @@ In \citep{batched-multi-triangulation}, the authors propose Nexus: a GPU optimiz
 It is notably used in 3DHOP (3D Heritage Online Presenter, \citep{3dhop}), a framework to easily build web interfaces to present 3D objects to users in the context of cultural heritage.
 
 Each of these approaches define its own compression and coding for a single mesh.
-However, users are frequently interested in scenes that contain many meshes, and the need to structure content emerged.
+However, users are often interested in scenes that contain multiple meshes, and the need to structure content emerged.
 
 To answer those issues, the Khronos group proposed a generic format called glTF (GL Transmission Format,~\citep{gltf}) to handle all types of 3D content representations: point clouds, meshes, animated model, etc.\
 glTF is based on a JSON file, which encodes the structure of a scene of 3D objects.
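As a concrete illustration of the glTF structure mentioned in this hunk, the following shows a minimal scene description, built as a Python dict and serialized to JSON. The field names follow the public glTF 2.0 JSON layout (asset, scenes, nodes, meshes); buffers, accessors, and materials are omitted for brevity.

```python
# Minimal glTF 2.0-style scene skeleton: a JSON document describing a
# scene graph whose nodes reference meshes. Binary geometry would live
# in external buffers referenced by accessors (omitted here).
import json

minimal_gltf = {
    "asset": {"version": "2.0"},
    "scene": 0,                                # index of the default scene
    "scenes": [{"nodes": [0]}],                # scene -> root node indices
    "nodes": [{"name": "root", "mesh": 0}],    # node -> mesh index
    "meshes": [{"primitives": [{"attributes": {"POSITION": 0}}]}],
}

document = json.dumps(minimal_gltf, indent=2)
```

The JSON file thus acts as an index over the scene: a streaming client can parse it first, then fetch the referenced geometry buffers on demand.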
@@ -119,25 +119,25 @@ Their approach works well for several objects, but does not handle view-dependen
 \subsection{Viewpoint dependency}
 
 3D streaming means that content is downloaded while the user is interacting with the 3D object.
-In terms of quality of experience, it is desirable that the downloaded content is visible to the user.
-This means that the progressive compression must allow a decoder to choose what it needs to decode, and to guess what it needs to decode according to the users point of view.
+In terms of quality of experience, it is desirable that the downloaded content falls into the user's field of view.
+This means that the progressive compression must encode spatial information in order to allow the decoder to determine content adapted to its viewpoint.
 This is typically called \emph{random accessible mesh compression}.
 \citep{maglo2013pomar} is such an example of random accessible progressive mesh compression.
-\citep{cheng2008receiver} proposes a receiver driven way of achieving viewpoint dependency with progressive mesh: the client starts by downloading the base mesh, and then is able to estimate the importance of vertex splits and choose which ones to download.
-Doing so reduces drastically the server computational load, since it only has to send data, and improves the scalability of this framework.
+\citep{cheng2008receiver} proposes a receiver-driven way of achieving viewpoint dependency with progressive meshes: the client starts by downloading the base mesh, and is then able to estimate the importance of the different vertex splits, in order to choose which ones to download.
+Doing so drastically reduces the server computational load, since it only has to send data, and improves the scalability of this framework.
 
-In the case of streaming a large 3D scene, viewpoint dependent streaming is a must-have: a user will only be seeing one small portion of the scene at each time, and a system that does not adapt its streaming to the user's point of view is bound to have poor quality of experience.
+In the case of streaming a large 3D scene, view-dependent streaming is fundamental: a user only sees a small portion of the scene at any given time, and a system that does not adapt its streaming to the user's point of view is bound to induce a low quality of experience.
 
-A simple way to implement viewpoint dependency is to access the content near the user's camera.
+A simple way to implement viewpoint dependency is to request the content that is spatially close to the user's camera.
 This approach, implemented in Second Life and several other NVEs (e.g.,~\citep{peer-texture-streaming}), only depends on the location of the avatar, not on its viewing direction.
 It exploits spatial coherence and works well for any continuous movement of the user, including turning.
 Once the set of objects that are likely to be accessed by the user is determined, the next question is in what order should these objects be retrieved.
 A simple approach is to retrieve the objects based on distance: the spatial distance from the user's virtual location and rotational distance from the user's view.
 
 More recently, Google integrated Google Earth 3D module into Google Maps.
-Users are now able to go to Google Maps, and click the 3D button which shifts the camera from the top-down view.
-Even though there are no associated publications, it seems that the interface does view dependent streaming: low resolution from the center of the point of view gets downloaded right away, and then, data farther away or higher resolution data gets downloaded since it appears at a later time.
-The choice of the nearby can be based based on an a priori, discretized, partitioned version of the environment; for example, \citep{3d-tiles} developed 3D Tiles, a specification for visualizing massive 3D geospatial data developed by Cesium and built on top of glTF\@.
+Users are now able to go to Google Maps, and click the 3D button which shifts the camera from the aerial view.
+Even though there are no associated publications to support this assertion, it seems clear that the streaming is view-dependent: low-resolution content at the center of the viewpoint gets downloaded first, and higher resolution data gets downloaded for closer objects than for distant ones.
+Determining the nearby content can be based on an a priori, discretized, partitioned version of the environment; for example, \citep{3d-tiles} developed 3D Tiles, a specification for visualizing massive 3D geospatial data developed by Cesium and built on top of glTF\@.
 Their main goal is to display 3D objects on top of regular maps.
 
 \begin{figure}[ht]
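The distance-based retrieval order mentioned in this hunk can be sketched as follows. This is a hypothetical utility that combines the two distances the text names (spatial distance from the user's virtual location and rotational distance from the view direction); the weighted-sum cost and all names are assumptions, not a published system's policy.

```python
import math

# Hypothetical fetch-order utility: rank objects by a weighted sum of
# Euclidean distance from the camera and angular distance from the view
# direction (assumed to be a unit vector). Weights alpha/beta are
# illustrative, not taken from any cited system.

def retrieval_order(objects, cam_pos, view_dir, alpha=1.0, beta=2.0):
    def cost(obj):
        dx = [o - c for o, c in zip(obj["center"], cam_pos)]
        dist = math.sqrt(sum(d * d for d in dx))
        # Angle between the view direction and the direction to the object.
        cos_angle = sum(d * v for d, v in zip(dx, view_dir)) / (dist or 1.0)
        angle = math.acos(max(-1.0, min(1.0, cos_angle)))
        return alpha * dist + beta * angle
    return sorted(objects, key=cost)
```

With a positive `beta`, two equally distant objects are fetched front first, which is exactly the view-dependent behaviour the text argues for.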
@@ -149,19 +149,19 @@ Their main goal is to display 3D objects on top of regular maps.
 \subsection{Geometry and textures}
 
 As discussed in Chapter~\ref{f:3d}, most 3D scenes consists in two main types of data: geometry and textures.
-When addressing 3D streaming, one must handle the concurrency between geometry and textures, and a system needs to solve this compromise.
+When addressing 3D streaming, one must handle the concurrency between geometry and textures, and the system needs to address this compromise.
 
 Balancing between streaming of geometry and texture data is addressed by~\citep{batex3},~\citep{visual-quality-assessment}, and~\citep{mesh-texture-multiplexing}.
 Their approaches combine the distortion caused by having lower resolution meshes and textures into a single view independent metric.
 \citep{progressive-compression-textured-meshes} also deals with the geometry / texture compromise.
 This work designs a cost driven framework for 3D data compression, both in terms of geometry and textures.
-This framework generates an atlas for textures that enables efficient compression and multiresolution scheme.
+The authors generate an atlas for textures that enables efficient compression and a multiresolution scheme.
 All four works considered a single, manifold textured mesh model with progressive meshes, and are not applicable in our work since we deal with large and potentially non-manifold scenes.
 
-Regarding texture streaming, \citep{simon2019streaming} propose a way to stream a set of textures by encoding the textures into a video.
+Regarding texture streaming, \citep{simon2019streaming} propose a way to stream a set of textures by encoding them into a video.
 Each texture is segmented into tiles of a fixed size.
 Those tiles are then ordered to minimise dissimilarities between consecutive tiles, and encoded as a video.
-By benefiting from the video compression techniques, they are able to reach a better rate-distortion ratio than webp, which is the new standard for texture transmission, and jpeg.
+By benefiting from the video compression techniques, the authors are able to reach a better rate-distortion ratio than webp, which is the new standard for texture transmission, and jpeg.
 However, the geometry / texture compromise is not the point of that paper.
 
 This thesis proposes a scalable streaming framework for large textured 3D scenes based on DASH, like~\citep{zampoglou}, but featuring a space partitioning of scenes in order to provide viewpoint dependent streaming.
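The tile-ordering step described in this last hunk can be sketched as a greedy nearest-neighbour chain. The mean-absolute-difference dissimilarity and the greedy strategy are illustrative assumptions, not necessarily the paper's exact method.

```python
# Greedy ordering of texture tiles so that consecutive tiles are
# similar, improving inter-frame prediction when the sequence is
# encoded as a video. Tiles are flattened pixel lists for simplicity.

def dissimilarity(tile_a, tile_b):
    """Mean absolute pixel difference between two equally sized tiles."""
    return sum(abs(a - b) for a, b in zip(tile_a, tile_b)) / len(tile_a)

def order_tiles(tiles):
    """Chain tiles greedily, always picking the most similar next tile."""
    if not tiles:
        return []
    order, remaining = [0], list(range(1, len(tiles)))
    while remaining:
        last = tiles[order[-1]]
        nxt = min(remaining, key=lambda i: dissimilarity(last, tiles[i]))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

Feeding the reordered tiles to a standard video encoder then lets inter-frame prediction exploit the similarity between consecutive tiles, which is the source of the rate-distortion gain the text reports.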