Update
src/bib.bib
@@ -858,3 +858,72 @@
  year={2011},
  organization={ACM}
}

@article{streaming-hlod,
  title={Streaming HLODs: an out-of-core viewer for network visualization of huge polygon models},
  author={Guthe, Michael and Klein, Reinhard},
  journal={Computers \& Graphics},
  volume={28},
  number={1},
  pages={43--50},
  year={2004},
  publisher={Elsevier}
}

@inproceedings{hlod,
  title={HLODs for faster display of large static and dynamic environments},
  author={Erikson, Carl and Manocha, Dinesh and Baxter III, William V},
  booktitle={Proceedings of the 2001 Symposium on Interactive 3D Graphics},
  pages={111--120},
  year={2001},
  organization={ACM}
}

@techreport{lod,
  title={Real-time, continuous level of detail rendering of height fields},
  author={Lindstrom, Peter and Koller, David and Ribarsky, William and Hodges, Larry F and Faust, Nick L and Turner, Gregory},
  year={1996},
  institution={Georgia Institute of Technology}
}

@article{game-on-demand,
  title={Game-on-demand: An online game engine based on geometry streaming},
  author={Li, Frederick WB and Lau, Rynson WH and Kilis, Danny and Li, Lewis WF},
  journal={ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)},
  volume={7},
  number={3},
  pages={19},
  year={2011},
  publisher={ACM}
}

@inproceedings{hoppe-lod,
  title={Smooth view-dependent level-of-detail control and its application to terrain rendering},
  author={Hoppe, Hugues},
  booktitle={Proceedings of Visualization '98},
  pages={35--42},
  year={1998},
  organization={IEEE}
}

@inproceedings{view-dependent-lod,
  title={Streaming transmission of point-sampled geometry based on view-dependent level-of-detail},
  author={Meng, Fang and Zha, Hongbin},
  booktitle={Fourth International Conference on 3-D Digital Imaging and Modeling (3DIM 2003)},
  pages={466--473},
  year={2003},
  organization={IEEE}
}

@inproceedings{mipmap-streaming,
  title={Remote rendering of massively textured 3D scenes through progressive texture maps},
  author={Marvie, Jean Eudes and Bouatouch, Kadi},
  booktitle={The 3rd IASTED Conference on Visualisation, Imaging and Image Processing},
  volume={2},
  pages={756--761},
  year={2003}
}
@@ -1,9 +1,11 @@
\section{Contributions}

In this thesis, we attempted to answer four main problems: \textbf{content preparation}, \textbf{the streaming policy and its relation to the user's interaction}, \textbf{evaluation}, and \textbf{implementation}.
To answer these problems, we presented three main contributions.

\paragraph{}
Our first contribution analyses the links between the streaming policy and the user's interaction.
We set up a basic system allowing navigation in a 3D scene (represented as a textured mesh), with the content streamed through the network from a remote server.
We developed a navigation aid in the form of \textbf{3D bookmarks}, and we conducted a user study to analyse its impact on navigation and streaming.
On the one hand, consistently with the state of the art, we observed that the navigation aid \textbf{helps people navigate in a scene}: they perform tasks faster and more easily.
On the other hand, we showed that benefiting from bookmarks in 3D navigation comes at the cost of a negative impact on the quality of service (QoS): since users navigate faster, they require more data over the same time span.
@@ -12,8 +14,8 @@ Simulations on the traces we collected during the user study quantify how these
This work has been published at the ACM MMSys conference in 2016~\citep{bookmarks-impact}.

\paragraph{}
After studying the interactive aspect of 3D navigation, we proposed a contribution focusing on the content preparation and the streaming policies of such a system.
The objective of this contribution was to introduce a system able to perform \textbf{scalable, view-dependent 3D streaming}.
This new framework brings many improvements over the basic system described in our first contribution: support for textures, externalisation of computations from the server to the clients, support for multi-resolution textures, and rendering performance considerations.
We drew inspiration from DASH, a standard for video streaming used for its scalability and its adaptability.
We exploit the fact that DASH is designed to be content-agnostic in order to fit 3D content into its structure.
@@ -8,9 +8,32 @@ We now describe our setup and the data we use in our experiments. We present an
We use a city model of the Marina Bay area in Singapore in our experiments.
The model came in 3DS Max format and was converted into Wavefront OBJ format before the processing described in Section~\ref{d3:dash-3d}.
The converted model has 387,551 vertices and 552,118 faces.
Table~\ref{d3:size} gives some general information about the model, and Figure~\ref{d3:heterogeneity} illustrates its heterogeneity (wireframe rendering is used to show the varying complexity of the geometry).
We partition the geometry with a $k$-d tree until each leaf has fewer than 10,000 faces, which gives us 64 adaptation sets, plus one containing the large faces.
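As a rough illustration of this partitioning step, the sketch below recursively splits face centroids until each leaf holds fewer than 10,000 faces (the threshold from the text). It is a simplified illustration, not the implementation used in the thesis; the splitting heuristic (median along alternating axes) is an assumption.

```python
# Sketch of k-d partitioning of faces into adaptation sets.
# A face is represented by its centroid (x, y, z); a cell is split at the
# median along alternating axes until it holds fewer than MAX_FACES faces.

MAX_FACES = 10_000  # threshold from the text

def kd_partition(faces, depth=0):
    """Recursively split a list of centroids into leaf cells."""
    if len(faces) < MAX_FACES:
        return [faces]  # one leaf = one adaptation set
    axis = depth % 3  # alternate x, y, z
    faces = sorted(faces, key=lambda c: c[axis])
    mid = len(faces) // 2
    return (kd_partition(faces[:mid], depth + 1)
            + kd_partition(faces[mid:], depth + 1))
```

In the real pipeline, large faces that straddle cell boundaries are handled separately, which is why one extra adaptation set contains them.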
\begin{figure}[th]
    \centering
    \begin{subfigure}[b]{0.4\textwidth}
        \includegraphics[width=\textwidth]{assets/dash-3d/heterogeneity/low-res-wire.png}
        \caption{Low-resolution geometry}
    \end{subfigure}~%
    \begin{subfigure}[b]{0.4\textwidth}
        \includegraphics[width=\textwidth]{assets/dash-3d/heterogeneity/high-res-wire.png}
        \caption{High-resolution geometry}
    \end{subfigure}
    \\
    \begin{subfigure}[b]{0.4\textwidth}
        \includegraphics[width=\textwidth]{assets/dash-3d/heterogeneity/no-textures.png}
        \caption{Simplistic textures replicated}
    \end{subfigure}~%
    \begin{subfigure}[b]{0.4\textwidth}
        \includegraphics[width=\textwidth]{assets/dash-3d/heterogeneity/high-res-textures.png}
        \caption{Detailed textures}
    \end{subfigure}
    \caption{Illustration of the heterogeneity of the model\label{d3:heterogeneity}}
\end{figure}
\begin{table}[th]
    \centering
    \begin{tabular}{ll}
@@ -134,11 +134,25 @@ It exploits spatial coherence and works well for any continuous movement of the
Once the set of objects that are likely to be accessed by the user is determined, the next question is in what order these objects should be retrieved.
A simple approach is to retrieve the objects based on distance: the spatial distance from the user's virtual location and the rotational distance from the user's view.
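Such a distance-based ordering can be sketched as follows. This is a hypothetical illustration: the way the spatial and rotational terms are combined (a weighted sum with weight \texttt{alpha}) is an assumption, not taken from any cited system.

```python
import math

def retrieval_order(objects, cam_pos, cam_dir, alpha=1.0):
    """Order objects by a mix of spatial and rotational distance.

    objects: list of (id, (x, y, z)) object centres.
    cam_pos: camera position; cam_dir: unit view direction.
    alpha: hypothetical weight of the rotational term.
    """
    def cost(obj):
        _, pos = obj
        d = [p - c for p, c in zip(pos, cam_pos)]
        dist = math.sqrt(sum(x * x for x in d)) or 1e-9
        # angle between the view direction and the direction to the object
        cosang = sum(x * v for x, v in zip(d, cam_dir)) / dist
        angle = math.acos(max(-1.0, min(1.0, cosang)))
        return dist + alpha * angle

    return [obj[0] for obj in sorted(objects, key=cost)]
```

Objects that are both close and near the centre of the view get the lowest cost and are fetched first.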
More recently, Google integrated the Google Earth 3D module into Google Maps (Figure~\ref{sota:google-maps}).
Users can now go to Google Maps and click the 3D button, which shifts the camera from the aerial view.
Even though there are no associated publications to support this assertion, it seems clear that the streaming is view-dependent: low-resolution data around the centre of the viewpoint is downloaded first, and higher-resolution data is downloaded for closer objects than for distant ones.

\begin{figure}[h]
    \centering
    \includegraphics[width=0.8\textwidth]{assets/state-of-the-art/3d-streaming/googlemaps.png}
    \caption{Screenshot of the 3D interface of Google Maps\label{sota:google-maps}}
\end{figure}
Other approaches use levels of detail.
Levels of detail were initially used for efficient 3D rendering~\citep{lod}.
When the change from one level of detail to another is abrupt, it can create visual discomfort for the user.
This is called the \emph{popping effect}, and levels of detail have the advantage of enabling techniques, such as geomorphing~\citep{hoppe-lod}, to transition smoothly from one level of detail to another.
Levels of detail have since been used for 3D streaming.
For example, \citep{streaming-hlod} propose an out-of-core viewer for remote model visualisation by adapting hierarchical levels of detail~\citep{hlod} to the context of 3D streaming.
Levels of detail can also be used to perform viewpoint-dependent streaming, as in \citep{view-dependent-lod}.
Another example is 3D Tiles~\citep{3d-tiles}, a specification for visualizing massive 3D geospatial data, developed by Cesium and built on top of glTF\@.
Its main goal is to display 3D objects on top of regular maps, and its visualisation consists of a top-down view, whereas we seek to let users freely navigate in our scenes, whether flying over the scene or moving along the roads.
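To make the popping effect and its remedy concrete, here is a minimal sketch of distance-driven geomorphing between two levels of detail. It assumes matching vertex counts between the two levels; \citep{hoppe-lod} describes the actual technique for progressive meshes.

```python
def lod_blend(distance, near, far):
    """Blend factor: 0 at `near` (full detail) up to 1 at `far` (coarse)."""
    return min(1.0, max(0.0, (distance - near) / (far - near)))

def geomorph(fine_verts, coarse_verts, t):
    """Linearly interpolate matching vertices between two LODs.

    A direct switch (t jumping from 0 to 1) causes the popping effect;
    ramping t continuously with distance makes the transition smooth.
    """
    return [tuple(f + t * (c - f) for f, c in zip(fv, cv))
            for fv, cv in zip(fine_verts, coarse_verts)]
```

At `t = 0` the fine geometry is shown, at `t = 1` the coarse geometry; intermediate values morph one into the other as the camera moves.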
\begin{figure}[ht]
    \centering
@@ -146,6 +160,21 @@ Their main goal is to display 3D objects on top of regular maps.
    \caption{Screenshot of 3D Tiles interface~\citep{3d-tiles}}
\end{figure}

\subsection{Texture streaming}

In order to increase texture rendering speed, a common technique is \emph{mipmapping}.
It consists in generating progressively lower resolutions of an initial texture.
Lower resolutions are used for polygons that are far away from the camera, and higher resolutions for polygons closer to the camera.
Not only does this reduce the time needed to render the polygons, it can also reduce aliasing artefacts.
Using these lower resolutions can be especially interesting for streaming.
\citep{mipmap-streaming} propose the PTM format, which encodes the mipmap levels of a texture so that they can be downloaded progressively: a lower resolution can be shown to the user while the higher resolutions are being downloaded.
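Mipmap generation itself can be sketched as repeated $2\times 2$ averaging on a square greyscale texture. This is a toy version; real renderers use filtered downsampling and GPU support.

```python
def build_mipmaps(texture):
    """Return the mipmap chain of a square greyscale texture.

    The texture is a list of rows of pixel values; each level halves the
    resolution by averaging 2x2 blocks, down to a single pixel.
    """
    levels = [texture]
    while len(texture) > 1:
        n = len(texture) // 2
        texture = [[(texture[2 * i][2 * j] + texture[2 * i][2 * j + 1]
                     + texture[2 * i + 1][2 * j] + texture[2 * i + 1][2 * j + 1]) / 4
                    for j in range(n)] for i in range(n)]
        levels.append(texture)
    return levels
```

A renderer picks the level whose texel density best matches the on-screen size of the polygon; a streaming client can transmit the chain from the coarsest level up.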
Since 3D data can contain many textures, \citep{simon2019streaming} propose a way to stream a set of textures by encoding them into a video.
Each texture is segmented into tiles of a fixed size.
Those tiles are then ordered to minimise dissimilarities between consecutive tiles, and encoded as a video.
By benefiting from video compression techniques, the authors reach a better rate-distortion ratio than WebP, the new standard for texture transmission, and JPEG.
However, the geometry / texture compromise is not the focus of that paper.
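The tile-ordering step can be approximated by a greedy nearest-neighbour pass such as the one below; the actual dissimilarity measure and ordering algorithm of \citep{simon2019streaming} may differ.

```python
def order_tiles(tiles, dissimilarity):
    """Greedily order tiles so consecutive tiles are similar.

    Similar consecutive tiles help the video codec's inter-frame
    prediction, improving the compression ratio.
    """
    remaining = list(range(len(tiles)))
    order = [remaining.pop(0)]  # arbitrary starting tile
    while remaining:
        last = tiles[order[-1]]
        nxt = min(remaining, key=lambda i: dissimilarity(last, tiles[i]))
        remaining.remove(nxt)
        order.append(nxt)
    return order
```

With tiles reduced to scalar features and absolute difference as the dissimilarity, the pass chains each tile to its most similar unvisited neighbour.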
\subsection{Geometry and textures}

As discussed in Chapter~\ref{f:3d}, most 3D scenes consist of two main types of data: geometry and textures.
@@ -155,17 +184,25 @@ Balancing between streaming of geometry and texture data is addressed by~\citep{
Their approaches combine the distortion caused by having lower-resolution meshes and textures into a single view-independent metric.
\citep{progressive-compression-textured-meshes} also deals with the geometry / texture compromise.
This work designs a cost-driven framework for 3D data compression, both in terms of geometry and textures.
The authors generate an atlas for textures that enables efficient compression and a multi-resolution scheme.
All four works consider a single mesh, and have constraints on the types of meshes that they are able to compress.
Since the 3D scenes we are interested in consist of a soup of textured polygons, those constraints are not satisfied and we cannot use these techniques.
% All four works considered a single, manifold textured mesh model with progressive meshes, and are not applicable in our work since we deal with large and potentially non-manifold scenes.
This thesis proposes a scalable streaming framework for large textured 3D scenes based on DASH, like~\citep{zampoglou}, but featuring a space partitioning of scenes in order to provide viewpoint-dependent streaming.
\subsection{Streaming in game engines}

In traditional video games, including online games, there is no requirement for 3D data streaming.
They either come on physical media (CD, DVD, Blu-ray) or require downloading the game itself, which includes the 3D data, before letting the user play.
However, transferring data from the disk to memory is already a form of streaming.
This is why optimised game engines use techniques that are reused for streaming, such as levels of detail, to reduce the detail of objects far away from the point of view and save resources to enhance the level of detail of closer objects.

Other online games, such as \href{https://secondlife.com}{Second Life}, rely on data generated by users, and thus must send data from some users to others.
In such scenarios, 3D streaming is appropriate, which is why the idea of streaming 3D content for video games has been investigated.
For example, \citep{game-on-demand} propose an online game engine based on geometry streaming that addresses the challenge of streaming 3D content while synchronising the different players.
% \subsection{Prefetching in NVE}
% The general prefetching problem can be described as follows: what are the data most likely to be accessed by the user in the near future, and in what order do we download the data?
%