\fresh{}
\section{3D Streaming}
\subsection{Compression and structuring}
The most popular compression model for 3D meshes is the progressive mesh, introduced by~\citet{progressive-meshes}: a mesh is transmitted by first sending a low-resolution version, called the \emph{base mesh}, and then transmitting detail information that a client can use to increase the resolution.
To do so, a \emph{decimation algorithm} removes vertices and faces by iteratively merging pairs of vertices (Figure~\ref{sote:progressive-scheme}).

\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=2]
\node (Top1) at (0.5, 1) {};
\node (A) at (0, 0.8) {};
\node (B) at (1, 0.9) {};
\node (C) at (1.2, 0) {};
\node (D) at (0.9, -0.8) {};
\node (E) at (0.2, -0.9) {};
\node (F) at (-0.2, 0) {};
\node (G) at (0.5, 0.5) {};
\node (H) at (0.6, -0.5) {};
\node (Bottom1) at (0.5, -1) {};

\node (Top2) at (3.5, 1) {};
\node (A2) at (3, 0.8) {};
\node (B2) at (4, 0.9) {};
\node (C2) at (4.2, 0) {};
\node (D2) at (3.9, -0.8) {};
\node (E2) at (3.2, -0.9) {};
\node (F2) at (2.8, 0) {};
\node (G2) at (3.55, 0) {};
\node (Bottom2) at (3.5, -1) {};

\draw (A.center) -- (B.center) -- (C.center) -- (D.center) -- (E.center) -- (F.center) -- (A.center);
\draw (A.center) -- (G.center);
\draw (B.center) -- (G.center);
\draw (C.center) -- (G.center);
\draw (F.center) -- (G.center);
\draw (C.center) -- (H.center);
\draw (F.center) -- (H.center);
\draw (E.center) -- (H.center);
\draw (D.center) -- (H.center);
\draw[color=red, line width=1mm] (G.center) -- (H.center);

\draw (A2.center) -- (B2.center) -- (C2.center) -- (D2.center) -- (E2.center) -- (F2.center) -- (A2.center);
\draw (A2.center) -- (G2.center);
\draw (B2.center) -- (G2.center);
\draw (C2.center) -- (G2.center);
\draw (F2.center) -- (G2.center);
\draw (E2.center) -- (G2.center);
\draw (D2.center) -- (G2.center);
\node at (G2) [circle,fill=red,inner sep=2pt]{};

\draw[-{Latex[length=3mm]}] (Top1) to [out=30, in=150] (Top2);
\draw[-{Latex[length=3mm]}] (Bottom2) to [out=-150, in=-30] (Bottom1);

\node at (2, 1.75) {Edge collapse};
\node at (2, -1.75) {Vertex split};
\end{tikzpicture}
\caption{Vertex split and edge collapse\label{sote:progressive-scheme}}
\end{figure}

Every time two vertices are merged, vertices and faces are removed from the original mesh, and the resolution of the model decreases slightly.
When the model is light enough, it is encoded as-is, and the sequence of operations needed to recover the initial resolution is encoded as well.
Thus, a client can start by downloading the low-resolution model, display it to the user, and keep downloading and displaying details as they arrive.
This process reduces the time a user has to wait before seeing something, and therefore increases the quality of experience.

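To make the client side of this scheme concrete, the sketch below (in Python) applies a stream of vertex-split records to a base mesh; the record layout used here is a simplified assumption for illustration, as actual progressive-mesh encodings store connectivity and attribute changes much more compactly.

\begin{verbatim}
# Minimal sketch of progressive-mesh refinement on the client side.
# The record fields (new_position, rewired_corners, new_faces) are
# hypothetical: real encodings are far more compact.

class Mesh:
    def __init__(self, vertices, faces):
        self.vertices = list(vertices)   # list of (x, y, z) tuples
        self.faces = list(faces)         # list of (i, j, k) vertex indices

def apply_vertex_split(mesh, split):
    """Undo one edge collapse: add a vertex and restore its faces."""
    mesh.vertices.append(split["new_position"])
    new_index = len(mesh.vertices) - 1
    # Faces whose corner moved during the collapse are rewired back.
    for face_index, corner in split["rewired_corners"]:
        face = list(mesh.faces[face_index])
        face[corner] = new_index
        mesh.faces[face_index] = tuple(face)
    # Faces that disappeared during the collapse are added back.
    mesh.faces.extend(split["new_faces"])

def refine(base_mesh, split_stream):
    """Apply vertex splits as they arrive; the mesh stays displayable."""
    for split in split_stream:
        apply_vertex_split(base_mesh, split)
        # the client can redraw the mesh after each refinement step
    return base_mesh
\end{verbatim}
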
More recently, to answer the need for a standard format for 3D data, the Khronos Group has proposed a generic format called glTF (GL Transmission Format,~\cite{gltf}) to handle all types of 3D content representations: point clouds, meshes, animated models, etc.
glTF is based on a JSON file that encodes the structure of a scene of 3D objects.
It can contain a scene tree with cameras, meshes, buffers, materials, textures, animations, and skinning information.
Although relevant for compression, transmission, and in particular streaming, this standard does not yet consider view-dependent streaming, which is required for remote visualization of large scenes.

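To give an idea of the structure glTF describes, the Python sketch below builds a simplified glTF-style scene description; it is illustrative only, since a complete asset would also include accessors, buffer views, and the binary buffer holding the actual geometry.

\begin{verbatim}
import json

# Simplified sketch of a glTF-style scene description (not a complete,
# valid asset: accessors, bufferViews and the binary geometry buffer
# are omitted for brevity).
gltf = {
    "asset": {"version": "2.0"},
    "scene": 0,
    "scenes": [{"nodes": [0, 1]}],            # a scene is a set of root nodes
    "nodes": [
        {"name": "building", "mesh": 0},      # node referencing a mesh
        {"name": "viewpoint", "camera": 0},   # node referencing a camera
    ],
    "cameras": [{"type": "perspective",
                 "perspective": {"yfov": 1.0, "znear": 0.01}}],
    "meshes": [{"primitives": [{"attributes": {"POSITION": 0},
                                "material": 0}]}],
    "materials": [{"name": "concrete"}],
}

print(json.dumps(gltf, indent=2))
\end{verbatim}
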
3D Tiles (\cite{3d-tiles}) is a specification for visualizing massive 3D geospatial data, developed by Cesium and built on top of glTF\@.
Its main goal is to display 3D objects on top of regular maps, and the data it targets is quite different from ours: while such datasets consist of clean, regular polygons carrying all the semantics they need, we only work on a polygon soup with textures.
The use case is also different from ours: whereas their interface lets a user look at a city from above, we want our users to move inside the city.

\copied{}
\subsection{Prefetching in NVE}
The general prefetching problem in a networked virtual environment (NVE) can be described as follows: which data are the most likely to be accessed by the user in the near future, and in what order should they be downloaded?

The simplest answer to the first question assumes that the user is likely to access content close to the current position, and thus retrieves the 3D content within a given radius of the user (also known as the \textit{area of interest}, or AoI).
This approach, implemented in Second Life and several other NVEs (e.g.,~\cite{peer-texture-streaming}), only depends on the location of the avatar, not on its viewing direction.
It exploits spatial locality and works well for any continuous movement of the user, including turning.
Once the set of objects that are likely to be accessed by the user is determined, the next question is in what order these objects should be retrieved.
A simple approach is to retrieve the objects based on distance: the spatial distance from the user's virtual location and the rotational distance from the user's view direction.

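A minimal Python sketch of such a distance-based ordering is shown below; the 2D positions, the AoI test, and the weight balancing spatial against rotational distance are illustrative assumptions rather than details taken from the cited works.

\begin{verbatim}
import math

def prefetch_order(objects, avatar_pos, view_angle, aoi_radius, w=0.5):
    """Return the objects inside the AoI, ordered by a mix of spatial
    and rotational distance.  `objects` maps an id to a 2D position;
    `view_angle` is the avatar's viewing direction in radians."""
    candidates = []
    for obj_id, (x, y) in objects.items():
        dx, dy = x - avatar_pos[0], y - avatar_pos[1]
        dist = math.hypot(dx, dy)
        if dist > aoi_radius:
            continue                      # outside the area of interest
        angle_to_obj = math.atan2(dy, dx)
        # rotational distance, normalized to [0, pi]
        rot = abs((angle_to_obj - view_angle + math.pi)
                  % (2 * math.pi) - math.pi)
        score = dist / aoi_radius + w * rot / math.pi
        candidates.append((score, obj_id))
    return [obj_id for _, obj_id in sorted(candidates)]

# Example: objects around an avatar at the origin looking along +x.
objects = {"a": (1.0, 0.0), "b": (0.5, 2.0), "c": (4.0, 4.0)}
print(prefetch_order(objects, (0.0, 0.0), 0.0, aoi_radius=5.0))
\end{verbatim}
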
Other approaches consider the movement of the user and attempt to predict where the user will move in the future.
\cite{motion-prediction} and~\cite{walkthrough-ve} predict the direction of movement from the user's mouse input pattern.
The predicted mouse movement direction is then mapped to a navigation path in the NVE\@.
Objects that fall along the predicted path are then prefetched.
CyberWalk~\cite{cyberwalk} uses an exponentially weighted moving average of past movement vectors, adjusted with the residual of the prediction, to predict the next location of the user.

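The following Python sketch illustrates this kind of prediction in a simplified 2D form; the smoothing factor and the way the residual is reinjected are assumptions for illustration, not the exact formulation used by CyberWalk.

\begin{verbatim}
class EwmaPredictor:
    """Predict the next position from an exponentially weighted moving
    average of past movement vectors, corrected by the last residual."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha          # smoothing factor of the EWMA
        self.avg_move = (0.0, 0.0)  # averaged movement vector
        self.residual = (0.0, 0.0)  # last prediction error
        self.last_pos = None
        self.last_prediction = None

    def update(self, pos):
        if self.last_pos is not None:
            move = (pos[0] - self.last_pos[0], pos[1] - self.last_pos[1])
            a = self.alpha
            self.avg_move = (a * move[0] + (1 - a) * self.avg_move[0],
                             a * move[1] + (1 - a) * self.avg_move[1])
        if self.last_prediction is not None:
            self.residual = (pos[0] - self.last_prediction[0],
                             pos[1] - self.last_prediction[1])
        self.last_pos = pos

    def predict(self):
        """Next position = current + averaged move + residual correction."""
        p = (self.last_pos[0] + self.avg_move[0] + self.residual[0],
             self.last_pos[1] + self.avg_move[1] + self.residual[1])
        self.last_prediction = p
        return p

# Feed observed avatar positions, then prefetch around the prediction.
predictor = EwmaPredictor()
for pos in [(0.0, 0.0), (1.0, 0.0), (2.1, 0.2)]:
    predictor.update(pos)
print(predictor.predict())
\end{verbatim}
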
\cite{prefetching-walkthrough-latency} cluster the navigation paths of users and use them to predict future navigation paths.
Objects that fall within the predicted navigation path are prefetched.
All these approaches work well as long as the navigation path is continuous --- once the user clicks on a bookmark and jumps to a new location, the path is no longer continuous and the prediction fails.

Moving beyond ordering objects to prefetch based on distance only,~\cite{caching-prefetching-dve} propose to also predict the user's interest in each object.
Objects within the AoI are then retrieved in decreasing order of their predicted interest value to the user.

\cite{learning-user-access-patterns} investigate how to render large-scale 3D scenes on a thin client.
Efficient scene prefetching, which provides timely data with a limited cache, is one of the most critical issues for remote 3D data scheduling in networked virtual environment applications.
Existing prefetching schemes predict the future positions of each individual user based on user traces.
Instead of user viewpoint traces, the authors consider the scene content sequences accessed by various users and propose a prefetching scheme based on user access patterns.
They use relationship-graph-based clustering to partition the history of user access sequences into several clusters, and choose representative sequences from these clusters as user access patterns.
These access patterns are then prioritized by their popularity and by the user's personal preferences.
Based on them, the proposed scheme predicts the scene contents that are most likely to be visited in the future and delivers them to the client in advance.

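The sketch below gives a much simplified view of such access-pattern-based prefetching: histories of accessed object sequences are grouped, a representative pattern is kept per group, and the pattern that best matches the current session is used to prefetch the next objects. The grouping criterion and matching score here are illustrative placeholders, not the relationship-graph clustering of~\cite{learning-user-access-patterns}.

\begin{verbatim}
from collections import Counter

def representative_patterns(histories, num_patterns=2):
    """Keep the most frequent access sequences as representative patterns
    (a stand-in for the clustering used in the cited work)."""
    counted = Counter(tuple(h) for h in histories)
    return [list(seq) for seq, _ in counted.most_common(num_patterns)]

def predict_next_objects(current_session, patterns, k=3):
    """Find the pattern sharing the longest prefix with the current
    session and return the objects that follow in that pattern."""
    best, best_len = None, -1
    for pattern in patterns:
        n = 0
        while n < min(len(pattern), len(current_session)) and \
                pattern[n] == current_session[n]:
            n += 1
        if n > best_len:
            best, best_len = pattern, n
    return best[best_len:best_len + k] if best else []

histories = [["lobby", "hall", "room1"], ["lobby", "hall", "room2"],
             ["lobby", "hall", "room1"], ["garden", "lobby", "hall"]]
patterns = representative_patterns(histories)
print(predict_next_objects(["lobby", "hall"], patterns))  # ['room1']
\end{verbatim}
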
\cite{remote-rendering-streaming} investigate remote image-based rendering (IBR) as the most suitable solution for rendering complex 3D scenes on mobile devices: the server renders the 3D scene and streams the rendered images to the client.
However, sending a large number of images is inefficient due to the possible limitations of wireless connections.
They therefore propose a prefetching scheme at the server side that predicts client movements and prefetches the corresponding images.

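A minimal Python sketch of this idea could look as follows; the one-step motion prediction and the \texttt{render\_view} function are placeholders, as~\cite{remote-rendering-streaming} does not prescribe them.

\begin{verbatim}
def predict_next_poses(pose, velocity, turn_rates=(-0.2, 0.0, 0.2)):
    """Enumerate a few plausible next camera poses (x, y, heading)."""
    x, y, heading = pose
    return [(x + velocity[0], y + velocity[1], heading + t)
            for t in turn_rates]

def prefetch_images(pose, velocity, cache, render_view):
    """Server side: render and cache images for the predicted next poses
    so they can be streamed as soon as the client actually moves."""
    for next_pose in predict_next_poses(pose, velocity):
        if next_pose not in cache:
            cache[next_pose] = render_view(next_pose)  # expensive step
    return cache

# Usage with a dummy renderer standing in for the real one.
cache = {}
prefetch_images((0.0, 0.0, 0.0), (1.0, 0.0),
                cache, render_view=lambda p: f"image for {p}")
print(sorted(cache))
\end{verbatim}
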
Prefetching techniques that ease 3D data streaming and real-time rendering for remote walkthroughs are considered in~\cite{prefetching-remote-walkthroughs}.
Culling methods that do not possess frame-to-frame coherence can successfully be combined with remote scene databases if the prefetching algorithm is adapted accordingly.
The authors present a quantitative transmission policy that takes into account both the limited bandwidth of the network and the limited memory available at the client.

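The sketch below illustrates, in Python, what such a quantitative policy might look like in spirit: candidate objects, already ordered by the prefetching algorithm, are sent only as long as the per-slot bandwidth budget and the remaining client memory allow it. The budget model is an assumption for illustration, not the policy of~\cite{prefetching-remote-walkthroughs}.

\begin{verbatim}
def plan_transmission(candidates, sizes, bandwidth_budget,
                      client_memory_left):
    """Select which of the (already prioritized) candidate objects to
    send in the next time slot, under bandwidth and memory budgets."""
    to_send, used_bandwidth, used_memory = [], 0, 0
    for obj_id in candidates:
        size = sizes[obj_id]
        if used_bandwidth + size > bandwidth_budget or \
                used_memory + size > client_memory_left:
            continue   # does not fit in this slot or in client memory
        to_send.append(obj_id)
        used_bandwidth += size
        used_memory += size
    return to_send

sizes = {"facade": 400, "roof": 300, "tree": 150, "bench": 50}
print(plan_transmission(["facade", "roof", "tree", "bench"], sizes,
                        bandwidth_budget=600,
                        client_memory_left=500))  # ['facade', 'bench']
\end{verbatim}
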
Also in the context of remote visualization,~\cite{cache-remote-visualization} study caching and prefetching, and optimize the configuration of remote visualization architectures.
They aim at minimizing the fetch time in a remote visualization system and advocate a practical software infrastructure that adaptively optimizes the caching architecture of such systems under varying conditions (e.g.\ when network resources vary).