Dynamic Adaptive Streaming over HTTP (DASH) is now a widely deployed standard for video streaming, and even though video streaming and 3D streaming are different problems, many DASH features can inspire solutions for 3D streaming.
In this chapter, we present the most important contribution of this thesis: adapting DASH to 3D streaming.

We start by showing how we prepare 3D data into a format that complies with DASH and that stores enough metadata for a client to perform efficient streaming: we partition the scene with a $k$-d tree, and we further segment each cell into chunks with a fixed number of faces, sorted by area so that faces of different levels of detail are not grouped together.
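As an illustration, the following Python sketch shows one possible form of this segmentation step; the face representation, the decreasing sort order, and the chunk size of 100 faces are assumptions made for the example, not the exact choices of our implementation.
\begin{verbatim}
import numpy as np

def face_area(face):
    # Area of a triangular face given as three 3D vertices.
    a, b, c = (np.asarray(v, dtype=float) for v in face)
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def segment_cell(faces, chunk_size=100):
    # Sort the faces of a cell by area (here in decreasing order, an
    # assumed convention), then cut the sorted list into fixed-size
    # chunks, so each chunk only groups faces of a similar level of
    # detail.
    ordered = sorted(faces, key=face_area, reverse=True)
    return [ordered[i:i + chunk_size]
            for i in range(0, len(ordered), chunk_size)]
\end{verbatim}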
We also export each texture at different resolutions, and we encode all the resulting metadata into a 3D version of the Media Presentation Description (MPD) that DASH uses for video.
Namely, we store in the metadata the coordinates of the cells of the $k$-d tree, the areas of the geometry chunks, and the average colors of the textures.
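The sketch below shows one way such a description could be serialized, using Python's xml.etree.ElementTree; the element and attribute names (corners, area, avgColor) are placeholders for this example, not the exact schema we define later in the chapter.
\begin{verbatim}
import xml.etree.ElementTree as ET

def build_mpd(cells, textures):
    # Element and attribute names are placeholders for illustration.
    mpd = ET.Element("MPD")
    for cell in cells:
        # One adaptation set per k-d tree cell, carrying the
        # coordinates of its corners.
        aset = ET.SubElement(mpd, "AdaptationSet",
                             corners=" ".join(map(str, cell["corners"])))
        for chunk in cell["chunks"]:
            # Each geometry chunk advertises its total face area.
            ET.SubElement(aset, "Segment", area=str(chunk["area"]))
    for tex in textures:
        # One adaptation set per texture, carrying its average color,
        # with one representation per exported resolution.
        aset = ET.SubElement(mpd, "AdaptationSet",
                             avgColor=" ".join(map(str, tex["avg_color"])))
        for res in tex["resolutions"]:
            ET.SubElement(aset, "Representation", width=str(res))
    return ET.tostring(mpd, encoding="unicode")
\end{verbatim}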
We then propose a standard client that can perform frustum culling to eliminate cells outside the viewing volume of the camera (as shown in Figure~\ref{d3:big-picture}); we define a few utility metrics that assign a score to each chunk of data, as well as a few streaming policies that rely on those utilities to determine which chunks to download.
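To make this decision loop concrete, here is a minimal Python sketch; the camera and chunk interfaces (frustum_intersects, cell_center, area, downloaded) and the particular utility formula are assumptions for illustration, one plausible instance of the metrics and policies defined in this chapter.
\begin{verbatim}
import math

def utility(chunk, camera):
    # Placeholder utility: area of the chunk (known from the metadata)
    # divided by its squared distance to the camera.
    d = math.dist(camera.position, chunk.cell_center)
    return chunk.area / (d * d + 1e-9)

def next_chunk(chunks, camera):
    # Greedy policy sketch: keep only the chunks that are not yet
    # downloaded and whose cell intersects the viewing frustum, then
    # return the candidate with the highest utility (None when done).
    visible = [c for c in chunks
               if not c.downloaded and camera.frustum_intersects(c.cell)]
    return max(visible, key=lambda c: utility(c, camera), default=None)
\end{verbatim}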
We finally evaluate all those parameters under different bandwidth setups and compare our streaming policies.

\newpage

\input{dash-3d/introduction}
\resetstyle{}