monday morning commit
parent 461bcf14eb
commit 9f73fde350

@@ -0,0 +1,22 @@
\section{Conclusion\label{d3:conclusion}}

\copied{}
Our work in this chapter started with the question: can DASH be used for NVE\@?
The answer is \textit{yes}.
In answering this question, we contributed by showing how to organize a polygon soup and its textures into a DASH-compliant format that (i) includes a minimal amount of metadata that is useful to the client, and (ii) organizes the data so that the client can retrieve the most useful content first.
We further show that this metadata, precomputed offline, is sufficient to design and build an adaptive DASH client: a client that can selectively download segments within its view, make intelligent decisions about what to download, and balance geometry against texture while adapting to the network bandwidth.
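As an illustration of this decision logic, the sketch below shows, in TypeScript, a greedy scheduler that repeatedly downloads the not-yet-fetched segment with the best utility-to-size ratio among those intersecting the current view. It is a minimal sketch only: the \texttt{Segment} fields and the \texttt{inFrustum} and \texttt{fetchSegment} callbacks are illustrative assumptions, not the exact interfaces of our client.
\begin{verbatim}
// Minimal sketch of a utility-driven segment scheduler (illustrative only).
interface Segment {
  id: string;
  kind: 'geometry' | 'texture';
  sizeBytes: number;   // advertised in the MPD
  utility: number;     // precomputed offline, stored in the MPD
  downloaded: boolean;
}

function nextSegment(segments: Segment[],
                     inFrustum: (s: Segment) => boolean): Segment | undefined {
  // Keep only visible, not-yet-downloaded segments.
  const useful = segments.filter((s) => !s.downloaded && inFrustum(s));
  // Greedy choice: best utility per byte, so that geometry and texture
  // segments compete on a common scale.
  useful.sort((a, b) => b.utility / b.sizeBytes - a.utility / a.sizeBytes);
  return useful[0];
}

async function streamingLoop(segments: Segment[],
                             inFrustum: (s: Segment) => boolean,
                             fetchSegment: (s: Segment) => Promise<ArrayBuffer>): Promise<void> {
  for (;;) {
    const next = nextSegment(segments, inFrustum);
    if (next === undefined) break;  // nothing useful left to download
    await fetchSegment(next);       // one HTTP request per segment, as in DASH
    next.downloaded = true;
  }
}
\end{verbatim}
Adapting to the network bandwidth then amounts to letting the measured throughput of past requests influence which representation of a segment is requested, much as a DASH video client does.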
\fresh{}
Exploiting DASH's concepts to design 3D streaming systems allows us to tackle some of the issues that were raised in the previous chapter.
\begin{itemize}
\item \textbf{It has built-in support for materials and textures}: we use a DASH adaptation set for the materials, and the average color of each texture is given in the MPD (a sketch of this metadata follows this list), so a client is not forced to render everything in white while it does not yet have the textures for the materials.
\item \textbf{It doesn't require any computation on the server side}: the only computation required is the offline preprocessing of the model and the creation of metadata that allows a client to make smart decisions. Once this preprocessing is done, the artifacts can be deployed on a static server such as Apache or nginx, and all the computational load is shifted to the client, making this solution scalable.
\item \textbf{It has support for multi-resolution}: in our implementation, we use multi-resolution textures, and even though multi-resolution geometry is not implemented yet, the challenge here lies more on the compression side than on the streaming side. Once a portion of geometry is encoded into different levels of detail, we only have to create representations and segments for those levels and define their corresponding utility.
\item \todo{we didn't talk about performance, even though we have a few things to say about this in the client and right here}
\end{itemize}
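To make the shape of this precomputed metadata more concrete, the following TypeScript sketch shows one possible structure for the information a client could parse from such an MPD. The field names (\texttt{avgColor}, \texttt{boundingBox}, \texttt{utility}, \ldots) are illustrative assumptions and do not reproduce the actual attribute names of our MPD.
\begin{verbatim}
// Illustrative shape of the metadata a DASH-3D client could parse from the
// MPD; names are assumptions, not the actual schema.
interface TextureRepresentation {
  resolution: number;                    // e.g. 128, 256 or 512 pixels
  url: string;                           // URL of the texture segment
  sizeBytes: number;
}

interface MaterialEntry {
  name: string;
  avgColor: [number, number, number];    // average colour given in the MPD,
                                         // usable before the texture arrives
  representations: TextureRepresentation[];  // multi-resolution textures
}

interface GeometrySegmentEntry {
  url: string;
  boundingBox: { min: [number, number, number]; max: [number, number, number] };
  utility: number;                       // precomputed offline
  sizeBytes: number;
}

interface SceneMetadata {
  geometry: GeometrySegmentEntry[];      // one adaptation set for geometry
  materials: MaterialEntry[];            // one adaptation set for materials
}
\end{verbatim}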
However, the work described in this chapter does not take any quality of experience aspects into account.
We designed a 3D streaming system, but did not consider interaction at all, even though we acknowledged that it is a critical aspect of 3D streaming in Chapter~\ref{bi}.

% We believe our proposed DASH for NVE is flexible enough for the community to exploit the simplicity and ease of deployment of DASH for NVE and to start investigating different streaming strategies to improve the quality of experience of NVE users.
@@ -1,2 +1,12 @@
\section{Introduction}

In Section~\ref{i:video-vs-3d}, we presented the similarities and differences between video and 3D.
We highlighted the fact that knowledge about video streaming is helpful when designing a 3D streaming system.
We also presented the main concepts of DASH (Dynamic Adaptive Streaming over HTTP) in Section~\ref{sote:dash}.
DASH is made to be content agnostic, and even though it is almost exclusively applied to video streaming nowadays, we believe it is still suitable for 3D streaming.
Even though periods are not of much use for a scene that does not evolve over time, adaptation sets allow us to separate our content into geometry and textures, and they answer the questions that were raised in the conclusion of the previous chapter.
In this chapter, we show our work on adapting DASH to 3D streaming.
Section~\ref{d3:dash-3d} describes our content preparation and all the preprocessing applied to our model to allow efficient streaming.
Section~\ref{d3:dash-client} gives possible implementations of clients that exploit this content structure.
Section~\ref{d3:evaluation} evaluates the impact of the different parameters that appear both in the content preparation and in the clients.
Finally, Section~\ref{d3:conclusion} sums up our work and explains how it tackles the challenges raised in the conclusion of the previous chapter.
@@ -1,4 +1,4 @@
\chapter{DASH-3D\label{d3}}

\minitoc{}
\newpage

@@ -26,3 +26,5 @@
\input{dash-3d/evaluation}
\resetstyle{}

\input{dash-3d/conclusion}
\resetstyle{}
@@ -1,4 +1,4 @@
\chapter{Introduction\label{i}}

\input{introduction/video-vs-3d}
\resetstyle{}
@@ -1,6 +1,6 @@
\fresh{}

\section{Similarities and differences between video and 3D\label{i:video-vs-3d}}

Despite what one may think, the video streaming scenario and the 3D streaming one share many similarities: at a higher level of abstraction, they are both systems that allow a user to access remote content without having to wait until everything is loaded.
Analyzing the similarities and the differences between the video and the 3D scenarios, as well as having knowledge of the video streaming literature, is\todo{is key or are key?} key to developing an efficient 3D streaming system.
@@ -1,11 +1,11 @@
\input{introduction/main}
\resetstyle{}

\mainmatter{}

\input{state-of-the-art/main}
\resetstyle{}

\input{preliminary-work/main}
\resetstyle{}
@@ -6,7 +6,7 @@ We now describe an experiment that we conducted on 51 participants, with two goa
First, we want to measure the impact of 3D bookmarks on navigation within an NVE\@.
Second, we want to collect traces from the users so that we can replay them for reproducible experiments when comparing streaming strategies in Section 4.

\subsection{Our NVE\label{bi:our-nve}}
To ease the deployment of our experiments to users in distributed locations on a crowdsourcing platform, we implement a simple Web-based NVE client using THREE.js\footnote{http://threejs.org}.
The NVE server is implemented with node.js\footnote{http://nodejs.org}.
The NVE server streams a 3D scene to the client; the client renders the scene as the 3D content is received.
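As an illustration of this progressive rendering, the sketch below (TypeScript with THREE.js, simplified and not taken from our actual client) adds each received batch of faces to the scene as soon as it arrives, instead of waiting for the whole model.
\begin{verbatim}
import * as THREE from 'three';

// Minimal sketch: turn a received chunk of vertex positions into a mesh and
// add it to the scene immediately (simplified, not our actual client code).
function addChunk(scene: THREE.Scene, positions: Float32Array): void {
  const geometry = new THREE.BufferGeometry();
  // Three consecutive floats form one vertex; three vertices form one face.
  geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
  geometry.computeVertexNormals();
  const material = new THREE.MeshLambertMaterial({ color: 0xcccccc });
  scene.add(new THREE.Mesh(geometry, material));
}
\end{verbatim}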
@@ -16,8 +16,8 @@ However, the system described in this chapter has some drawbacks:
\begin{itemize}
\item \textbf{It doesn't support materials and textures}: these elements are downloaded at the beginning of the interaction, and since they can have a massive size, this solution is not satisfactory for a system streaming an NVE\@.
\item \textbf{It still requires a heavy load on the server side}: even though the server does not perform online rendering of the scene, it still has to perform frustum and backface culling to find the faces to send to the client (a minimal sketch of such a test follows this list), and it also has to keep track of what each client has already downloaded and what remains to be downloaded.
\item \textbf{No multi-resolution techniques are used}: in modern 3D streaming, multi-resolution is a must-have. It prevents the user from having to wait until all the data has arrived while still providing a global, lower-resolution view of the content he is trying to access.
\item \textbf{The performance of the rendering has not been taken into account}: of course, a system for navigating in 3D scenes must have a sufficient framerate to guarantee a good Quality of Experience for users, and this chapter does not tackle the difficulty of having many tasks to perform at the same time (downloading data, uploading the OpenGL buffers, managing the user interaction, rendering the scene, etc.).
\end{itemize}
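To give an idea of the per-client work this represents, the sketch below shows a typical backface test that such a server has to repeat for every face and every connected client (plain TypeScript; the vector type and conventions are illustrative assumptions, not our server code).
\begin{verbatim}
// Sketch of the per-face work the server repeats for each client: a face is
// discarded when it points away from that client's camera (backface culling).
type Vec3 = [number, number, number];

function sub(a: Vec3, b: Vec3): Vec3 {
  return [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
}

function dot(a: Vec3, b: Vec3): number {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// faceCenter and faceNormal come from the model, cameraPos from the client.
function isBackFacing(faceCenter: Vec3, faceNormal: Vec3, cameraPos: Vec3): boolean {
  const toCamera = sub(cameraPos, faceCenter);
  return dot(faceNormal, toCamera) <= 0;  // normal points away from the camera
}
\end{verbatim}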
After learning these lessons, we show, in the next chapter, what can be done to alleviate these issues.
@@ -1,4 +1,4 @@
\chapter{Preliminary work\label{bi}}

\minitoc{}
\newpage
@@ -1,8 +1,8 @@
\fresh{}

\section{Video\label{sote:vide}}

\subsection{DASH\@: the standard for video streaming\label{sote:dash}}

\copied{}
Dynamic Adaptive Streaming over HTTP (DASH), or MPEG-DASH~\cite{dash-std,dash-std-2}, is now a widely deployed
@@ -1,3 +1,32 @@
\fresh{}

\section{Desktop and mobile interactions}

\subsection{Desktop interaction}

Regarding desktop interaction, we keep the controls we described in Section~\ref{bi:our-nve}, namely (a minimal sketch of these controls is given after the list):
\begin{itemize}
\item the W, A, S and D keys to translate the camera;
\item mouse motions to rotate the camera.
\end{itemize}
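A minimal sketch of these desktop controls is given below (TypeScript with THREE.js); the speed and sensitivity constants, as well as the event wiring, are illustrative assumptions rather than the values used in our client.
\begin{verbatim}
import * as THREE from 'three';

// Sketch of the desktop controls: W, A, S, D translate the camera,
// mouse motion rotates it (illustrative values).
const SPEED = 0.1;          // translation per key press, in scene units
const SENSITIVITY = 0.002;  // radians per pixel of mouse motion

function bindDesktopControls(camera: THREE.PerspectiveCamera): void {
  window.addEventListener('keydown', (event: KeyboardEvent) => {
    switch (event.code) {
      case 'KeyW': camera.translateZ(-SPEED); break;  // forward
      case 'KeyS': camera.translateZ(SPEED);  break;  // backward
      case 'KeyA': camera.translateX(-SPEED); break;  // left
      case 'KeyD': camera.translateX(SPEED);  break;  // right
    }
  });

  window.addEventListener('mousemove', (event: MouseEvent) => {
    // Horizontal motion turns the camera, vertical motion tilts it.
    camera.rotation.y -= event.movementX * SENSITIVITY;
    camera.rotation.x -= event.movementY * SENSITIVITY;
  });
}
\end{verbatim}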
\subsection{Mobile interaction}

\copied{}
Mobile interactions are more complex because the user has neither a keyboard nor a mouse to interact with.
However, most mobile devices have other sensors that can help with interaction.
The most useful sensor for 3D interaction on mobile devices is definitely the gyroscope.
We use the gyroscope to let the user turn the virtual camera by turning the device.
We also add the possibility to turn the camera by dragging the scene.
This way, the user is not forced to perform a real-world half-turn to be able to look behind, or to keep the device pointing at the sky (which can quickly become tiring) to look up.
These interactions, however, do not allow the user to move the camera: he can rotate it but not translate it.
For this reason, we display a small joystick in the bottom left corner of the screen that mimics first-person video game interactions and allows the user to translate the camera (a sketch of these mobile controls follows the list):
\begin{itemize}
\item moving the joystick up makes the camera move forward;
\item moving the joystick down makes the camera move backwards;
\item moving the joystick sideways makes the camera move sideways.
\end{itemize}
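A minimal sketch of these mobile controls is given below (TypeScript with THREE.js). The mapping from device orientation angles to camera angles is deliberately simplified, and the \texttt{Joystick} type is an illustrative assumption, not our actual implementation.
\begin{verbatim}
import * as THREE from 'three';

// Sketch of the mobile controls: the gyroscope rotates the camera and a
// small virtual joystick translates it (simplified, illustrative only).
interface Joystick {
  x: number;  // -1 (left) .. 1 (right)
  y: number;  // -1 (down) .. 1 (up)
}

function bindGyroscope(camera: THREE.PerspectiveCamera): void {
  window.addEventListener('deviceorientation', (event: DeviceOrientationEvent) => {
    if (event.alpha === null || event.beta === null) return;
    // Map the device orientation angles (degrees) to camera angles (radians).
    camera.rotation.y = THREE.MathUtils.degToRad(event.alpha);
    camera.rotation.x = THREE.MathUtils.degToRad(event.beta - 90);
  });
}

const SPEED = 0.1;  // translation per frame, in scene units

// Called once per rendered frame with the current joystick state.
function applyJoystick(camera: THREE.PerspectiveCamera, joystick: Joystick): void {
  camera.translateZ(-joystick.y * SPEED);  // pushing up moves forward
  camera.translateX(joystick.x * SPEED);   // sideways motion strafes
}
\end{verbatim}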
\copied{}
\section{Adding bookmarks into DASH NVE framework\label{sb:bookmarks}}
@@ -0,0 +1,13 @@
\fresh{}

\section{Introduction}

In Chapter~\ref{bi}, we described how a user interface can be modified to ease user navigation in a 3D scene, and how the system can exploit this.
In Chapter~\ref{d3}, we presented a streaming system that does not take the interface or the user interaction into account at all.
Hence, it seems natural to us to try to bring user interaction back into DASH-3D.
In order to do so, we have chosen two angles of attack:

\begin{itemize}
\item we design an interface for navigating a 3D scene on both desktop and mobile devices;
\item we improve and adapt the bookmarks described in Chapter~\ref{bi} to the context of DASH-3D and to mobile interaction.
\end{itemize}
@@ -1,8 +1,11 @@
\chapter{Mobile interaction and system bookmarks}

\minitoc{}
\newpage

\input{system-bookmarks/introduction}
\resetstyle{}

\input{system-bookmarks/bookmark}
\resetstyle{}