Starting to work on implementation

This commit is contained in:
Thomas Forgione 2019-09-19 17:30:46 +02:00
parent f6a630f2b6
commit d099c78966
No known key found for this signature in database
GPG Key ID: 203DAEA747F48F41
7 changed files with 215 additions and 127 deletions


@ -0,0 +1,87 @@
\fresh{}
\section{Introduction}
In the previous chapter, we discussed the theoretical aspects of 3D streaming based on DASH\@.
We showed different ways of structuring and downloading content, and we evaluated the corresponding parameters.
In this chapter, we detail every aspect of the implementation of the DASH-3D client, from the way segments are downloaded to how they are rendered.
All DASH clients are built from the same basic bricks, as shown in Figure~\ref{d3i:dash-scheme}:
\begin{itemize}
\item the \emph{access client}, which is the part that deals with making requests and receiving responses;
\item the \emph{segment parser}, which decodes the data downloaded by the access client;
\item the \emph{control engine}, which analyses the bandwidth to dynamically adapt to it;
\item the \emph{media engine}, which renders the multimedia content to the screen and the user interface.
\end{itemize}
In order to be able to conduct user studies easily, we want our client to be as portable as possible, so we decided to implement it in JavaScript, using WebGL for rendering.
That way, we can develop a desktop interface first and later adapt it to mobile devices with little effort.
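As an illustration of how these components interact, the core decision of the control engine can be sketched as follows. This is a simplified, hypothetical version of the adaptation logic (the function and field names are illustrative, not those of our actual client):

```javascript
// Hypothetical sketch of the control engine's adaptation step:
// given an estimated bandwidth, pick the representation with the
// highest bitrate that still fits within it, falling back to the
// lowest-bitrate representation when none fits.
function chooseRepresentation(representations, bandwidthEstimate) {
  const sorted = [...representations].sort((a, b) => a.bitrate - b.bitrate);
  let chosen = sorted[0]; // fallback: lowest bitrate
  for (const repr of sorted) {
    if (repr.bitrate <= bandwidthEstimate) {
      chosen = repr;
    }
  }
  return chosen;
}
```

For example, with representations at 1, 3 and 9 Mbit/s and an estimated bandwidth of 5 Mbit/s, this sketch selects the 3 Mbit/s representation.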
\tikzset{
double arrow/.style args={#1 colored by #2 and #3}{
-stealth,line width=#1,#2, % first arrow
postaction={draw,-stealth,#3,line width=(#1)/3,
shorten <=(#1)/3,shorten >=2*(#1)/3}, % second arrow
}
}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
% Server
\draw[rounded corners=5pt,fill=Pink] (-10, 0) rectangle (-3, 7.5);
\node at (-9, 7) {Server};
% Segments
\begin{scope}[shift={(0.5,0.5)}]
\foreach \x in {0,...,3}
{
\draw [fill=Bisque](\x/2-7.5, 1.5-\x/2) rectangle (\x/2-5.5, 6-\x/2);
\node at (\x/2-6.5, 5.5-\x/2) {\fcolorbox{black}{LightBlue}{Segment}};
\node at (\x/2-6.5, 4.75-\x/2) {\fcolorbox{black}{LightBlue}{Segment}};
\draw [fill=LightBlue] (\x/2-6.5, 3.825-\x/2) circle (2pt) {};
\draw [fill=LightBlue] (\x/2-6.5, 3.325 -\x/2) circle (2pt) {};
\draw [fill=LightBlue] (\x/2-6.5, 2.825 -\x/2) circle (2pt) {};
\node at (\x/2-6.5, 2-\x/2) {\fcolorbox{black}{LightBlue}{Segment}};
}
\end{scope}
% MPD
\draw[fill=LightBlue] (-9.5, 6.5) rectangle (-7.5, 0.5);
\node at(-8.5, 3.5) {MPD};
% Client
\draw[rounded corners=5pt, fill=LemonChiffon] (-2, 0) rectangle (3, 7.5);
\node at (-0.5, 7) {DASH client};
% Access client
\draw[fill=PaleGreen] (-1.5, 0.5) rectangle (2.5, 1.5);
\node at (0.5, 1) {Access Client};
% Media engine
\draw[fill=PaleGreen] (-1.5, 5.5) rectangle (2.5, 6.5);
\node at (0.5, 6) {Media Engine};
% Control engine
\draw[fill=PaleGreen] (-1.5, 2) rectangle (0.25, 5);
\node[align=center] at (-0.625, 3.5) {Control \\ Engine};
% Segment parser
\draw[fill=PaleGreen] (0.75, 2) rectangle (2.5, 5);
\node[align=center] at (1.625, 3.5) {Segment \\ Parser};
% Access client to server
\draw[double arrow=5pt colored by black and white] (-3.25, 1.0) -- (-1.0, 1.0);
% Access client to control engine
\draw[double arrow=5pt colored by black and white] (-0.625, 1.25) -- (-0.625, 2.5);
% Access client to segment parser
\draw[double arrow=5pt colored by black and white] (1.625, 1.25) -- (1.625, 2.5);
% Segment parser to media engine
\draw[double arrow=5pt colored by black and white] (1.625, 4.5) -- (1.625, 5.75);
\end{tikzpicture}
\caption{Scheme of a server and a DASH client\label{d3i:dash-scheme}}
\end{figure}


@ -0,0 +1,10 @@
\chapter{DASH-3D implementation}
\input{dash-3d-implementation/introduction}
\resetstyle{}
\input{dash-3d-implementation/worker}
\resetstyle{}
\input{dash-3d-implementation/media-engine}
\resetstyle{}


@ -0,0 +1,68 @@
\fresh{}
\section{Media engine}
In order to conduct a user study of 3D streaming systems on mobile devices, we must ensure that our system runs at a reasonable framerate.
\subsection{Restructuring of the 3D content on the client}
To address the performance bottlenecks described in Section~\ref{i:rendering}, we need to keep the number of objects as low as possible and issue as few \texttt{glDrawArrays} calls as possible, while still keeping objects separated so that frustum culling remains efficient.
In our NVE, there are more than a thousand materials, and they are reused all over the scene.
Creating local objects to benefit from frustum culling is therefore not an option in our case, so we merge all faces that share a material and draw them in a single \texttt{glDrawArrays} call.
To do so, the content, which was previously structured to optimize streaming, is restructured live on the client to optimize rendering, as shown in Figure~\ref{d3:render-structure}.
That way, when geometry is downloaded for a new material, we know how much space to allocate for the buffers, and when geometry is downloaded for an already allocated material, we simply edit the buffer and transmit it to the GPU\@.
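This regrouping step can be sketched as follows. The sketch is hypothetical and simplified (our actual client works on typed arrays that are then uploaded to the GPU); it only illustrates how faces from several segments end up in one buffer per material:

```javascript
// Hypothetical sketch: merge the geometry of freshly parsed
// segments into one flat buffer per material, so that each
// material can be drawn with a single drawArrays call.
// Each segment is modeled as a map from material name to a
// flat array of vertex attributes.
function mergeByMaterial(segments) {
  const buffers = new Map(); // material name -> flat vertex array
  for (const segment of segments) {
    for (const [material, vertices] of Object.entries(segment)) {
      if (!buffers.has(material)) {
        buffers.set(material, []); // first geometry for this material
      }
      buffers.get(material).push(...vertices); // append to existing buffer
    }
  }
  return buffers;
}
```

In a real client, each merged buffer would then be uploaded once per modification rather than once per segment, which is what makes this restructuring pay off.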
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node[align=center] at(1.5, -1) {DASH-3D\\Structure};
\node at(-1.5, -3) {\texttt{seg0.obj}};
\draw[fill=Pink] (0, -2) rectangle (3, -3);
\node at(1.5, -2.5) {Material 1};
\draw[fill=PaleGreen] (0, -3) rectangle (3, -4);
\node at(1.5, -3.5) {Material 2};
\node at(-1.5, -6) {\texttt{seg1.obj}};
\draw[fill=Pink] (0, -5) rectangle (3, -7);
\node at(1.5, -6) {Material 1};
\node at(-1.5, -9) {\texttt{seg2.obj}};
\draw[fill=PaleGreen] (0, -8) rectangle (3, -9);
\node at(1.5, -8.5) {Material 2};
\draw[fill=LightBlue] (0, -9) rectangle (3, -10);
\node at(1.5, -9.5) {Material 3};
\node[align=center] at (7.5, -1) {Renderer\\Structure};
\node at(10.5, -3.5) {Object 1};
\draw[fill=Pink] (6, -2) rectangle (9, -5);
\node at(7.5, -3.5) {Material 1};
\node at(10.5, -7) {Object 2};
\draw[fill=PaleGreen] (6, -6) rectangle (9, -8);
\node at(7.5, -7) {Material 2};
\node at(10.5, -9.5) {Object 3};
\draw[fill=LightBlue] (6, -9) rectangle (9, -10);
\node at(7.5, -9.5) {Material 3};
\node[minimum width=3cm,minimum height=2cm] (O1) at (7.5, -3.5) {};
\draw[-{Latex[length=3mm]}, color=FireBrick] (3, -2.5) -- (O1);
\draw[-{Latex[length=3mm]}, color=FireBrick] (3, -6) -- (O1);
\node[minimum width=3cm,minimum height=2cm] (O2) at (7.5, -7) {};
\draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -3.5) -- (O2);
\draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -8.5) -- (O2);
\node[minimum width=3cm,minimum height=2cm] (O3) at (7.5, -9.5) {};
\draw[-{Latex[length=3mm]}, color=RoyalBlue] (3, -9.5) -- (O3);
\node at (1.5, -10.75) {$\vdots$};
\node at (7.5, -10.75) {$\vdots$};
\end{tikzpicture}
\caption{Restructuring of the content on the renderer\label{d3:render-structure}}
\end{figure}


@ -0,0 +1,46 @@
\fresh{}
\section{Worker}
In JavaScript, the only way to achieve parallelism is to use \emph{web workers}.
A web worker is a JavaScript script that runs in the background, on a separate thread, and that can communicate with the main script by sending and receiving messages.
Since our system has many tasks to perform, it is natural to use workers to manage the streaming without impacting the framerate of the renderer.
However, what a worker can do is very limited, since it can access neither the DOM nor the variables of the main script.
Because of this, we must run the renderer in the main script, where it can access the HTML page, and we move all the other tasks (the access client, the control engine, and the segment parsers) to the worker.
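The message-based communication between the main script and the worker can be sketched as follows. The message types and stubbed responses are hypothetical and only illustrate the protocol; in our actual worker, the handlers fetch and decode real MPD and segment data:

```javascript
// Hypothetical sketch of the worker side: the worker receives
// commands from the main script and answers with results.
// The real handlers would fetch and parse the downloaded bytes;
// here they are stubbed so the dispatch logic stands alone.
function handleMessage(message) {
  switch (message.type) {
    case 'loadMpd':
      // In the actual worker: fetch(message.url), then parse the MPD.
      return { type: 'mpdReady', url: message.url };
    case 'loadSegment':
      // In the actual worker: fetch the segment and parse its geometry.
      return { type: 'segmentReady', id: message.id };
    default:
      return { type: 'error', reason: 'unknown message type' };
  }
}

// Worker wiring (browser only):
//   self.onmessage = (event) => self.postMessage(handleMessage(event.data));
// Main-script side:
//   const worker = new Worker('worker.js');
//   worker.onmessage = (event) => { /* feed the media engine */ };
//   worker.postMessage({ type: 'loadMpd', url: 'manifest.mpd' });
```

Because messages are the only channel between the two threads, every piece of parsed geometry must be serialized (or transferred) through `postMessage` before the media engine can use it.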
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node at(0, 1) {Main script};
\draw[->, color=LightGray] (0, 0.5) -- (0, -9.5);
\node at(5, 1) {Worker};
\draw[->, color=LightGray] (5, 0.5) -- (5, -9.5);
\node at(12.5, 1) {Server};
\draw[->, color=LightGray] (12.5, 0.5) -- (12.5, -9.5);
% MPD
\node[color=blue, above] at(2.5, 0.0) {Ask MPD};
\node[color=blue, right] at(5.0, -2.5) {Parse MPD};
\draw[color=blue, ->] (0, 0) -- (5, -0.1) -- (12.5, -1) -- (5, -2) -- (5, -2.9) -- (0, -3);
% Ask segments
\begin{scope}[shift={(0, -3)}]
\node[color=DarkGreen, above] at(2.5, 0.0) {Ask segment};
\node[color=DarkGreen, right] at(5.0, -2.5) {Parse segment};
\draw[color=DarkGreen, ->] (0, 0) -- (5, -0.1) -- (12.5, -1) -- (5, -2) -- (5, -2.9) -- (0, -3);
\end{scope}
% Ask more segments
\begin{scope}[shift={(0, -6)}]
\node[color=red, above] at(2.5, 0.0) {Ask segment};
\node[color=red, right] at(5.0, -2.5) {Parse segment};
\draw[color=red, ->] (0, 0) -- (5, -0.1) -- (12.5, -1) -- (5, -2) -- (5, -2.9) -- (0, -3);
\end{scope}
\end{tikzpicture}
\caption{Distribution of the tasks between the main script and the worker}
\end{figure}

View File

@ -131,68 +131,3 @@ s^{\texttt{GREEDY}}_i= \argmax{s \in \mathcal{S} \backslash \mathcal{B}_i \cap \
\label{d3:greedy}
\end{equation}
\fresh{}
\subsection{Performance}
Another important aspect of our client is performance.
To address the performance bottlenecks described in Section~\ref{i:rendering}, we need to keep the number of objects as low as possible and issue as few \texttt{glDrawArrays} calls as possible, while still keeping objects separated so that frustum culling remains efficient.
In our NVE, there are more than a thousand materials, and they are reused all over the scene.
Creating local objects to benefit from frustum culling is therefore not an option in our case, so we merge all faces that share a material and draw them in a single \texttt{glDrawArrays} call.
To do so, the content, which was previously structured to optimize streaming, is restructured live on the client to optimize rendering, as shown in Figure~\ref{d3:render-structure}.
That way, when geometry is downloaded for a new material, we know how much space to allocate for the buffers, and when geometry is downloaded for an already allocated material, we simply edit the buffer and transmit it to the GPU\@.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node[align=center] at(1.5, -1) {DASH-3D\\Structure};
\node at(-1.5, -3) {\texttt{seg0.obj}};
\draw[fill=LightCoral] (0, -2) rectangle (3, -3);
\node at(1.5, -2.5) {Material 1};
\draw[fill=LightGreen] (0, -3) rectangle (3, -4);
\node at(1.5, -3.5) {Material 2};
\node at(-1.5, -6) {\texttt{seg1.obj}};
\draw[fill=LightCoral] (0, -5) rectangle (3, -7);
\node at(1.5, -6) {Material 1};
\node at(-1.5, -9) {\texttt{seg2.obj}};
\draw[fill=LightGreen] (0, -8) rectangle (3, -9);
\node at(1.5, -8.5) {Material 2};
\draw[fill=Lavender] (0, -9) rectangle (3, -10);
\node at(1.5, -9.5) {Material 3};
\node[align=center] at (7.5, -1) {Renderer\\Structure};
\node at(10.5, -3.5) {Object 1};
\draw[fill=LightCoral] (6, -2) rectangle (9, -5);
\node at(7.5, -3.5) {Material 1};
\node at(10.5, -7) {Object 2};
\draw[fill=LightGreen] (6, -6) rectangle (9, -8);
\node at(7.5, -7) {Material 2};
\node at(10.5, -9.5) {Object 3};
\draw[fill=Lavender] (6, -9) rectangle (9, -10);
\node at(7.5, -9.5) {Material 3};
\node[minimum width=3cm,minimum height=2cm] (O1) at (7.5, -3.5) {};
\draw[-{Latex[length=3mm]}, color=FireBrick] (3, -2.5) -- (O1);
\draw[-{Latex[length=3mm]}, color=FireBrick] (3, -6) -- (O1);
\node[minimum width=3cm,minimum height=2cm] (O2) at (7.5, -7) {};
\draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -3.5) -- (O2);
\draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -8.5) -- (O2);
\node[minimum width=3cm,minimum height=2cm] (O3) at (7.5, -9.5) {};
\draw[-{Latex[length=3mm]}, color=RoyalBlue] (3, -9.5) -- (O3);
\node at (1.5, -10.75) {$\vdots$};
\node at (7.5, -10.75) {$\vdots$};
\end{tikzpicture}
\caption{Restructuring of the content on the renderer\label{d3:render-structure}}
\end{figure}


@ -1,62 +0,0 @@
\section{What is a 3D model?}
Before talking about 3D streaming, we need to define what a 3D model is and how it is rendered.
\subsection{Content of a 3D model}
A 3D model consists of 3D points (called \emph{vertices}), texture coordinates, normals, faces, materials, and textures.
The Wavefront OBJ format is probably the best suited to introduce 3D models, since it describes all of these elements.
A 3D model encoded in the OBJ format typically consists of two files: the materials file (\texttt{.mtl}) and the object file (\texttt{.obj}).
\paragraph{}
The materials file declares all the materials that the object file will reference.
Each material has a name, ambient, diffuse, and specular colors, as well as texture maps.
A simple material file is visible in Listing~\ref{i:mtl}.
\paragraph{}
The object file declares the 3D content of the objects.
It declares vertices, texture coordinates, and normals from coordinates (e.g.\ \texttt{v 1.0 2.0 3.0} for a vertex, \texttt{vt 1.0 2.0} for a texture coordinate, \texttt{vn 1.0 2.0 3.0} for a normal).
These elements are numbered starting from 1.
Faces are declared using the indices of these elements. A face is a polygon with any number of vertices and can be declared in several manners:
\begin{itemize}
\item \texttt{f 1 2 3} defines a triangle face that joins the first, the second and the third vertex declared;
\item \texttt{f 1/1 2/3 3/4} defines a similar triangle, but with texture coordinates: the first texture coordinate is associated with the first vertex, the third with the second vertex, and the fourth with the third vertex;
\item \texttt{f 1//1 2//3 3//4} defines a similar triangle, but using normals instead of texture coordinates;
\item \texttt{f 1/1/1 2/3/3 3/4/4} defines a triangle with both texture coordinates and normals.
\end{itemize}
An object file can include materials from a material file (\texttt{mtllib path.mtl}) and apply them to faces.
A material is applied with the \texttt{usemtl} keyword, followed by the name of the material to use.
The faces declared after a \texttt{usemtl} are rendered using the material in question.
An example of an object file is visible in Listing~\ref{i:obj}.
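To illustrate how these face declarations decompose, a minimal parser for the \texttt{f} lines above could look like this. This is a hypothetical sketch, not our actual parser; a real OBJ parser must also handle negative indices, polygons with more than three vertices, and the other keywords:

```javascript
// Hypothetical sketch: parse an OBJ "f" line into corner records
// of 1-based indices (vertex / texture coordinate / normal),
// with missing components reported as null.
function parseFace(line) {
  const parts = line.trim().split(/\s+/);
  if (parts[0] !== 'f') {
    throw new Error('not a face declaration');
  }
  return parts.slice(1).map((corner) => {
    // "1/1/1" -> [v, vt, vn]; "1//1" leaves vt empty; "1" leaves both.
    const [v, vt, vn] = corner.split('/');
    return {
      vertex: parseInt(v, 10),
      texCoord: vt ? parseInt(vt, 10) : null,
      normal: vn ? parseInt(vn, 10) : null,
    };
  });
}
```

For instance, on the line \texttt{f 1/1/1 2/3/3 3/4/4}, this sketch associates the second vertex with the third texture coordinate and the third normal, exactly as described in the list above.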
\begin{figure}[th]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\lstinputlisting[
language=XML,
caption={An object file describing a cube},
label=i:obj,
]{assets/introduction/cube.obj}
\end{subfigure}\quad%
\begin{subfigure}[b]{0.4\textwidth}
\vspace{-0.5cm}
\lstinputlisting[
language=XML,
caption={A material file describing a material},
label=i:mtl,
emph={%
newmtl,
Ka,
Kd,
Ks,
map_Ka
}
]{assets/introduction/materials.mtl}
\vspace{0.2cm}
\includegraphics[width=\textwidth]{assets/introduction/cube.png}
\caption*{A render of the cube}
\end{subfigure}
\caption{The OBJ representation of a cube and its render\label{i:cube}}
\end{figure}


@ -12,9 +12,13 @@
\input{dash-3d/main}
\resetstyle{}
\input{dash-3d-implementation/main}
\resetstyle{}
\input{system-bookmarks/main}
\resetstyle{}
\backmatter{}
\input{conclusion/main}