\copied{}
\section{Client\label{d3:dash-client}}
In this section, we specify a DASH NVE client that exploits the preparation of the 3D content in an NVE for streaming.

The generated MPD file describes the content organization so that the client gets all the necessary information to make educated decisions and query the 3D content it needs according to the available resources and the current viewpoint.
A camera path generated by a particular user is a set of viewpoints $v(t_i)$ indexed by a continuous time interval $t_i \in [t_1,t_{end}]$.
\fresh{}
All DASH clients are built from the same basic building blocks, as shown in Figure~\ref{d3:dash-scheme}:
\begin{itemize}
\item the \emph{access client}, which is the module that makes requests and receives responses;
\item the \emph{segment parsers}, which decode the data downloaded by the access client, whether materials, geometry, or textures;
\item the \emph{control engine}, which analyses the bandwidth to dynamically adapt to it;
\item the \emph{media engine}, which renders the multimedia content and the user interface to the screen.
\end{itemize}
\begin{figure}[ht]
\centering
\begin{tikzpicture}

% Server
\draw[rounded corners=5pt,fill=Pink] (-10, 0) rectangle (-3, 7.5);
\node at (-9, 7) {Server};

% Segments
\begin{scope}[shift={(0.5,0.5)}]
\foreach \x in {0,...,3}
{
\draw [fill=Bisque](\x/2-7.5, 1.5-\x/2) rectangle (\x/2-5.5, 6-\x/2);
\node at (\x/2-6.5, 5.5-\x/2) {\fcolorbox{black}{LightBlue}{Segment}};
\node at (\x/2-6.5, 4.75-\x/2) {\fcolorbox{black}{LightBlue}{Segment}};
\draw [fill=LightBlue] (\x/2-6.5, 3.825-\x/2) circle (2pt) {};
\draw [fill=LightBlue] (\x/2-6.5, 3.325 -\x/2) circle (2pt) {};
\draw [fill=LightBlue] (\x/2-6.5, 2.825 -\x/2) circle (2pt) {};
\node at (\x/2-6.5, 2-\x/2) {\fcolorbox{black}{LightBlue}{Segment}};
}
\end{scope}

% MPD
\draw[fill=LightBlue] (-9.5, 6.5) rectangle (-7.5, 0.5);
\node at(-8.5, 3.5) {MPD};

% Client
\draw[rounded corners=5pt, fill=LemonChiffon] (-2, 0) rectangle (3, 7.5);
\node at (-0.5, 7) {DASH client};

% Access client
\draw[fill=PaleGreen] (-1.5, 0.5) rectangle (2.5, 1.5);
\node at (0.5, 1) {Access Client};

% Media engine
\draw[fill=PaleGreen] (-1.5, 5.5) rectangle (2.5, 6.5);
\node at (0.5, 6) {Media Engine};

% Control engine
\draw[fill=PaleGreen] (-1.5, 2) rectangle (0.25, 5);
\node[align=center] at (-0.625, 3.5) {Control \\ Engine};

% Segment parser
\draw[fill=PaleGreen] (0.75, 2) rectangle (2.5, 5);
\node[align=center] at (1.625, 3.5) {Segment \\ Parser};

% Access client to server
\draw[double arrow=5pt colored by RoyalBlue and white] (-3.25, 1.0) -- (-1.0, 1.0);

% Access client to control engine
\draw[double ended double arrow=5pt colored by RoyalBlue and white] (-0.625, 1.25) -- (-0.625, 2.5);

% Access client to segment parser
\draw[double arrow=5pt colored by RoyalBlue and white] (1.625, 1.25) -- (1.625, 2.5);

% Segment parser to media engine
\draw[double arrow=5pt colored by RoyalBlue and white] (1.625, 4.5) -- (1.625, 5.75);

\end{tikzpicture}
\caption{DASH client-server architecture\label{d3:dash-scheme}}
\end{figure}
\copied{}
The DASH client first downloads the MPD file to get the material (.mtl) file containing information about all the geometry and textures available for the entire 3D model.
At time instance $t_i$, the DASH client decides to download the appropriate segments containing the geometry and the texture to generate the viewpoint $v(t_{i+1})$ for the time instance $t_{i+1}$.

Starting from $t_1$, the camera continuously follows a camera path $C=\{v(t_i), t_i \in [t_1,t_{end}]\}$, along which downloading opportunities are strategically exploited to sequentially query the most useful segments.
\subsection{Segment Utility\label{d3:utility}}
Unlike video streaming, where the bitrate of each segment correlates with the quality of the video received, for 3D content, the size (in bytes) of the content does not necessarily correlate well with its contribution to visual quality.
A large polygon with huge visual impact takes the same number of bytes as a tiny polygon.
Further, the visual impact is \textit{view dependent} --- a large object that is far away or out of view does not contribute to the visual quality as much as a smaller object that is closer to the user.
As such, it is important for a DASH-based NVE client to estimate the usefulness of a given segment before downloading it, so that it can make good decisions about what to download.
We call this usefulness the \textit{utility} of the segment.
The utility is a function of a segment, either geometry or texture, and of the current viewpoint (camera location, view angle, and look-at point); it is therefore dynamically computed online by the client from parameters in the MPD file.
\subsubsection{Offline parameters}
Let us first detail all the parameters available from the offline/static preparation of the 3D NVE\@.
These parameters are stored in the MPD file.
First, for each geometry segment $s^G$, there is a predetermined 3D area $\mathcal{A}_{3D}(s^G)$, equal to the sum of all triangle areas in this segment (in 3D); it is computed as the segments are created.
Note that the texture segments carry similar information, but computed at \textit{navigation time} $t_i$.
The second piece of information stored in the MPD, for all segments (geometry and texture), is the size of the segment (in kB).
Indeed, geometry segments contain close to the same number of faces, so their size is almost uniform.
Texture segments are usually much smaller than geometry segments, but their sizes vary a lot, since between two successive resolutions the number of pixels is divided by 4.

Finally, for each texture segment $s^{T}$, the MPD stores the \textit{MSE} (mean square error) of the image at this resolution, relative to the highest resolution (by default, triangles are filled with their average color).
Offline parameters are stored in the MPD as shown in Listing~\ref{d3:mpd}.
\subsubsection{Online parameters}
In addition to the offline parameters stored in the MPD file for each segment, view-dependent parameters are computed at navigation time.
First, a measure of 3D area is computed for texture segments.
As a texture maps onto a set of triangles, we account for the 3D area of all these triangles.
We could consider such a measure offline (attached to the adaptation set containing the texture), but we prefer to account only for the triangles that have already been downloaded by the client.
We denote the set of triangles colored by a texture $T$ by $\Delta(s^T)=\Delta(T)$ (it depends only on $T$ and is the same for any representation/segment $s^T$ in this texture adaptation set).
At each time $t_i$, a subset of $\Delta(T)$ has been downloaded; we denote it $\Delta(T, t_i)$.

Moreover, each geometry segment belongs to a geometry adaptation set $AS^G$ whose bounding box coordinates are stored in the MPD\@.
Given the coordinates of the bounding box $\mathcal{BB}(AS^G)$ and the viewpoint $v(t_i)$ at time $t_i$, the client computes the distance $\mathcal{D}(v(t_i),AS^G)$ to the bounding box $\mathcal{BB}(AS^G)$, defined as the distance from the center of $\mathcal{BB}(AS^G)$ to the principal point of the camera, given in $v(t_i)$.
\subsubsection{Utility for geometry segments}
We now have all the parameters needed to derive a utility measure for a geometry segment.
The utility for texture segments follows from the geometric utility.

The utility of a geometry segment $s^G$ for a viewpoint $v(t_i)$ is:
\begin{equation*}
\mathcal{U} \Big(s^G,v(t_i) \Big) = \frac{\mathcal{A}_{3D}(s^G)}{\mathcal{D}{\left(v{(t_i)},AS^G\right)}^2}
\end{equation*}
where $AS^G$ is the adaptation set containing $s^G$.

Basically, the utility of a segment is proportional to the area that its faces cover, and inversely proportional to the square of the distance between the camera and the center of the bounding box of the adaptation set containing the segment.
That way, we favor segments with big faces that are close to the camera.
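As an illustration, here is a minimal JavaScript sketch of this computation, assuming the MPD parser exposes the precomputed area $\mathcal{A}_{3D}$ of each segment and the bounding box center of its adaptation set (field names are hypothetical):
\begin{verbatim}
// Utility of a geometry segment: A_3D(s^G) / D(v(t_i), AS^G)^2.
// `segment.area3D` and `adaptationSet.bbCenter` are assumed to be
// filled in by the MPD parser; names are illustrative.
function geometryUtility(segment, adaptationSet, camera) {
    const c = adaptationSet.bbCenter;     // center of BB(AS^G)
    const dx = c.x - camera.position.x;   // vector from the camera
    const dy = c.y - camera.position.y;   // principal point to the
    const dz = c.z - camera.position.z;   // bounding box center
    const squaredDistance = dx * dx + dy * dy + dz * dz;
    return segment.area3D / squaredDistance;
}
\end{verbatim}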
\subsubsection{Utility for texture segments}
For a texture $T$ stored in a segment $s^T$, the triangles in $\Delta(T)$ are scattered across arbitrary geometry segments, that is, they do not have spatial coherence.
Thus, for each downloaded geometry segment $s_k^G$, where $k$ ranges over the set $K$ of geometry segments downloaded at time $t_i$, we collect the triangles of $\Delta(T, t_i)$ contained in $s^G_k$ and compute the ratio of $\mathcal{A}_{3D}(s_k^G)$ covered by these triangles.
So, we define the utility:
\begin{equation*}
\mathcal{U}\Big( s^T,v(t_i) \Big)
= psnr(s^T) \sum_{k\in K}\frac{\mathcal{A}_{3D}( s_k^G\cap \Delta(T,t_i))}{\mathcal{A}_{3D}(s_k^G)} \mathcal{U}\Big( s_k^G,v(t_i) \Big)
\end{equation*}
where we sum over all geometry segments received before time $t_i$ that intersect $\Delta(T,t_i)$ and whose adaptation set is in the frustum.
This formula defines the utility of a texture segment as a linear combination of the utilities of the geometry segments that use this texture, weighted by the proportion of their area covered by the texture.
We compute the PSNR from the MSE stored in the MPD (for 8-bit channels, $psnr(s^T) = 10\log_{10}\left(255^2/MSE(s^T)\right)$) and denote it $psnr(s^T)$.
We do this to acknowledge the fact that a texture at a greater resolution has a higher utility than the same texture at a lower resolution.
The equivalent term for geometry is 1 (and does not appear).
Having defined a utility on both geometry and texture segments, the client uses it next for its streaming strategy.
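The texture utility can be sketched the same way, under the assumption that the client keeps, for each geometry segment, the 3D area covered by the already downloaded triangles of each texture (\texttt{coveredArea3D} and \texttt{inFrustum} are hypothetical helpers):
\begin{verbatim}
// PSNR from the MSE stored in the MPD, for 8-bit color channels.
function psnrFromMse(mse) {
    return 10 * Math.log10((255 * 255) / mse);
}

// Utility of a texture segment: psnr(s^T) times the utilities of the
// downloaded geometry segments using this texture, weighted by the
// fraction of their 3D area covered by it.
function textureUtility(textureSegment, downloadedGeometry, camera) {
    let sum = 0;
    for (const g of downloadedGeometry) {
        if (!inFrustum(g.adaptationSet, camera)) continue;
        const ratio = g.coveredArea3D(textureSegment.texture) / g.area3D;
        sum += ratio * geometryUtility(g, g.adaptationSet, camera);
    }
    return psnrFromMse(textureSegment.mse) * sum;
}
\end{verbatim}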
\subsection{DASH Adaptation Logic\label{d3:dash-adaptation}}
Along the camera path $C=\{v(t_i)\}$, viewpoints are indexed by a continuous time interval $t_i \in [t_1,t_{end}]$.
In contrast, the DASH adaptation logic proceeds sequentially along a discrete timeline.
The first HTTP request made by the DASH client at time $t_1$ selects the most useful segment $s_1^*$ to download and is followed by subsequent decisions at $t_2, t_3, \dots$.
While selecting $s_i^*$, the $i$-th best segment to request, the adaptation logic arbitrates between geometry, texture, and the available \texttt{representations}, given the current bandwidth, camera dynamics, and the previously described utility scores.
The difference between $t_{i+1}$ and $t_{i}$ is the delivery delay of $s_i^*$.
It varies with the segment size and network conditions.
Algorithm~\ref{d3:next-segment} details how our DASH client makes decisions.
\begin{algorithm}[th]
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}

\SetKw{Continue}{continue}
\SetKwData{Bw}{bw\_estimation}
\SetKwData{Rtt}{rtt\_estimation}
\SetKwData{Segment}{best\_segment}
\SetKwData{Candidates}{candidates}
\SetKwData{AllSegments}{all\_segments}
\SetKwData{DownloadedSegments}{downloaded\_segments}
\SetKwData{Frustum}{frustum}
\SetKwFunction{Argmax}{argmax}
\SetKwFunction{Filter}{filter}
\SetKwFunction{EstimateNetwork}{estimate\_network\_parameters}

\Input{Current index $i$, time $t_i$, viewpoint $v(t_i)$, buffer of already downloaded \texttt{segments} $\mathcal{B}_i$, MPD}
\Output{Next segment $s^{*}_i$ to request, updated buffer $\mathcal{B}_{i+1}$}
(\Bw, \Rtt) $\leftarrow$ \EstimateNetwork{}\;

\Candidates{} $\leftarrow$ \AllSegments\newline\makebox[1cm]{}.\Filter{\Segment{} $\rightarrow$ \Segment{} $\notin$ \DownloadedSegments{} $\land$ \Segment{} $\in$ \Frustum}\;
\Segment{} $\leftarrow$ \Argmax{\Candidates, \Segment{} $\rightarrow$ $\Omega\left(\mathcal{U}(\Segment)\right)$}\;
\caption{Algorithm to identify the next segment to query\label{d3:next-segment}}
\end{algorithm}
The most naive way to sequentially optimize $\mathcal{U}$ is to limit the decision-making to the current viewpoint $v(t_i)$.
In that case, the best segment $s$ to request would be the one maximizing $\mathcal{U}(s, v(t_i))$, simply improving the rendering from the current viewpoint $v(t_i)$.
Due to the transmission delay, however, this segment will only be delivered at a time $t_{i+1}=t_{i+1}(s)$ that depends on the segment size and on network conditions: \begin{equation*} t_{i+1}(s)=t_i+\frac{\mathtt{size}(s)}{\widehat{BW_i}} + \widehat{\tau_i}\label{d3:eq2}\end{equation*}
where $\widehat{BW_i}$ and $\widehat{\tau_i}$ are the current estimates of the bandwidth and the round-trip time.

In consequence, the most useful segment from $v(t_i)$ at decision time $t_i$ might be less useful at delivery time from $v(t_{i+1})$.
A better solution is to download the segment that is expected to be the most useful in the future.
With a temporal horizon $\chi$, we can optimize the cumulative utility $\mathcal{U}$ over $[t_{i+1}(s), t_i+\chi]$:

\begin{equation}
s^*_i= \argmax{s \in \mathcal{S} \backslash \mathcal{B}_i \cap \mathcal{FC} } \int_{t_{i+1}(s)}^{t_i+\chi} \mathcal{U}(s,\hat{v}(t)) dt
\label{d3:smart}
\end{equation}
where $\mathcal{S}$ is the set of all segments, $\mathcal{B}_i$ the buffer of already downloaded segments, and $\mathcal{FC}$ the set of segments whose adaptation sets fall in the viewing frustum.
In our experiments, we typically use $\chi=2s$ and estimate the integral in~(\ref{d3:smart}) by a Riemann sum, where the interval $[t_{i+1}(s), t_i+\chi]$ is divided into 4 subintervals of equal size.
For each subinterval extremity, a first-order predictor $\hat{v}(t)$ linearly estimates the viewpoint based on $v(t_i)$ and a speed estimate (discrete derivative at $t_i$).
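For illustration, the resulting score of a candidate segment could be computed as in the sketch below, where \texttt{predictViewpoint} (the first-order predictor) and \texttt{utility} (the segment utility at a given viewpoint) are hypothetical helpers:
\begin{verbatim}
// Riemann-sum estimate of the integral in the horizon criterion.
// tNext is t_{i+1}(s), estimated from the segment size, bandwidth,
// and round-trip time; chi is the temporal horizon in seconds.
function horizonScore(segment, tI, tNext, chi) {
    const n = 4;                        // number of subintervals
    const dt = (tI + chi - tNext) / n;
    let score = 0;
    for (let k = 1; k <= n; k++) {      // subinterval extremities
        const t = tNext + k * dt;
        score += utility(segment, predictViewpoint(t)) * dt;
    }
    return score;
}
\end{verbatim}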
We also tested an alternative greedy heuristic that selects the segment optimizing the utility variation during download (between $t_i$ and $t_{i+1}$):
\begin{equation}
s^{\texttt{GREEDY}}_i= \argmax{s \in \mathcal{S} \backslash \mathcal{B}_i \cap \mathcal{FC}} \frac{\mathcal{U}\Big(s,\hat{v}(t_{i+1}(s))\Big)}{t_{i+1}(s) - t_i}
\label{d3:greedy}
\end{equation}
\fresh{}
\subsection{JavaScript client\label{d3:js-implementation}}
In order to evaluate our system, we need to collect traces and perform analyses on them.
Since our scene is large, and since the system we are describing allows navigating in a scene while it is being streamed, we developed a web client that implements our utility metrics and policies.
\subsubsection{Media engine}
Of course, in this work, we are concerned with the performance of our system, and we will not be able to use the normal geometries described in Section~\ref{f:geometries}.
However, in our system, changes to the 3D content always take the same form: we only ever add faces and textures to the model.
Therefore, we made a class that derives from \texttt{BufferGeometry} and makes it more convenient for our use case (a minimal sketch follows the list below):
\begin{itemize}
\item It has a constructor that takes the number of faces as a parameter: it allocates all the memory needed for our buffers up front, so we never have to reallocate it, which would be inefficient.
\item It keeps track of the number of faces it is currently holding: it can then avoid rendering faces that have not been filled yet, and knows where to put new faces.
\item It provides a method that adds a face to the geometry.
\item It also keeps track of which part of the buffers has been transmitted to the GPU\@: THREE.js allows us to set the range of the buffer that we want to update, so we are able to update only what is necessary.
\end{itemize}
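The sketch below illustrates these four points; the class name is ours, and depending on the THREE.js version, \texttt{setAttribute} may be named \texttt{addAttribute}:
\begin{verbatim}
// Incremental geometry: all buffers are allocated once, faces are
// appended, and only the modified range is re-uploaded to the GPU.
class StreamableGeometry extends THREE.BufferGeometry {
    constructor(maxFaces) {
        super();
        // 3 vertices per face, 3 floats per vertex.
        this.positions = new THREE.BufferAttribute(
            new Float32Array(maxFaces * 9), 3);
        this.setAttribute('position', this.positions);
        this.faceCount = 0;        // faces currently held
        this.setDrawRange(0, 0);   // do not render unfilled faces
    }

    addFace(v1, v2, v3) {
        const offset = this.faceCount * 9;
        [v1, v2, v3].forEach((v, i) => {
            this.positions.array.set([v.x, v.y, v.z], offset + 3 * i);
        });
        this.faceCount += 1;
        this.setDrawRange(0, this.faceCount * 3);
        // Upload only the new slice of the buffer to the GPU.
        this.positions.updateRange = { offset: offset, count: 9 };
        this.positions.needsUpdate = true;
    }
}
\end{verbatim}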
\paragraph{Our 3D model class.\label{d3:model-class}}
As said in the previous subsections, a geometry and a material are bound together in a mesh.
This means that we are forced to have as many meshes as there are materials in our model.
To make this easy to manage, we made a \textbf{Model} class that holds everything we need.
We can add vertices, faces, and materials to this model, and it internally deals with the corresponding geometries, materials, and meshes.
In order to avoid having many meshes that share the same material, which would harm performance, it automatically merges faces that share the same material into the same buffer geometry, as shown in Figure~\ref{d3:render-structure}.
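A minimal sketch of this merging logic, assuming the incremental geometry class above (sizing and names are illustrative):
\begin{verbatim}
// One mesh per material: faces are routed to the buffer geometry
// associated with their material, creating it on first use.
class Model {
    constructor(scene, maxFacesPerMaterial) {
        this.scene = scene;
        this.maxFaces = maxFacesPerMaterial;
        this.meshes = new Map();   // material name -> THREE.Mesh
    }

    addFace(v1, v2, v3, material) {
        let mesh = this.meshes.get(material.name);
        if (mesh === undefined) {
            const geometry = new StreamableGeometry(this.maxFaces);
            mesh = new THREE.Mesh(geometry, material);
            this.meshes.set(material.name, mesh);
            this.scene.add(mesh);
        }
        mesh.geometry.addFace(v1, v2, v3);
    }
}
\end{verbatim}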
\begin{figure}[ht]
\centering

\begin{tikzpicture}

\node[align=center] at(1.5, -1) {DASH-3D\\Structure};

\node at(-1.5, -3) {\texttt{seg0.obj}};
\draw[fill=Pink] (0, -2) rectangle (3, -3);
\node at(1.5, -2.5) {Material 1};
\draw[fill=PaleGreen] (0, -3) rectangle (3, -4);
\node at(1.5, -3.5) {Material 2};

\node at(-1.5, -6) {\texttt{seg1.obj}};
\draw[fill=Pink] (0, -5) rectangle (3, -7);
\node at(1.5, -6) {Material 1};

\node at(-1.5, -9) {\texttt{seg2.obj}};
\draw[fill=PaleGreen] (0, -8) rectangle (3, -9);
\node at(1.5, -8.5) {Material 2};
\draw[fill=LightBlue] (0, -9) rectangle (3, -10);
\node at(1.5, -9.5) {Material 3};

\node[align=center] at (7.5, -1) {Renderer\\Structure};

\node at(10.5, -3.5) {Object 1};
\draw[fill=Pink] (6, -2) rectangle (9, -5);
\node at(7.5, -3.5) {Material 1};

\node at(10.5, -7) {Object 2};
\draw[fill=PaleGreen] (6, -6) rectangle (9, -8);
\node at(7.5, -7) {Material 2};

\node at(10.5, -9.5) {Object 3};
\draw[fill=LightBlue] (6, -9) rectangle (9, -10);
\node at(7.5, -9.5) {Material 3};

\node[minimum width=0.5cm,minimum height=2cm] (O1) at (6.25, -3.5) {};
\draw[-{Latex[length=3mm]}, color=FireBrick] (3, -2.5) -- (O1);
\draw[-{Latex[length=3mm]}, color=FireBrick] (3, -6) -- (O1);

\node[minimum width=0.5cm,minimum height=2cm] (O2) at (6.25, -7) {};
\draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -3.5) -- (O2);
\draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -8.5) -- (O2);

\node[minimum width=3cm,minimum height=2cm] (O3) at (7.5, -9.5) {};
\draw[-{Latex[length=3mm]}, color=RoyalBlue] (3, -9.5) -- (O3);

\node at (1.5, -10.75) {$\vdots$};
\node at (7.5, -10.75) {$\vdots$};
\end{tikzpicture}

\caption{Reordering of the content on the renderer\label{d3:render-structure}}
\end{figure}
\subsubsection{Access client}
To implement our DASH-3D client, we need to implement the access client, which is responsible for deciding what to download and for downloading it.
To do so, we use the strategy pattern, as shown in Figure~\ref{d3:dash-loader}.
We have a base class named \texttt{LoadingPolicy} that contains some attributes and functions to keep track of what has been downloaded, which a derived class can use to make smart decisions, and that exposes a function named \texttt{nextSegment} taking two arguments (see the sketch after this list):
\begin{itemize}
\item the MPD, so that a strategy can know all the metadata of the segments before making its decision;
\item the camera, because the best segment to download depends on the position of the camera.
\end{itemize}
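A minimal sketch of this hierarchy follows; the class and method names are those of the text, while the bodies are illustrative:
\begin{verbatim}
// Base strategy: bookkeeping shared by all policies.
class LoadingPolicy {
    constructor() {
        this.downloadedSegments = new Set();
    }
    nextSegment(mpd, camera) {
        throw new Error('nextSegment must be implemented by a subclass');
    }
}

// Greedy policy: best utility-per-download-time among the candidate
// segments (not yet downloaded and inside the frustum).
class Greedy extends LoadingPolicy {
    nextSegment(mpd, camera) {
        let best = null, bestScore = -Infinity;
        for (const segment of mpd.segments) {
            if (this.downloadedSegments.has(segment.id)) continue;
            if (!inFrustum(segment, camera)) continue;
            const score = utility(segment, camera)
                        / estimatedDownloadTime(segment);
            if (score > bestScore) { bestScore = score; best = segment; }
        }
        return best;
    }
}
\end{verbatim}
Here \texttt{inFrustum}, \texttt{utility}, and \texttt{estimatedDownloadTime} are hypothetical helpers corresponding to the quantities defined in Section~\ref{d3:dash-adaptation}.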
The greedy, greedy predictive, and proposed policies from the previous section are all classes that derive from \texttt{LoadingPolicy}.
The main class responsible for the loading of segments is then the \texttt{DashLoader} class.
It uses \texttt{XMLHttpRequest}s, which are the usual way of making HTTP requests in JavaScript, and calls the corresponding parser on the results of those requests.
The \texttt{DashLoader} class accepts as a parameter a function that is called each time some data has been downloaded and parsed: this data can contain vertices, texture coordinates, normals, materials, or textures, which can all be added to the \texttt{Model} class that we described in Section~\ref{d3:model-class}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.65]
\draw (0, 0) rectangle (5, -2.5);
\draw (0, -1) -- (5, -1);
\node at (2.5, -0.5) {DashClient};
\node[right] at (0, -1.5) {\scriptsize loadNextSegment()};

\draw (5, -1.25) -- (8, -1.25);

\draw (8, 0) rectangle (14, -2.5);
\draw (8, -1) -- (14, -1);
\node at (11, -0.5) {LoadingPolicy};
\node[right] at (8, -1.5) {\scriptsize nextSegment(mpd, camera)};

\begin{scope}[shift={(0, 1.5)}]
\draw (1, -6) rectangle (7, -8.5);
\draw (1, -7) -- (7, -7);
\node at (4, -6.5) {Greedy};
\node[right] at (1, -7.5) {\scriptsize nextSegment(mpd, camera)};

\draw (8, -6) rectangle (14, -8.5);
\draw (8, -7) -- (14, -7);
\node at (11, -6.5) {GreedyPredictive};
\node[right] at (8, -7.5) {\scriptsize nextSegment(mpd, camera)};

\draw (15, -6) rectangle (21, -8.5);
\draw (15, -7) -- (21, -7);
\node at (18, -6.5) {Proposed};
\node[right] at (15, -7.5) {\scriptsize nextSegment(mpd, camera)};

\draw[-{Triangle[open, length=3mm, width=3mm]}] (4, -6) -- (4, -5) -- (11, -5) -- (11, -4);
\draw (11, -6) -- (11, -5) -- (8, -5);
\draw (18, -6) -- (18, -5) -- (8, -5);
\end{scope}
\end{tikzpicture}
\caption{Class diagram of our DASH client\label{d3:dash-loader}}
\end{figure}
\subsubsection{Performance}
In JavaScript, there is no way of doing parallel computing without using \emph{web workers}.
A web worker is a JavaScript script that runs in the background, on a separate thread, and that can communicate with the main script by sending and receiving messages.
Since our system has many tasks to perform, it seems natural to use workers to manage the streaming without impacting the framerate of the renderer.
However, what a worker can do is very limited, since it cannot access the variables of the main script.
Because of this, we are forced to run the renderer in the main script, where it can access the HTML page, and we move all the other tasks to the worker (the access client, the control engine, and the segment parsers); since the main script is the one communicating with the GPU, it still has to update the model with the parsed content it receives from the worker.

Using the worker does not improve the framerate of the system much, but it reduces the latency that occurs when receiving a new segment, which can otherwise be very frustrating: in a single-thread scenario, each time a segment is received, the interface freezes for around half a second.
A sequence diagram of what happens when downloading, parsing, and rendering content is shown in Figure~\ref{d3:sequence}.
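A minimal sketch of the main-script side of this split, assuming a hypothetical worker script \texttt{dash-worker.js} that runs the access client, the control engine, and the segment parsers (\texttt{serializeCamera} and \texttt{model.update} are illustrative):
\begin{verbatim}
// The worker downloads and parses segments in the background.
const worker = new Worker('dash-worker.js');

// Send the camera to the worker so it can compute the utilities.
function requestNextSegment(camera) {
    worker.postMessage({ type: 'nextSegment',
                         camera: serializeCamera(camera) });
}

// The main script keeps GPU access: parsed content coming back
// from the worker is fed to our Model class between two frames.
worker.onmessage = (event) => {
    const { vertices, faces, materials, textures } = event.data;
    model.update({ vertices, faces, materials, textures });
};
\end{verbatim}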
\begin{figure}[ht]
\centering

\begin{tikzpicture}
\node at(0, 1) {Main script};
\draw[->, color=LightGray] (0, 0.5) -- (0, -17.5);

\node at(2.5, 1) {Worker};
\draw[->, color=LightGray] (2.5, 0.5) -- (2.5, -17.5);

\node at(10, 1) {Server};
\draw[->, color=LightGray] (10, 0.5) -- (10, -17.5);

% MPD
\draw[color=blue] (0, 0) -- (2.5, -0.1) -- (10, -0.5);
\draw[color=blue, fill=PaleLightBlue] (10, -0.5) -- (10, -1.5) -- (2.5, -2) -- (2.5, -1) -- cycle;
\node[color=blue, rotate=5] at (6.25, -1.25) {Download};
\node[color=blue, above] at(1.25, 0.0) {Ask MPD};
\draw[color=blue, fill=LightBlue] (2.375, -2) rectangle (2.625, -2.9);
\node[color=blue, right=0.2cm] at(2.5, -2.45) {Parse MPD};
\draw[color=blue] (2.5, -2.9) -- (0, -3);
\draw[color=blue, fill=LightBlue] (-0.125, -3.0) rectangle(0.125, -3.5);
\node[color=blue, left=0.2cm] at (0.0, -3.25) {Update model};

% Ask segments
\begin{scope}[shift={(0, -3.5)}]
\draw[color=red] (0, 0) -- (2.5, -0.1);
\node[color=red, above] at(1.25, 0.0) {Ask segment};

\draw[color=red, fill=Pink] (2.375, -0.1) rectangle (2.625, -1);
\node[color=red, right=0.2cm] at (2.5, -0.55) {Compute utilities};

\draw[color=red] (2.5, -1) -- (10, -1.5);
\draw[color=red, fill=PalePink] (10, -1.5) -- (10, -2.5) -- (2.5, -3) -- (2.5, -2) -- cycle;
\node[color=red, rotate=5] at (6.25, -2.25) {Download};

\draw[color=red, fill=Pink] (2.375, -3) rectangle (2.625, -3.9);
\node[color=red, right=0.2cm] at(2.5, -3.45) {Parse segment};

\draw[color=red] (2.5, -3.9) -- (0, -4);
\draw[color=red, fill=Pink] (-0.125, -4.0) rectangle(0.125, -4.5);
\node[color=red, left=0.2cm] at (0.0, -4.25) {Update model};
\end{scope}

% Ask more segments
\begin{scope}[shift={(0, -8)}]
\draw[color=DarkGreen] (0, 0) -- (2.5, -0.1);
\node[color=DarkGreen, above] at(1.25, 0.0) {Ask segment};

\draw[color=DarkGreen, fill=PaleGreen] (2.375, -0.1) rectangle (2.625, -1);
\node[color=DarkGreen, right=0.2cm] at (2.5, -0.55) {Compute utilities};

\draw[color=DarkGreen] (2.5, -1) -- (10, -1.5);
\draw[color=DarkGreen, fill=PalePaleGreen] (10, -1.5) -- (10, -2.5) -- (2.5, -3) -- (2.5, -2) -- cycle;
\node[color=DarkGreen, rotate=5] at (6.25, -2.25) {Download};

\draw[color=DarkGreen, fill=PaleGreen] (2.375, -3) rectangle (2.625, -3.9);
\node[color=DarkGreen, right=0.2cm] at(2.5, -3.45) {Parse segment};

\draw[color=DarkGreen] (2.5, -3.9) -- (0, -4);
\draw[color=DarkGreen, fill=PaleGreen] (-0.125, -4.0) rectangle(0.125, -4.5);
\node[color=DarkGreen, left=0.2cm] at (0.0, -4.25) {Update model};
\end{scope}

% Ask even more segments
\begin{scope}[shift={(0, -12.5)}]
\draw[color=purple] (0, 0) -- (2.5, -0.1);
\node[color=purple, above] at(1.25, 0.0) {Ask segment};

\draw[color=purple, fill=Plum] (2.375, -0.1) rectangle (2.625, -1);
\node[color=purple, right=0.2cm] at (2.5, -0.55) {Compute utilities};

\draw[color=purple] (2.5, -1) -- (10, -1.5);
\draw[color=purple, fill=PalePlum] (10, -1.5) -- (10, -2.5) -- (2.5, -3) -- (2.5, -2) -- cycle;
\node[color=purple, rotate=5] at (6.25, -2.25) {Download};

\draw[color=purple, fill=Plum] (2.375, -3) rectangle (2.625, -3.9);
\node[color=purple, right=0.2cm] at(2.5, -3.45) {Parse segment};

\draw[color=purple] (2.5, -3.9) -- (0, -4);
\draw[color=purple, fill=Plum] (-0.125, -4.0) rectangle(0.125, -4.5);
\node[color=purple, left=0.2cm] at (0.0, -4.25) {Update model};
\end{scope}

\foreach \x in {0,...,5}
{
\draw[color=Goldenrod, fill=LemonChiffon] (-0.125, -\x/2) rectangle (0.125, -\x/2-0.5);
}
\node[color=Goldenrod, left=0.2cm] at (0.0, -1.5) {Render};

\foreach \x in {0,...,7}
{
\draw[color=Goldenrod, fill=LemonChiffon] (-0.125, -\x/2-3.5) rectangle (0.125, -\x/2-4);
\draw[color=Goldenrod, fill=LemonChiffon] (-0.125, -\x/2-8) rectangle (0.125, -\x/2-8.5);
\draw[color=Goldenrod, fill=LemonChiffon] (-0.125, -\x/2-12.5) rectangle (0.125, -\x/2-13);
}
\node[color=Goldenrod, left=0.2cm] at (0.0, -5.5) {Render};
\node[color=Goldenrod, left=0.2cm] at (0.0, -10) {Render};
\node[color=Goldenrod, left=0.2cm] at (0.0, -14.5) {Render};

\end{tikzpicture}

\caption{Distribution of the tasks between the main script and the worker\label{d3:sequence}}

\end{figure}
\subsection{Rust client\label{d3i:rust-implementation}}
However, a web client is not sufficient to analyse our streaming policies: many tasks are performed (such as rendering and managing the interaction), and all this overhead pollutes the analysis of our policies.
This is why we also implemented a client in Rust for simulation, so that we can gather precise simulated data.
Our requirements are quite different from the ones we had to deal with in our JavaScript implementation.
In this setup, we want to build a system that is as close as possible to our theoretical concepts.
Therefore, we do not have a full client in Rust (meaning an application to which you would give the URL of an MPD file and that would allow you to navigate in the scene while it is being downloaded).
In order to run simulations, we develop the building blocks of the DASH client separately; the access client and the media engine are totally separated:
\begin{itemize}
\item the \textbf{simulator} takes a user trace as a parameter; it then replays the trace using specific parameters of the access client and outputs a file containing the history of the simulation (what files have been downloaded, and when);
\item the \textbf{renderer} takes the user trace as well as the history generated by the simulator as parameters, and renders the images corresponding to what the user would have seen.
\end{itemize}
When simulating experiments, we run the simulator on many traces that we collected during user studies, and we then run the renderer program on its output to generate the images corresponding to the simulation.
We are then able to compute the PSNR between those frames and the ground-truth frames.
Doing so guarantees that our simulator is not affected by the performance of our renderer.