== Client<d3:dash-client>
In this section, we specify a DASH NVE client which exploits the preparation of the 3D content in an NVE for streaming.
The generated MPD file describes the content organization so that the client gets all the necessary information to make educated decisions and query the 3D content it needs according to the available resources and current viewpoint.
A camera path generated by a particular user is a set of viewpoints $v(t_i)$ indexed by a continuous time parameter $t_i in [t_1,t_"end"]$.
All DASH clients are built from the same basic bricks, as shown in @d3:dash-scheme:
- the _access client_, which is the module that deals with making HTTP requests and receiving responses;
- the _segment parsers_, which decode the data downloaded by the access client, whether it be materials, geometry or textures;
- the _control engine_, which analyses the bandwidth to dynamically adapt to it;
- the _media engine_, which renders the multimedia content and the user interface to the screen.
#figure(
[TODO],
caption: [DASH client-server architecture]
)<d3:dash-scheme>
// \begin{figure}[ht]
// \centering
// \begin{tikzpicture}
//
// % Server
// \draw[rounded corners=5pt,fill=Pink] (-10, 0) rectangle (-3, 7.5);
// \node at (-9, 7) {Server};
//
// % Segments
// \begin{scope}[shift={(0.5,0.5)}]
// \foreach \x in {0,...,3}
// {
// \draw [fill=Bisque](\x/2-7.5, 1.5-\x/2) rectangle (\x/2-5.5, 6-\x/2);
// \node at (\x/2-6.5, 5.5-\x/2) {\fcolorbox{black}{LightBlue}{Segment}};
// \node at (\x/2-6.5, 4.75-\x/2) {\fcolorbox{black}{LightBlue}{Segment}};
// \draw [fill=LightBlue] (\x/2-6.5, 3.825-\x/2) circle (2pt) {};
// \draw [fill=LightBlue] (\x/2-6.5, 3.325 -\x/2) circle (2pt) {};
// \draw [fill=LightBlue] (\x/2-6.5, 2.825 -\x/2) circle (2pt) {};
// \node at (\x/2-6.5, 2-\x/2) {\fcolorbox{black}{LightBlue}{Segment}};
// }
// \end{scope}
//
// % MPD
// \draw[fill=LightBlue] (-9.5, 6.5) rectangle (-7.5, 0.5);
// \node at(-8.5, 3.5) {MPD};
//
// % Client
// \draw[rounded corners=5pt, fill=LemonChiffon] (-2, 0) rectangle (3, 7.5);
// \node at (-0.5, 7) {DASH client};
//
// % Access client
// \draw[fill=PaleGreen] (-1.5, 0.5) rectangle (2.5, 1.5);
// \node at (0.5, 1) {Access Client};
//
// % Media engine
// \draw[fill=PaleGreen] (-1.5, 5.5) rectangle (2.5, 6.5);
// \node at (0.5, 6) {Media Engine};
//
// % Control engine
// \draw[fill=PaleGreen] (-1.5, 2) rectangle (0.25, 5);
// \node[align=center] at (-0.625, 3.5) {Control \\ Engine};
//
// % Segment parser
// \draw[fill=PaleGreen] (0.75, 2) rectangle (2.5, 5);
// \node[align=center] at (1.625, 3.5) {Segment \\ Parser};
//
// % Access client to server
// \draw[double arrow=5pt colored by RoyalBlue and white] (-3.25, 1.0) -- (-1.0, 1.0);
//
// % Access client to control engine
// \draw[double ended double arrow=5pt colored by RoyalBlue and white] (-0.625, 1.25) -- (-0.625, 2.5);
//
// % Acces client to segment parser
// \draw[double arrow=5pt colored by RoyalBlue and white] (1.625, 1.25) -- (1.625, 2.5);
//
// % Segment parser to media engine
// \draw[double arrow=5pt colored by RoyalBlue and white] (1.625, 4.5) -- (1.625, 5.75);
//
// \end{tikzpicture}
// \caption{DASH client-server architecture\label{d3:dash-scheme}}
// \end{figure}
The DASH client first downloads the MPD file to get the material file containing information about all the geometry and textures available for the entire 3D model.
At time instance $t_i$, the DASH client decides to download the appropriate segments containing the geometry and the texture to generate the viewpoint $v(t_(i+1))$ for the time instance $t_(i+1)$.
Starting from $t_1$, the camera continuously follows a camera path $C={v(t_i), t_i in [t_1,t_"end"]}$, along which downloading opportunities are strategically exploited to sequentially query the most useful segments.
=== Segment utility<d3:utility>
Unlike video streaming, where the bitrate of each segment correlates with the quality of the video received, for 3D content, the size (in bytes) of the content does not necessarily correlate well to its contribution to visual quality.
A large polygon with huge visual impact takes the same number of bytes as a tiny polygon.
Further, the visual impact is _view dependent_ --- a large object that is far away or out of view does not contribute to the visual quality as much as a smaller object that is closer to the user.
As such, it is important for a DASH-based NVE client to estimate the usefulness of a given segment to download, so that it can make good decisions about what to download.
We call this usefulness the _utility_ of the segment.
The utility is a function of a segment, either geometry or texture, and the current viewpoint (camera location, view angle, and look-at point), and is therefore dynamically computed online by the client from parameters in the MPD file.
=== Offline parameters
Let us detail first, all parameters available from the offline/static preparation of the 3D NVE.
These parameters are stored in the MPD file.
First, for each geometry segment $s^G$ there is a predetermined 3D area $cal(A)(s^G)$, equal to the sum of all triangle areas in this segment (in 3D); it is computed as the segments are created.
Note that the texture segments have similar information, but computed at _navigation time_ $t_i$.
The second piece of information stored in the MPD, for all segments (geometry and texture), is the size of the segment (in kB).
Finally, for each texture segment $s^T$, the MPD stores its resolution and the _MSE_ (mean square error) of the image relative to the highest resolution (by default, triangles are filled with their average color).
Offline parameters are stored in the MPD as shown in @d3:mpd.
=== Online parameters
In addition to the offline parameters stored in the MPD file for each segment, view-dependent parameters are computed at navigation time.
First, a measure of 3D area is computed for texture segments.
As a texture maps on a set of triangles, we account for the area in 3D of all these triangles.
We could consider such an offline measure (attached to the adaptation set containing the texture), but we prefer to only account for the triangles that have been already downloaded by the client.
We denote by $Delta(s^T)=Delta(T)$ the set of triangles colored by a texture $T$; it depends only on $T$ and is the same for every representation/segment $s^T$ in this texture adaptation set.
At each time $t_i$, a subset of $Delta(T)$ has been downloaded; we denote it $Delta(T, t_i)$.
Moreover, each geometry segment belongs to a geometry adaptation set $"AS"^G$ whose bounding box coordinates are stored in the MPD.
Given the coordinates of the bounding box $cal("BB")("AS"^G)$ and the viewpoint $v(t_i)$ at time $t_i$, the client computes $cal(D)(v(t_i),"AS"^G)$, the distance from the center of $cal("BB")("AS"^G)$ to the principal point of the camera, given in $v(t_i)$.
=== Utility for geometry segments
We now have all parameters to derive a utility measure of a geometry segment.
Utility for texture segments follows from the geometric utility.
The utility of a geometric segment $s^G$ for a viewpoint $v(t_i)$ is:
$ cal(U)(s^G, v(t_i)) = frac(cal(A)(s^G), cal(D)(v(t_i), "AS"^G)^2) $
where $"AS"^G$ is the adaptation set containing $s^G$.
Basically, the utility of a segment is proportional to the area that its faces cover, and inversely proportional to the square of the distance between the camera and the center of the bounding box of the adaptation set containing the segment.
That way, we favor segments with big faces that are close to the camera.
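As a sketch, this computation fits in a few lines of JavaScript. The field names (`area`, `adaptationSet`, `bbox`, `position`) are illustrative, not the actual client's data structures:

```javascript
// Sketch of the geometry utility: the segment's 3D area divided by the
// squared distance from the camera to the center of the bounding box of
// the segment's adaptation set.
function distanceToAdaptationSet(camera, bbox) {
  const dx = (bbox.min.x + bbox.max.x) / 2 - camera.position.x;
  const dy = (bbox.min.y + bbox.max.y) / 2 - camera.position.y;
  const dz = (bbox.min.z + bbox.max.z) / 2 - camera.position.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

function geometryUtility(segment, camera) {
  const d = distanceToAdaptationSet(camera, segment.adaptationSet.bbox);
  return segment.area / (d * d);
}
```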
=== Utility for texture segments
For a texture $T$ stored in a segment $s^T$, the triangles in $Delta(T)$ are stored in arbitrary geometry segments, that is, they do not have spatial coherence.
Thus, denoting by $K$ the set of geometry segments downloaded at time $t_i$, for each downloaded geometry segment $s_k^G$, $k in K$, we collect the triangles of $Delta(T, t_i)$ contained in $s^G_k$, and compute the ratio of $cal(A)(s_k^G)$ covered by these triangles.
So, we define the utility:
$ cal(U)(s^T, v(t_i)) = "psnr"(s^T) sum_(k in K) frac(cal(A)(s_k^G sect Delta(T, t_i)), cal(A)(s_k^G)) cal(U)(s_k^G, v(t_i)) $
where we sum over all geometry segments received before time $t_i$ that intersect $Delta(T,t_i)$ and whose adaptation set is in the frustum.
This formula defines the utility of a texture segment by computing the linear combination of the utility of the geometry segments that use this texture, weighted by the proportion of area covered by the texture in the segment.
We compute the PSNR by using the MSE in the MPD and denote it $"psnr"(s^T)$.
We do this to acknowledge the fact that a texture at a greater resolution has a higher utility than a lower resolution texture.
The equivalent term for geometry is 1 (and does not appear).
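Assuming the per-geometry-segment utilities and covered-area ratios are already available, the texture utility reduces to a weighted sum; a minimal sketch (the `contributions` array shape is illustrative):

```javascript
// Sketch of the texture utility: psnr(s^T) times the sum, over the
// downloaded geometry segments intersecting Delta(T, t_i) and inside the
// frustum, of the covered-area ratio times the geometry utility.
function textureUtility(psnr, contributions) {
  return psnr * contributions.reduce(
    (acc, c) => acc + c.coveredRatio * c.geometryUtility,
    0,
  );
}
```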
Having defined a utility on both geometry and texture segments, the client uses it next for its streaming strategy.
=== DASH adaptation logic<d3:dash-adaptation>
Along the camera path $C={v(t_i)}$, viewpoints are indexed by a continuous time parameter $t_i in [t_1,t_"end"]$.
In contrast, the DASH adaptation logic proceeds sequentially along a discrete timeline.
The first HTTP request made by the DASH client at time $t_1$ selects the most useful segment $s_1^a$ to download and will be followed by subsequent decisions at $t_2, t_3, dots.h$.
While selecting $s_i^a$, the $i^"th"$ best segment to request, the adaptation logic compromises between geometry, texture, and the available representations given the current bandwidth, camera dynamics, and the previously described utility scores.
The difference between $t_(i+1)$ and $t_i$ is the delivery delay of $s_i^a$.
It varies with the segment size and network conditions.
@d3:next-segment details how our DASH client makes decisions.
#figure(
[TODO],
caption: [Algorithm to identify the next segment to query]
)<d3:next-segment>
// \begin{algorithm}[th]
// \SetKwInOut{Input}{input}
// \SetKwInOut{Output}{output}
//
// \SetKw{Continue}{continue}
// \SetKwData{Bw}{bw\_estimation}
// \SetKwData{Rtt}{rtt\_estimation}
// \SetKwData{Segment}{best\_segment}
// \SetKwData{CurrentSegment}{segment}
// \SetKwData{Candidates}{candidates}
// \SetKwData{AllSegments}{all\_segments}
// \SetKwData{DownloadedSegments}{downloaded\_segments}
// \SetKwData{Frustum}{frustum}
// \SetKwFunction{Argmax}{argmax}
// \SetKwFunction{Filter}{filter}
// \SetKwFunction{EstimateNetwork}{estimate\_network\_parameters}
// \SetKwFunction{Append}{append}
//
// \Input{Current index $i$, time $t_i$, viewpoint $v(t_i)$, buffer of already downloaded \texttt{segments} $\mathcal{B}_i$, MPD, utility metric $\mathcal{U}$, streaming policy $\Omega$}
// \Output{Next segment $s^a_i$ to request, updated buffer $\mathcal{B}_{i+1}$}
// \BlankLine{}
// (\Bw, \Rtt) \leftarrow{} \EstimateNetwork{}\;
//
// \BlankLine{}
// \Candidates\leftarrow{} \AllSegments\newline\makebox[1cm]{}.\Filter{$\CurrentSegment\rightarrow\CurrentSegment\notin\DownloadedSegments$}\newline\makebox[1cm]{}.\Filter{$\CurrentSegment\rightarrow\CurrentSegment\in\Frustum$}\;
// \BlankLine{}
// \Segment\leftarrow{} \Argmax{\Candidates, \CurrentSegment\rightarrow{} $\Omega\left(\mathcal{U},\CurrentSegment\right)$}\;
// \DownloadedSegments.\Append{\Segment}\;
// {\caption{Algorithm to identify the next segment to query\label{d3:next-segment}}}
// \end{algorithm}
A naive way to sequentially optimize the utility $cal(U)$ is to limit the decision-making to the current viewpoint $v(t_i)$.
In that case, the best segment $s$ to request would be the one maximizing $cal(U)(s, v(t_i))$ to simply make a better rendering from the current viewpoint $v(t_i)$.
Due to the transmission delay, however, this segment will only be delivered at time $t_(i+1)=t_(i+1)(s)$, depending on the segment size and network conditions:
$ t_(i+1)(s) = t_i + frac("size"(s), hat("BW")_i) + hat(tau)_i $<d3:eq2>
In consequence, the most useful segment from $v(t_i)$ at decision time $t_i$ might be less useful at delivery time from $v(t_(i+1))$.
A better solution is to download a segment that is expected to be the most useful in the future.
With a temporal horizon $chi$, we can optimize the cumulated utility $cal(U)$ over $[t_(i+1)(s), t_i+chi]$:
$ s_i^a = op("argmax", limits: #true)_(s in cal(S) without cal(B)_i sect cal(F C)) integral_(t_(i+1)(s))^(t_i+chi) cal(U)(s, hat(v)(t_i)) dif t $<d3:smart>
In our experiments, we typically use $chi = 2"s"$ and estimate the integral in @d3:smart by a Riemann sum, where the interval $[t_(i+1)(s), t_i+chi]$ is divided into 4 subintervals of equal size.
For each subinterval extremity, a first-order predictor $hat(v)(t_i)$ linearly estimates the viewpoint based on $v(t_i)$ and a speed estimation (discrete derivative at $t_i$).
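This Riemann-sum scoring can be sketched as follows; the `utility` and `predictViewpoint` helpers are hypothetical stand-ins, and the bandwidth and RTT estimates are assumed given:

```javascript
// Sketch of the look-ahead score: estimate the delivery time of the
// segment, then integrate the utility over [t_{i+1}(s), t_i + chi] with a
// Riemann sum over 4 equal subintervals, evaluating the utility at a
// predicted future viewpoint.
function lookAheadScore(segment, ti, chi, bandwidth, rtt, utility, predictViewpoint) {
  const tDelivery = ti + segment.size / bandwidth + rtt; // t_{i+1}(s)
  const horizon = ti + chi;
  if (tDelivery >= horizon) return 0; // would arrive past the horizon
  const dt = (horizon - tDelivery) / 4;
  let sum = 0;
  for (let k = 0; k < 4; k++) {
    const t = tDelivery + k * dt;
    sum += utility(segment, predictViewpoint(t)) * dt;
  }
  return sum;
}
```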
We also tested an alternative greedy heuristic that selects the segment optimizing a utility variation during downloading (between $t_i$ and $t_(i+1)$):
$ s_i^"GREEDY" = op("argmax", limits: #true)_(s in cal(S) without cal(B)_i sect cal(F C)) frac(cal(U)(s, hat(v)(t_(i+1)(s))), t_(i+1)(s) - t_i) $<d3:greedy>
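The greedy heuristic can be sketched in the same spirit (hypothetical `utility` and `predictViewpoint` helpers, assumed bandwidth and RTT estimates):

```javascript
// Sketch of the greedy score: utility of the segment at the predicted
// delivery-time viewpoint, divided by the time it takes to download it.
function greedyScore(segment, ti, bandwidth, rtt, utility, predictViewpoint) {
  const tDelivery = ti + segment.size / bandwidth + rtt; // t_{i+1}(s)
  return utility(segment, predictViewpoint(tDelivery)) / (tDelivery - ti);
}
```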
=== JavaScript client<d3:js-implementation>
In order to be able to evaluate our system, we need to collect traces and perform analyses on them.
Since our scene is large, and since the system we are describing allows navigating in a streaming scene, we developed a JavaScript web client that implements our utility metrics and policies.
==== Media engine
The performance of our system is a key aspect of our work; as such, we cannot use the default geometries described in @f:geometries because of their poor performance, and we instead use buffer geometries.
However, in our system, the way changes happen to the 3D content is always the same: we only add faces and textures to the model.
We therefore implemented a class that derives from `BufferGeometry` for more convenience.
- It has a constructor that takes as parameter the number of faces: it allocates all the memory needed for our buffers so we do not have to reallocate it later (which would be inefficient).
- It keeps track of the number of faces it is currently holding: it can then avoid rendering faces that have not been filled and knows where to add new faces.
- It provides a method to add a new polygon to the geometry.
- It also keeps track of what part of the buffers has been transmitted to the GPU: THREE.js allows us to set the range of the buffer that we want to update, and we are able to update only what is necessary.
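The points above can be sketched without THREE.js specifics. The class and field names here are illustrative; in the actual client the buffers back a `BufferGeometry` and the dirty range maps to the attribute's update range:

```javascript
// Sketch of a preallocated geometry: all memory is reserved up front, a
// face counter tracks how many faces are filled, and a dirty range records
// what still has to be uploaded to the GPU.
class PreallocatedGeometry {
  constructor(maxFaces) {
    // 3 vertices per face, 3 coordinates per vertex
    this.positions = new Float32Array(maxFaces * 9);
    this.faceCount = 0;
    this.dirty = { offset: 0, count: 0 };
  }

  addFace(v1, v2, v3) {
    const offset = this.faceCount * 9;
    this.positions.set([...v1, ...v2, ...v3], offset);
    this.faceCount += 1;
    this.dirty.count += 9; // grow the range pending upload
  }

  // `upload` stands in for the GPU transfer (e.g. setting the update range
  // on a THREE.js buffer attribute and flagging it for re-upload).
  flush(upload) {
    upload(this.positions, this.dirty.offset, this.dirty.count);
    this.dirty = { offset: this.faceCount * 9, count: 0 };
  }
}
```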
==== Our 3D model class<d3:model-class>
As said in the previous subsections, a geometry and a material are bound together in a mesh.
This means that we are forced to have as many meshes as there are materials in our model.
To make this easy to manage, we implemented a `Model` class that holds both geometry and textures.
We can add vertices, faces, and materials to this model, and it internally manages the right geometries, materials and meshes.
In order to avoid having many models that share the same material (which would harm performance), it automatically merges faces that share the same material in the same buffer geometry, as shown in @d3:render-structure.
#figure(
[TODO],
caption: [Reordering of the content on the renderer]
)<d3:render-structure>
// \begin{figure}[ht]
// \centering
//
// \begin{tikzpicture}
//
// \node[align=center] at(1.5, -1) {DASH-3D\\Structure};
//
// \node at(-1.5, -3) {\texttt{seg0.obj}};
// \draw[fill=Pink] (0, -2) rectangle (3, -3);
// \node at(1.5, -2.5) {Material 1};
// \draw[fill=PaleGreen] (0, -3) rectangle (3, -4);
// \node at(1.5, -3.5) {Material 2};
//
// \node at(-1.5, -6) {\texttt{seg1.obj}};
// \draw[fill=Pink] (0, -5) rectangle (3, -7);
// \node at(1.5, -6) {Material 1};
//
// \node at(-1.5, -9) {\texttt{seg2.obj}};
// \draw[fill=PaleGreen] (0, -8) rectangle (3, -9);
// \node at(1.5, -8.5) {Material 2};
// \draw[fill=LightBlue] (0, -9) rectangle (3, -10);
// \node at(1.5, -9.5) {Material 3};
//
// \node[align=center] at (7.5, -1) {Renderer\\Structure};
//
// \node at(10.5, -3.5) {Object 1};
// \draw[fill=Pink] (6, -2) rectangle (9, -5);
// \node at(7.5, -3.5) {Material 1};
//
// \node at(10.5, -7) {Object 2};
// \draw[fill=PaleGreen] (6, -6) rectangle (9, -8);
// \node at(7.5, -7) {Material 2};
//
// \node at(10.5, -9.5) {Object 3};
// \draw[fill=LightBlue] (6, -9) rectangle (9, -10);
// \node at(7.5, -9.5) {Material 3};
//
// \node[minimum width=0.5cm,minimum height=2cm] (O1) at (6.25, -3.5) {};
// \draw[-{Latex[length=3mm]}, color=FireBrick] (3, -2.5) -- (O1);
// \draw[-{Latex[length=3mm]}, color=FireBrick] (3, -6) -- (O1);
//
// \node[minimum width=0.5cm,minimum height=2cm] (O2) at (6.25, -7) {};
// \draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -3.5) -- (O2);
// \draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -8.5) -- (O2);
//
// \node[minimum width=3cm,minimum height=2cm] (O3) at (7.5, -9.5) {};
// \draw[-{Latex[length=3mm]}, color=RoyalBlue] (3, -9.5) -- (O3);
//
// \node at (1.5, -10.75) {$\vdots$};
// \node at (7.5, -10.75) {$\vdots$};
// \end{tikzpicture}
//
// \caption{\label{d3:render-structure}}
// \end{figure}
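The merging policy can be sketched as follows; this is a simplification (real buffers hold vertex data, and the `Model` also manages geometries, materials, and meshes):

```javascript
// Sketch of material-based merging: faces coming from different segments
// but sharing a material end up in the same buffer, so the renderer keeps
// one object per material instead of one per (segment, material) pair.
class Model {
  constructor() {
    this.buffers = new Map(); // material name -> array of faces
  }

  addFace(materialName, face) {
    if (!this.buffers.has(materialName)) {
      this.buffers.set(materialName, []);
    }
    this.buffers.get(materialName).push(face);
  }
}
```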
==== Access client
In order to be able to implement our view-dependent DASH-3D client, we need to implement the access client, which is responsible for deciding what to download and for downloading it.
To do so, we use the strategy pattern illustrated in @d3:dash-loader.
We maintain a base class named `LoadingPolicy` that contains some attributes and functions to keep track of what has been downloaded.
This class exposes a function named `nextSegment` that takes two arguments:
- the MPD, so that a strategy can know all the metadata of the segments before making its decision;
- the camera, because the next best segment depends on the camera position.
The greedy and proposed policies from @d3:dash-adaptation are all classes that derive from `LoadingPolicy`.
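This strategy pattern can be sketched as below, with simplified signatures; the score function is a placeholder for the actual utility-based decision, and the real classes track more state (bandwidth, per-segment metadata):

```javascript
// Base class: shared bookkeeping plus an abstract nextSegment(mpd, camera).
class LoadingPolicy {
  constructor() {
    this.downloaded = new Set(); // ids of already requested segments
  }
  nextSegment(mpd, camera) {
    throw new Error('nextSegment must be implemented by a subclass');
  }
}

// One concrete policy: pick the not-yet-downloaded segment with the best
// score according to a supplied scoring function.
class GreedyPolicy extends LoadingPolicy {
  constructor(score) {
    super();
    this.score = score;
  }
  nextSegment(mpd, camera) {
    let best = null;
    for (const s of mpd.segments) {
      if (this.downloaded.has(s.id)) continue;
      if (best === null || this.score(s, camera) > this.score(best, camera)) {
        best = s;
      }
    }
    if (best !== null) this.downloaded.add(best.id);
    return best;
  }
}
```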
Then, the main class responsible for the loading of segments is the `DashLoader` class.
It uses `XMLHttpRequest`s, which are the usual way of making HTTP requests in JavaScript, and it calls the corresponding parser on the results of those requests.
The `DashLoader` class accepts as parameter a function that will be called each time some data has been downloaded and parsed: this data can contain vertices, texture coordinates, normals, materials or textures, all of which can be added to the `Model` class that we described in @d3:model-class.
#figure(
[TODO],
caption: [Class diagram of our DASH client]
)<d3:dash-loader>
// \begin{figure}[ht]
// \centering
// \begin{tikzpicture}[scale=0.65]
// \draw (0, 0) rectangle (5, -2.5);
// \draw (0, -1) -- (5, -1);
// \node at (2.5, -0.5) {DashLoader};
// \node[right] at (0, -1.5) {\scriptsize loadNextSegment()};
//
// \draw (5, -1.25) -- (8, -1.25);
//
// \draw (8, 0) rectangle (14, -2.5);
// \draw (8, -1) -- (14, -1);
// \node at (11, -0.5) {LoadingPolicy};
// \node[right] at (8, -1.5) {\scriptsize nextSegment(mpd, camera)};
//
// \begin{scope}[shift={(0, 1.5)}]
// \begin{scope}[shift={(3, 0)}]
// \draw (1, -6) rectangle (7, -8.5);
// \draw (1, -7) -- (7, -7);
// \node at (4, -6.5) {Greedy};
// \node[right] at (1, -7.5) {\scriptsize nextSegment(mpd, camera)};
// \end{scope}
//
// \begin{scope}[shift={(-3, 0)}]
// \draw (15, -6) rectangle (21, -8.5);
// \draw (15, -7) -- (21, -7);
// \node at (18, -6.5) {Proposed};
// \node[right] at (15, -7.5) {\scriptsize nextSegment(mpd, camera)};
// \end{scope}
//
// \draw[-{Triangle[open, length=3mm, width=3mm]}] (7, -6) -- (7, -5) -- (11, -5) -- (11, -4);
// \draw (15, -6) -- (15, -5) -- (8, -5);
// \end{scope}
// \end{tikzpicture}
// \caption{\label{d3:dash-loader}}
// \end{figure}
==== Performance
JavaScript requires the use of _web workers_ to perform parallel computing.
A web worker is a script in JavaScript that runs in the background, on a separate thread and that can communicate with the main script by sending and receiving messages.
Since our system has many tasks to perform, it is natural to use workers to manage the streaming without impacting the framerate of the renderer.
However, what a worker can do is very limited, since it cannot access the variables of the main script.
Because of this, we are forced to run the renderer in the main script, where it can access the HTML page, and we move all the other tasks (i.e., the access client, the control engine, and the segment parsers) to the worker.
Since the main script is the only thread communicating with the GPU, it will still have to update the model with the parsed content it receives from the worker.
We do not use web workers to improve the framerate of the system, but rather to reduce the latency that occurs when a new segment is received. In a single-threaded scenario, the interface would freeze for around half a second each time a segment is received, which can be frustrating for the user.
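The message flow between the two threads can be sketched as follows. The message shapes are illustrative; in the browser the worker side lives in its own script and the channel is `postMessage`/`onmessage`, emulated here by plain callbacks so the flow is self-contained:

```javascript
// Worker side: downloading and parsing happen here, off the main thread.
// `download` stands in for the XMLHttpRequest-based access client and
// `parse` for the segment parsers.
function workerHandler(msg, reply, download, parse) {
  if (msg.type === 'askSegment') {
    const raw = download(msg.url);
    reply({ type: 'parsedSegment', data: parse(raw) });
  }
}

// Main side: only the renderer runs here; parsed data coming back from the
// worker is appended to the model before the next frame.
function makeMainScript(model) {
  return {
    onWorkerMessage(msg) {
      if (msg.type === 'parsedSegment') model.push(msg.data);
    },
  };
}
```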
A sequence diagram of what happens when downloading, parsing and rendering content is shown in @d3:sequence.
#figure(
[TODO],
caption: [Repartition of the tasks on the main script and the worker],
)<d3:sequence>
// \begin{figure}[ht]
//
// \centering
//
// \begin{tikzpicture}
// \node at(0, 1) {Main script};
// \draw[->, color=LightGray] (0, 0.5) -- (0, -17.5);
//
// \node at(2.5, 1) {Worker};
// \draw[->, color=LightGray] (2.5, 0.5) -- (2.5, -17.5);
//
// \node at(10, 1) {Server};
// \draw[->, color=LightGray] (10, 0.5) -- (10, -17.5);
//
// % MPD
// \draw[color=blue] (0, 0) -- (2.5, -0.1) -- (10, -0.5);
// \draw[color=blue, fill=PaleLightBlue] (10, -0.5) -- (10, -1.5) -- (2.5, -2) -- (2.5, -1) -- cycle;
// \node[color=blue, rotate=5] at (6.25, -1.25) {Download};
// \node[color=blue, above] at(1.25, 0.0) {Ask MPD};
// \draw[color=blue, fill=LightBlue] (2.375, -2) rectangle (2.625, -2.9);
// \node[color=blue, right=0.2cm] at(2.5, -2.45) {Parse MPD};
// \draw[color=blue] (2.5, -2.9) -- (0, -3);
// \draw[color=blue, fill=LightBlue] (-0.125, -3.0) rectangle(0.125, -3.5);
// \node[color=blue, left=0.2cm] at (0.0, -3.25) {Update model};
//
// % Ask segments
// \begin{scope}[shift={(0, -3.5)}]
// \draw[color=red] (0, 0) -- (2.5, -0.1);
// \node[color=red, above] at(1.25, 0.0) {Ask segment};
//
// \draw[color=red, fill=Pink] (2.375, -0.1) rectangle (2.625, -1);
// \node[color=red, right=0.2cm] at (2.5, -0.55) {Compute utilities};
//
// \draw[color=red] (2.5, -1) -- (10, -1.5);
// \draw[color=red, fill=PalePink] (10, -1.5) -- (10, -2.5) -- (2.5, -3) -- (2.5, -2) -- cycle;
// \node[color=red, rotate=5] at (6.25, -2.25) {Download};
//
// \draw[color=red, fill=Pink] (2.375, -3) rectangle (2.625, -3.9);
// \node[color=red, right=0.2cm] at(2.5, -3.45) {Parse segment};
//
// \draw[color=red] (2.5, -3.9) -- (0, -4);
// \draw[color=red, fill=Pink] (-0.125, -4.0) rectangle(0.125, -4.5);
// \node[color=red, left=0.2cm] at (0.0, -4.25) {Update model};
// \end{scope}
//
// % Ask more segments
// \begin{scope}[shift={(0, -8)}]
// \draw[color=DarkGreen] (0, 0) -- (2.5, -0.1);
// \node[color=DarkGreen, above] at(1.25, 0.0) {Ask segment};
//
// \draw[color=DarkGreen, fill=PaleGreen] (2.375, -0.1) rectangle (2.625, -1);
// \node[color=DarkGreen, right=0.2cm] at (2.5, -0.55) {Compute utilities};
//
// \draw[color=DarkGreen] (2.5, -1) -- (10, -1.5);
// \draw[color=DarkGreen, fill=PalePaleGreen] (10, -1.5) -- (10, -2.5) -- (2.5, -3) -- (2.5, -2) -- cycle;
// \node[color=DarkGreen, rotate=5] at (6.25, -2.25) {Download};
//
// \draw[color=DarkGreen, fill=PaleGreen] (2.375, -3) rectangle (2.625, -3.9);
// \node[color=DarkGreen, right=0.2cm] at(2.5, -3.45) {Parse segment};
//
// \draw[color=DarkGreen] (2.5, -3.9) -- (0, -4);
// \draw[color=DarkGreen, fill=PaleGreen] (-0.125, -4.0) rectangle(0.125, -4.5);
// \node[color=DarkGreen, left=0.2cm] at (0.0, -4.25) {Update model};
// \end{scope}
//
// % Ask even more segments
// \begin{scope}[shift={(0, -12.5)}]
// \draw[color=purple] (0, 0) -- (2.5, -0.1);
// \node[color=purple, above] at(1.25, 0.0) {Ask segment};
//
// \draw[color=purple, fill=Plum] (2.375, -0.1) rectangle (2.625, -1);
// \node[color=purple, right=0.2cm] at (2.5, -0.55) {Compute utilities};
//
// \draw[color=purple] (2.5, -1) -- (10, -1.5);
// \draw[color=purple, fill=PalePlum] (10, -1.5) -- (10, -2.5) -- (2.5, -3) -- (2.5, -2) -- cycle;
// \node[color=purple, rotate=5] at (6.25, -2.25) {Download};
//
// \draw[color=purple, fill=Plum] (2.375, -3) rectangle (2.625, -3.9);
// \node[color=purple, right=0.2cm] at(2.5, -3.45) {Parse segment};
//
// \draw[color=purple] (2.5, -3.9) -- (0, -4);
// \draw[color=purple, fill=Plum] (-0.125, -4.0) rectangle(0.125, -4.5);
// \node[color=purple, left=0.2cm] at (0.0, -4.25) {Update model};
// \end{scope}
//
// \foreach \x in {0,...,5}
// {
// \draw[color=Goldenrod, fill=LemonChiffon] (-0.125, -\x/2) rectangle (0.125, -\x/2-0.5);
// }
// \node[color=Goldenrod, left=0.2cm] at (0.0, -1.5) {Render};
//
// \foreach \x in {0,...,7}
// {
// \draw[color=Goldenrod, fill=LemonChiffon] (-0.125, -\x/2-3.5) rectangle (0.125, -\x/2-4);
// \draw[color=Goldenrod, fill=LemonChiffon] (-0.125, -\x/2-8) rectangle (0.125, -\x/2-8.5);
// \draw[color=Goldenrod, fill=LemonChiffon] (-0.125, -\x/2-12.5) rectangle (0.125, -\x/2-13);
// }
// \node[color=Goldenrod, left=0.2cm] at (0.0, -5.5) {Render};
// \node[color=Goldenrod, left=0.2cm] at (0.0, -10) {Render};
// \node[color=Goldenrod, left=0.2cm] at (0.0, -14.5) {Render};
//
// \end{tikzpicture}
//
// \caption{Repartition of the tasks on the main script and the worker\label{d3:sequence}}
//
// \end{figure}
=== Rust client<d3i:rust-implementation>
However, a web client is not sufficient to analyse our streaming policies: many tasks are performed (such as rendering and managing the interaction), and all this overhead pollutes the analysis of our policies.
This is why we also implemented a client in Rust, for simulation, so we can gather precise simulated data.
Our requirements are quite different from the ones we had to deal with in our JavaScript implementation.
In this setup, we want to build a system that is the closest to our theoretical concepts.
Therefore, we do not have a full client in Rust (meaning an application to which you would give the URL to an MPD file and that would allow you to navigate in the scene while it is being downloaded).
In order to be able to run simulations, we developed the bricks of the DASH client separately; the access client and the media engine are totally isolated:
- the *simulator* takes a user trace as a parameter, it then replays the trace using specific parameters of the access client and outputs a file containing the history of the simulation (which files have been downloaded, and when);
- the *renderer* takes the user trace as well as the history generated by the simulator as parameters, and renders images that correspond to what would have been seen.
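The simulator loop can be sketched as follows. The trace and policy shapes are hypothetical, and the delivery-delay model is the simple size-over-bandwidth one from @d3:dash-adaptation:

```javascript
// Sketch of the simulator: replay a user trace, ask the policy for the next
// segment at each decision time, advance the clock by the delivery delay,
// and record the download history for the renderer to replay later.
function simulate(trace, policy, bandwidth) {
  const history = [];
  const tEnd = trace[trace.length - 1].time;
  let t = trace[0].time;
  while (t < tEnd) {
    const segment = policy.nextSegment(t);
    if (segment === null) break; // nothing left to download
    t += segment.size / bandwidth; // delivery delay advances the clock
    history.push({ time: t, id: segment.id });
  }
  return history;
}
```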
When simulating experiments, we run the simulator on many traces that we collected during user-studies, and we then run the renderer program according to the traces to generate images corresponding to the simulation.
We are then able to compute the PSNR between those frames and the ground-truth frames.
Doing so guarantees that our simulator is not affected by the performance of our renderer.