Removes the update command

This commit is contained in:
Thomas Forgione 2020-01-20 14:38:04 +01:00
parent b829d08e02
commit a10af54bc4
10 changed files with 26 additions and 32 deletions

View File

@ -14,8 +14,5 @@
\newcommand\notsotiny{\@setfontsize\notsotiny\@viiipt\@ixpt}
% Commands for review
\newcommand{\update}[2]{#2}
\newcommand{\cmark}{{\color{DarkGreen}\ding{51}}}
\newcommand{\xmark}{{\color{red}\ding{55}}}

View File

@ -96,8 +96,6 @@ These parameters are stored in the MPD file.
First, for each geometry segment $s^G$ there is a predetermined 3D area $\mathcal{A}_{3D}(s^G)$, equal to the sum of all triangle areas in this segment (in 3D); it is computed as the segments are created.
Note that the texture segments have similar information, but computed at \textit{navigation time} $t_i$.
The second piece of information stored in the MPD, for all segments (geometry and texture alike), is the size of the segment (in kB).
\update{Indeed, geometry segments have a similar number of faces; their size is almost uniform.
For texture segments, the size is usually much smaller than the geometry segments but also varies a lot, as between two successive resolutions the number of pixels is divided by 4.}{}
Finally, for each texture segment $s^{T}$, the MPD stores the resolution of the image and its \textit{MSE} (mean square error) relative to the highest resolution (by default, triangles are filled with their average color).
Offline parameters are stored in the MPD as shown in Listing~\ref{d3:mpd}.
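As a rough illustration, the per-segment metadata described above could be represented on the client as follows once parsed; the field names are hypothetical, and the actual MPD attributes are those shown in Listing~\ref{d3:mpd}.
// Hypothetical client-side view of the offline parameters stored in the MPD.
interface GeometrySegmentInfo {
    sizeKB: number;   // size of the segment, in kB
    area3D: number;   // predetermined 3D area: sum of the triangle areas of the segment
}

interface TextureSegmentInfo {
    sizeKB: number;      // size of the segment, in kB
    resolution: number;  // resolution level of the image
    mse: number;         // mean square error relative to the highest resolution
}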
@ -343,13 +341,13 @@ The \texttt{DashLoader} class accepts as parameter a function that will be calle
\subsubsection{Performance}
\update{In JavaScript, there is no way of doing parallel computing without using \emph{web workers}.}{Javascript requires the use of \emph{web workers} to perform parallel computing.}
JavaScript requires the use of \emph{web workers} to perform parallel computing.
A web worker is a JavaScript script that runs in the background, on a separate thread, and can communicate with the main script by sending and receiving messages.
Since our system has many tasks to perform, it is natural to use workers to manage the streaming without impacting the framerate of the renderer.
However, what a worker can do is very limited, since it cannot access the variables of the main script.
Because of this, we are forced to run the renderer on the main script, where it can access the HTML page, and we move all the other tasks (i.e.\ the access client, the control engine and the segment parsers) to the worker.
Since the main script is the only thread communicating with the GPU, it will still have to update the model with the parsed content it receives from the worker.
\update{Using a worker does not so much improve the framerate of the system, but it reduces}{We do not use web workers to improve the framerate of the system, but to reduce} the latency that occurs when receiving a new segment, which can be frustrating since in a single thread scenario, each time a segment is received, the interface freezes for around half a second.
We do not use web workers to improve the framerate of the system, but to reduce the latency that occurs when receiving a new segment, which can be frustrating since, in a single-thread scenario, each time a segment is received, the interface freezes for around half a second.
A sequence diagram of what happens when downloading, parsing and rendering content is shown in Figure~\ref{d3:sequence}.
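A minimal sketch of this thread split is given below; \texttt{parseSegment}, \texttt{updateModel} and \texttt{nextSegmentUrl} are hypothetical names, and the real access client, control engine and parsers are of course more involved.
// worker.js (sketch): downloads and parses segments off the main thread.
self.onmessage = async (event) => {
    const response = await fetch(event.data.segmentUrl); // access client: download the segment
    const buffer = await response.arrayBuffer();
    const parsed = parseSegment(buffer);                  // hypothetical segment parser
    self.postMessage(parsed);                             // send the parsed content back to the main script
};

// main.js (sketch): the main script keeps the renderer and only uploads parsed data to the GPU.
const worker = new Worker('worker.js');
worker.onmessage = (event) => updateModel(event.data);    // hypothetical scene update
worker.postMessage({ segmentUrl: nextSegmentUrl });       // ask the worker for the next segment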
\begin{figure}[ht]

View File

@ -198,7 +198,7 @@ In the first 30 sec, since there are relatively few 3D contents downloaded, maki
Table~\ref{d3:percentages} shows the distribution of texture resolutions that are downloaded by greedy and our Proposed scheme, at different bandwidths.
Resolution 5 is the highest and 1 is the lowest.
The table shows a weakness of the greedy policy: \update{as the bandwidth increases, the distribution of downloaded textures resolution stays more or less the same.}{the distributioon of downloaded textures does not adapt to the bandwidth.}
The table shows a weakness of the greedy policy: the distribution of downloaded textures does not adapt to the bandwidth.
In contrast, our proposed streaming policy adapts to an increasing bandwidth by downloading higher resolution textures (13.9\% at 10 Mbps, vs. 0.3\% at 2.5 Mbps).
In fact, an interesting feature of our proposed streaming policy is that it adapts the geometry-texture compromise to the bandwidth. The textures represent 57.3\% of the total amount of downloaded bytes at 2.5 Mbps, and 70.2\% at 10 Mbps.
In other words, our system tends to favor geometry segments when the bandwidth is low, and favor texture segments when the bandwidth increases.

View File

@ -1,4 +1,4 @@
A 3D streaming system is a system that \update{dynamically}{progressively} collects 3D data.
A 3D streaming system is a system that progressively collects 3D data.
The previous chapter intentionally remained vague about what \emph{3D data} actually is.
This chapter presents in detail the 3D data we consider and how it is rendered.
We also give insights about interaction and streaming by comparing the 3D case to the video one.
@ -17,13 +17,13 @@ Such a model can typically contain the following:
\item \textbf{Normals}, which are 3D vectors that can give information about light behaviour on a face.
\end{itemize}
The Wavefront OBJ is \update{one of the most popular}{a} format that describes all these elements in text format.
The Wavefront OBJ is a format that describes all these elements in text format.
A 3D model encoded in the OBJ format typically consists of two files: the materials file (\texttt{.mtl}) and the object file (\texttt{.obj}).
\paragraph{}
The materials file declares all the materials that the object file will reference.
A material consists in name, and other photometric properties such as ambient, diffuse and specular colors, as well as texture maps, \update{}{which are images that are painted on faces}.
Each face corresponds to a material \update{and a renderer can use the material's information to render the faces}{}.
A material consists of a name and other photometric properties, such as ambient, diffuse and specular colors, as well as texture maps, which are images that are painted on faces.
Each face corresponds to a material.
A simple material file is visible on Listing~\ref{i:mtl}.
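For illustration, a hypothetical excerpt of such a pair of files could look as follows; the values are arbitrary, and the listings referenced in this chapter show the actual files used.
# cube.mtl (hypothetical): declares a single material named "red"
newmtl red
Kd 1.0 0.0 0.0          # diffuse color
map_Kd red-texture.png  # texture map

# cube.obj (hypothetical): references the materials file and the material
mtllib cube.mtl
v  0.0 0.0 0.0          # vertex position
v  1.0 0.0 0.0
v  0.0 1.0 0.0
vt 0.0 0.0              # texture coordinate
vn 0.0 0.0 1.0          # normal
usemtl red              # faces below use the "red" material
f 1/1/1 2/1/1 3/1/1     # face: vertex/texture/normal indices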
\paragraph{}

View File

@ -11,11 +11,11 @@ When it comes to 3D streaming systems, we need two kinds of software.
\subsection{JavaScript}
\paragraph{THREE.js.}
On the web browser, \update{the best way to perform 3D rendering is to use WebGL}{it is now possible to perform 3D rendering by using WebGL}.
In the web browser, it is now possible to perform 3D rendering using WebGL\@.
However, WebGL is very low level, and writing code with it can be painful, even to render a simple triangle.
For example, \href{https://www.tutorialspoint.com/webgl/webgl_drawing_a_triangle.htm}{this tutorial}'s code contains 121 lines of JavaScript, 46 of which are actual code (not comments or empty lines), to render a simple, non-textured triangle.
For this reason, it seems unreasonable to build a system like the one we are describing in raw WebGL\@.
There are many libraires that wrap WebGL code and that help people building 3D interfaces, and \href{https://threejs.org}{THREE.js} \update{is probably one of the most popular}{is a very popular one (56617 stars on github, making it the 35th most starred repository on GitHub as of November 26th, 2019\footnote{\url{https://web.archive.org/web/20191126151645/https://gitstar-ranking.com/mrdoob/three.js}})}.
There are many libraries that wrap WebGL code and help people build 3D interfaces, and \href{https://threejs.org}{THREE.js} is a very popular one (56617 stars on GitHub, making it the 35th most starred repository on GitHub as of November 26th, 2019\footnote{\url{https://web.archive.org/web/20191126151645/https://gitstar-ranking.com/mrdoob/three.js}}).
THREE.js acts as a 3D engine built on WebGL\@.
It provides classes to deal with everything we need:
\begin{itemize}
@ -38,7 +38,7 @@ A snippet of the basic usage of these classes is given in Listing~\ref{f:three-h
\paragraph{Geometries.\label{f:geometries}}
Geometries are the classes that hold the vertices, texture coordinates, normals and faces.
\update{There are two most important geometry classes in THREE.js:}{THREE.js proposes two classes for handling geometries:}
THREE.js proposes two classes for handling geometries:
\begin{itemize}
\item the \textbf{Geometry} class, which is made to be developer friendly and allows easy editing but can suffer from performance issues;
\item the \textbf{BufferGeometry} class, which is harder to use for a developer, but allows better performance since the developer controls how data is transmitted to the GPU\@.
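As a short sketch of the difference (the vertex data is arbitrary), filling a \textbf{BufferGeometry} amounts to handing flat typed arrays to THREE.js, which maps them directly to GPU buffers:
// Sketch: a single triangle as a BufferGeometry (positions as a flat typed array).
// Note: older THREE.js releases use addAttribute instead of setAttribute.
const geometry = new THREE.BufferGeometry();
const positions = new Float32Array([
    0, 0, 0,   // vertex 1
    1, 0, 0,   // vertex 2
    0, 1, 0,   // vertex 3
]);
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial());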
@ -120,9 +120,9 @@ It is probably for those reasons that Rust is the \emph{most loved programming l
\subsubsection{Tooling}
\update{Even better}{Moreover}, Rust comes with \update{great tooling}{many programs that help developers}.
Moreover, Rust comes with many programs that help developers.
\begin{itemize}
\item \href{https://github.com/rust-lang/rust}{\textbf{\texttt{rustc}}} is the Rust compiler. It is comfortable due to \update{the nice error messages it displays}{the clarity and precise explanations of its error messages}.
\item \href{https://github.com/rust-lang/rust}{\textbf{\texttt{rustc}}} is the Rust compiler. It is comfortable to use thanks to the clarity and precision of its error messages.
\item \href{https://github.com/rust-lang/cargo}{\textbf{\texttt{cargo}}} is Rust's official project and package manager. It manages compilation, dependencies, documentation, tests, etc.
\item \href{https://github.com/racer-rust/racer}{\textbf{\texttt{racer}}}, \href{https://github.com/rust-lang/rls}{\textbf{\texttt{rls}} (Rust Language Server)} and \href{https://github.com/rust-analyzer/rust-analyzer}{\textbf{\texttt{rust-analyzer}}} are tools that recompile code automatically to display errors in code editors, and that provide semantic code completion.
\item \href{https://github.com/rust-lang/rustfmt}{\textbf{\texttt{rustfmt}}} auto formats code.

View File

@ -16,7 +16,7 @@ One of the main differences between video and 3D streaming is the persistence of
In video streaming, only one second of video is required at a time.
Of course, most video streaming services prefetch some future chunks, and keep in cache some previous ones, but a minimal system could work without latency and keep in memory only two chunks: the current one and the next one.
\update{In 3D streaming, each chunk is part of a scene, and already a few problems appear here:}{Already a few problems appear here regarding 3D streaming:}
Already a few problems appear here regarding 3D streaming:
\begin{itemize}
\item depending on the user's field of view, many chunks may be required to perform a single rendering;
\item chunks do not become obsolete the way they do in video: a user navigating in a 3D scene may come back to the same spot after some time, or see the same objects from elsewhere in the scene.
@ -25,16 +25,16 @@ Of course, most video streaming services prefetch some future chunks, and keep i
\subsection{Multiple representations}
All major video streaming platforms support multi-resolution streaming.
This means that a client can choose the \update{resolution}{quality} at which it requests the content.
This means that a client can choose the quality at which it requests the content.
It can be chosen directly by the user or automatically determined by analysing the available resources (size of the screen, download bandwidth, device performance).
\begin{figure}[th]
\centering
\includegraphics[width=\textwidth]{assets/introduction/youtube-multiresolution.png}
\caption{The different \update{resolutions}{qualities} available for a Youtube video}
\caption{The different qualities available for a Youtube video}
\end{figure}
Similarly, recent work in 3D streaming have proposed different ways to progressively stream 3D models, displaying a low \update{resolution}{quality version of the model} to the user without latency, and supporting interaction with the model while details are being downloaded.
Similarly, recent work in 3D streaming has proposed different ways to progressively stream 3D models, displaying a low quality version of the model to the user without latency, and supporting interaction with the model while details are being downloaded.
Such strategies are reviewed in Section~\ref{sote:3d-streaming}.
\subsection{Media types}
@ -45,7 +45,7 @@ In both cases, an algorithm for content streaming has to acknowledge those diffe
In video streaming, most of the data (in terms of bytes) is used for images.
Thus, the most important thing a video streaming system should do is to optimise image streaming.
That is why, on a video on Youtube for example, there may be 6 \update{resolutions}{available qualities} for images (144p, 240p, 320p, 480p, 720p and 1080p) but only 2 \update{resolutions}{qualities} for sound.
That is why, on a YouTube video for example, there may be 6 available qualities for images (144p, 240p, 320p, 480p, 720p and 1080p) but only 2 qualities for sound.
This is one of the main differences between video and 3D streaming: in a 3D scene, geometry and texture sizes are approximately the same, and balancing between those two types of content is a key problem.
\subsection{Interaction}

View File

@ -1,10 +1,10 @@
\section{Open problems\label{i:challenges}}
The objective of our work is to design a system which allows a user to access remote 3D content \update{and that guarantees both good quality of service and good quality of experience}{}.
The objective of our work is to design a system which allows a user to access remote 3D content.
A 3D streaming client has many tasks to accomplish:
\begin{itemize}
\item Decide what part of the \update{model}{content} to download next,
\item Decide what part of the content to download next,
\item Download the next part,
\item Parse the downloaded content,
\item Add the parsed result to the scene,
@ -19,7 +19,7 @@ This opens multiple problems which need to be considered and will be studied in
% Furthermore, for streaming, data needs to be split into chunks that are requested separately, so preparing those chunks in advance can also help the streaming.
Content needs to be prepared before it can be streamed.
The segmentation of the content into chunks is particularly important for streaming since it allows transmitting only a portion of the data to the client.
\update{A partial model consisting in the downloaded content, it}{The downloaded chunks} can be rendered while \update{downloading more chunks}{more chunks are being downloaded}.
The downloaded chunks can be rendered while more chunks are being downloaded.
Content preparation also includes compression.
One of the questions this thesis has to answer is: \emph{what is the best way to prepare 3D content so that a streaming client can progressively download and render the 3D model?}
@ -27,13 +27,13 @@ One of the questions this thesis has to answer is: \emph{what is the best way to
Once our content is prepared and split in chunks, a client needs to determine which chunks should be downloaded first.
A chunk that contains data in the field of view of the user is more relevant than a chunk outside of it; a chunk that is close to the camera is more relevant than a chunk far away from it.
This should also include other contextual parameters, such as the size of a chunk, the bandwidth and the user's behaviour.
\update{The most important questions we have to answer are:}{In order to propose efficient streaming policies, we need to know} \emph{how to estimate a chunk utility, and how to determine which chunks need to be downloaded depending the user's interactions?}
In order to propose efficient streaming policies, we need to know \emph{how to estimate the utility of a chunk, and how to determine which chunks need to be downloaded depending on the user's interactions?}
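As a very rough sketch (the weighting below is arbitrary and is not the utility metric developed in this thesis; all names are hypothetical), a client could rank candidate chunks as follows:
// Sketch: rank candidate chunks by a crude utility estimate.
function utility(chunk, camera, bandwidth) {
    if (!isInFieldOfView(chunk, camera)) return 0;        // hypothetical visibility test
    const distance = distanceToCamera(chunk, camera);     // closer chunks are more relevant
    const downloadTime = chunk.sizeBytes / bandwidth;     // cheaper chunks are more relevant
    return 1 / ((1 + distance) * (1 + downloadTime));
}

candidateChunks.sort((a, b) => utility(b, camera, bandwidth) - utility(a, camera, bandwidth));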
\paragraph{Evaluation.}
In such systems, \update{the}{} two \update{most important}{commonly used} criteria for evaluation are quality of service, and quality of experience.
In such systems, two commonly used criteria for evaluation are quality of service and quality of experience.
The quality of service is a network-centric metric, which considers values such as throughput and measures how well the content is served to the client.
The quality of experience is a user-centric metric: it relies on user perception and can only be measured by asking how users feel about a system.
To be able to know which streaming policies are best, one needs to know \emph{how to compare streaming policies and evaluate the impact of their parameters \update{in terms of}{on the} quality of service \update{}{of the streaming system} and \update{}{on the} quality of experience \update{}{of the final user}?}
To determine which streaming policies are best, one needs to know \emph{how to compare streaming policies and evaluate the impact of their parameters on the quality of service of the streaming system and on the quality of experience of the final user?}
\paragraph{Implementation.}
The objective of our work is to set up a client-server architecture that answers the above problems: content preparation, chunk utility, streaming policies.

View File

@ -2,7 +2,7 @@
During the last years, 3D acquisition and modeling techniques have made tremendous progress.
Recent software uses 2D images from cameras to reconstruct 3D data: for example, \href{https://alicevision.org/\#meshroom}{Meshroom} is free and open source software, with almost \numprint{200000} downloads on \href{https://www.fosshub.com/Meshroom.html}{fosshub}, that uses \emph{structure-from-motion} and \emph{multi-view-stereo} to infer a 3D model.
More and more devices are specifically built to harvest 3D data: \update{some still very expensive and provide precise information such as LIDAR (Light Detection And Ranging, as in RADAR but with light instead of radio waves), while some cheaper devices can obtain coarse data such as the Kinect.}{for example, LIDAR (Light Detection And Ranging) can compute 3D distances by measuring time of flight of light. The recent research interest for autonomous vehicles allowed more companies to develop cheaper LIDARs, which increase the potential for new 3D content creation.}
More and more devices are specifically built to harvest 3D data: for example, LIDAR (Light Detection And Ranging) can compute 3D distances by measuring the time of flight of light. The recent research interest in autonomous vehicles has allowed more companies to develop cheaper LIDARs, which increases the potential for new 3D content creation.
Thanks to these techniques, more and more 3D data become available.
These models have potential for multiple purposes: for example, they can be printed, which can reduce the production cost of some pieces of hardware or enable the creation of new objects, but most uses are based on visualisation.
For example, they can be used for augmented reality, to provide users with feedback that can help workers with complex tasks, but also for fashion (for example, \emph{Fittingbox} is a company that develops software to virtually try on glasses, as in Figure~\ref{i:fittingbox}).

View File

@ -205,4 +205,4 @@ Figure~\ref{bi:triangles-curve} shows a CDF of the percentage of 3D mesh triangl
As expected, the fact that users can browse the scene significantly quicker with bookmarks is reflected in the demand for 3D content.
Users need more triangles more quickly, which either increases the demand on network bandwidth or, if the bandwidth is kept constant, leads to fewer objects being displayed.
In the next section, we introduce experiments based on our user study traces that show how the rendering is affected by the presence of bookmarks and how to improve it.
\update{}{We found no significant correlation between the performance at the task and the age of the users or their skills in videogames.}
We found no significant correlation between the performance at the task and the age of the users or their skills in videogames.

View File

@ -165,8 +165,7 @@ The first part is used to fetch the content from the current viewpoint, using th
The second part is used to prefetch content from the bookmarks, according to their likelihood of being clicked next.
We use the probabilities displayed in Figure~\ref{bi:mat1} to determine the size of each part.
Each bookmark $B$ has a probability $p(B|B_{prev})$ of being clicked next, considering that $B_{prev}$ was the last clicked bookmark.
\update{We assign to each bookmark $p(B|B_{prev})/2$ of the chunk to prefetch the corresponding data.}{%
We assign to each bookmark a certain portion of the chunk to prefetch the corresponding data proportionally to the probability of it being clicked.}
We assign to each bookmark a portion of the chunk, proportional to its probability of being clicked, and use it to prefetch the corresponding data.
We use the \textsf{visible} policy to determine which data should be sent for a bookmark.
We denote this combination as \textsf{V-PP}, for Prefetching based on Prediction using \textsf{visible} policy.
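A rough sketch of this allocation is given below; the helper names are hypothetical, each bookmark receives the $p(B|B_{prev})/2$ share of the chunk mentioned above, and the probabilities are those of Figure~\ref{bi:mat1}.
// Sketch: split each chunk between the current viewpoint and bookmark prefetching,
// giving each bookmark a share proportional to its probability of being clicked next.
function chunkBudgets(chunkSizeBytes, bookmarks, previousBookmark) {
    const fetchBudget = chunkSizeBytes / 2;   // first part: content for the current viewpoint
    const prefetchBudgets = new Map();
    for (const bookmark of bookmarks) {
        // p(B | B_prev): transition probability, hypothetical lookup into Figure bi:mat1
        const p = probability(bookmark, previousBookmark);
        prefetchBudgets.set(bookmark, (chunkSizeBytes / 2) * p);
    }
    return { fetchBudget, prefetchBudgets };
}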