Some stuff

Thomas Forgione 2019-10-08 17:26:37 +02:00
parent 3006fd5f98
commit 4c15b06d95
5 changed files with 31 additions and 27 deletions

@@ -43,7 +43,7 @@ This version was compiled on \today{} at \currenttime{}.
\node at (current page.center) {\includegraphics[width=\pagewidth]{assets/background.png}};
\node at (12, -22) [%
inner sep=15pt,
ultra thick,
draw=MidnightBlue,
fill=black,
font=\sffamily\bfseries\Huge,

@@ -1,6 +1,6 @@
\fresh{}
\section{Desktop and mobile interactions}\label{sb:interaction}
\subsection{Desktop interaction}
@@ -41,17 +41,17 @@ In Chapter~\ref{bi} Section~\ref{bi:3d-bookmarks}, we described two 3D widgets t
One of the conclusions of the user study, described in Section~\ref{bi:user-study}, was that the impact of the way we display bookmarks was not significant.
In this work, we chose a slightly different way of representing bookmarks due to some concerns with our original representations:
\begin{itemize}
\item viewport bookmarks are simple, but people who are not computer vision scientists are not familiar with this representation;
\item arrow bookmarks are complex, and need to be regenerated when the camera moves, which can harm the framerate of the rendering.
\end{itemize}
For these reasons, we changed the display to a vertical bar with a 2D sprite showing a pictogram of an eye.
This 2D sprite always faces the camera, so that it never becomes invisible when the camera is beside it.
Screenshots of user interfaces with bookmarks are available in Figures~\ref{sb:desktop} and~\ref{sb:mobile}.
The size of the sprite oscillates over time following a sine function, to help the user distinguish what is part of the scene from what is an extra widget.
Since our scene is static, a user knows that a changing object is not part of the scene, but part of the UI\@.
The other bookmark parameters remain unchanged since Chapter~\ref{bi}: to avoid users losing context, clicking on a bookmark triggers an automatic, smooth camera displacement that ends at the bookmark.
We also display a thumbnail of the bookmark's viewpoint when the mouse hovers over a bookmark.
Such a thumbnail is displayed in Figure~\ref{sb:desktop}.
Note that on mobile, there is no mouse and thus no pointer, so thumbnails are never downloaded nor displayed.
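As a rough illustration of the size oscillation described above, the scale of the sprite can be recomputed from the elapsed time at every frame. The following sketch is not our actual client code: the constants and names are invented for the example.
\begin{lstlisting}[language=Python]
import math

BASE_SCALE = 1.0  # nominal sprite size, in scene units (assumed value)
AMPLITUDE = 0.2   # relative size variation (assumed value)
FREQUENCY = 0.5   # oscillations per second (assumed value)

def sprite_scale(elapsed_seconds):
    # The sine makes the sprite pulse slowly, signalling that it
    # belongs to the UI rather than to the static scene.
    phase = 2.0 * math.pi * FREQUENCY * elapsed_seconds
    return BASE_SCALE * (1.0 + AMPLITUDE * math.sin(phase))
\end{lstlisting}
At every frame, the renderer evaluates this function with the current time, applies the result to the bookmark sprite, and keeps the sprite oriented towards the camera.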
@@ -77,11 +77,10 @@ As such, bookmarks can be used as a way to optimize streaming by downloading seg
More specifically, segment utility as introduced in Section~\ref{d3:utility} is only an approximation of the segment's true contribution to the current viewpoint rendering.
When bookmarks are defined, it is possible to obtain a perfect measure of segment utility by performing an offline rendering at each bookmark's viewpoint.
% Then, by simply counting the number of pixels that are rendered using each segment, we can rank the segments by order of importance in the rendering.
We define $\mathcal{U}^{*}(s, B_i)$ as the true utility of a segment $s$ in a viewpoint defined at bookmark $B_i$.
In order to know the true utility of a segment, we developed Algorithm~\ref{sb:algo-optimal-order}, which sorts segments according to their true utility.
It takes as input the considered viewpoint, the ground truth from this viewpoint and the set of segments to sort.
It starts with an empty model, tries each segment from the set of candidates, and computes the PSNR between the corresponding rendering and the ground truth rendering.
With all those PSNRs, it is able to determine which segment brings the best $\Delta\text{PSNR} / b$, where $b$ is the size of the segment in bytes.
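A minimal sketch of this greedy loop is given below. It assumes a \texttt{render} function producing an image from a viewpoint and a list of loaded segments, and segment objects exposing a \texttt{size} attribute in bytes; both are placeholders for the corresponding pieces of our system, not the actual implementation.
\begin{lstlisting}[language=Python]
import numpy as np

def psnr(image, ground_truth):
    # Classical PSNR between two 8-bit renderings.
    mse = np.mean((image.astype(float) - ground_truth.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def optimal_order(render, viewpoint, ground_truth, candidates):
    # Greedily picks, at each step, the segment bringing the best
    # PSNR improvement per byte, starting from an empty model.
    model, order = [], []
    current = psnr(render(viewpoint, model), ground_truth)
    while candidates:
        best, best_gain = None, None
        for segment in candidates:
            improved = psnr(render(viewpoint, model + [segment]), ground_truth)
            gain = (improved - current) / segment.size  # size in bytes
            if best_gain is None or gain > best_gain:
                best, best_gain = segment, gain
        candidates.remove(best)
        model.append(best)
        order.append(best)
        current = psnr(render(viewpoint, model), ground_truth)
    return order
\end{lstlisting}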
@@ -145,8 +144,8 @@ This order is then saved as a JSON file that a client can download to know wh
\caption{Computation of the optimal order of segments from a bookmark\label{sb:algo-optimal-order}}
\end{algorithm}
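The exported file itself can be very simple; for illustration, it could contain nothing more than the ordered list of segment identifiers, most useful first. The identifiers and file name below are invented for the example, not our actual naming scheme.
\begin{lstlisting}[language=Python]
import json

# Hypothetical export of the computed order for one bookmark: the
# client downloads this file and fetches segments in this order.
order = ["geometry/seg042", "texture/wall03", "geometry/seg017"]
with open("bookmark-1-order.json", "w") as f:
    json.dump(order, f)
\end{lstlisting}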
Sorting all the segments from the model would be an excessively time-consuming computation.
To speed up this algorithm, we only sort the 200 best segments, and we choose these segments among a filtered set of candidates.
To find those candidates, we reuse the ideas developed in Chapter~\ref{bi}.
We render the ``pixel to geometry segment'' and ``pixel to texture'' maps, as shown in Figure~\ref{sb:bookmarks-utility}.
These renderings allow us to know what geometry segment and what texture correspond to each pixel, and filter out useless candidates.
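A sketch of how such maps can be exploited is given below, assuming that each pixel of a map stores the identifier of the geometry segment (or texture) visible at that pixel, with a negative value for the background; this encoding is an assumption made for the example.
\begin{lstlisting}[language=Python]
from collections import Counter

import numpy as np

def visible_ids(id_map):
    # Counts, for each identifier, how many pixels it covers.
    ids, counts = np.unique(id_map[id_map >= 0], return_counts=True)
    return Counter(dict(zip(ids.tolist(), counts.tolist())))

def candidate_set(geometry_map, texture_map, limit=200):
    # Keeps only the segments contributing pixels to the bookmark's
    # viewpoint; identifiers are assumed to share a single namespace.
    counts = visible_ids(geometry_map) + visible_ids(texture_map)
    return [i for i, _ in counts.most_common(limit)]
\end{lstlisting}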
@@ -160,7 +159,7 @@ These renderings allow us to know what geometry segment and what texture corresp
\end{figure}
Figure~\ref{sb:precomputation} shows how this precomputation improves the quality of rendering.
Each curve represents the PSNR one can obtain by downloading a certain amount of data.
The curves show that, for the same amount of downloaded data, the optimized order reaches a higher PSNR than the greedy order, which means that its utility metric is more accurate.
\begin{figure}[th]
\centering
@@ -191,11 +190,11 @@ Each curve represents the PSNR one can obtain by downloading a certain amount of
\copied{}
We now present how to include bookmark information in the Media Presentation Description (MPD) file, to be used in a DASH framework.
Bookmarks are fully defined by a position and a direction; the additional content needed to properly render and use a bookmark in a system consists of two files: a thumbnail of the point of view at the bookmark, and the JSON file giving the optimal segment order for this viewpoint.
For this reason, for each bookmark, we create a separate adaptation set in the MPD\@.
The bookmarked viewpoint information is stored as a supplemental property.
A bookmark's adaptation set only contains one representation, composed of two segments: the thumbnail used as a preview for the desktop interface, and the JSON file.
\begin{figure}[th]
\lstinputlisting[%

@@ -6,4 +6,4 @@ For aesthetics and performance reasons, the UI of the bookmarks has been changed
We developed an algorithm that computes, offline, the optimal order of segments from a certain viewpoint.
We encoded this optimal order in a JSON file, modified our MPD to provide bookmark metadata to the client, and modified our client to benefit from it.
We then conducted a user study with 18 participants, where users had to navigate in scenes with bookmarks under various streaming policies.
The results seem to indicate that users prefer the optimized version of the policy, which is consistent with the PSNR values that we computed.\todo{this conclusion sucks ass}

@@ -11,3 +11,7 @@ In order to do so, we have chosen two angles of attack:
\item we design an interface allowing users to navigate a 3D scene on both desktop and mobile devices;
\item we improve and adapt the bookmarks described in Chapter~\ref{bi} to the context of DASH-3D and to mobile interaction.
\end{itemize}
In Section~\ref{sb:interaction}, we present the different choices we made for the interfaces, and we describe the new mobile interface.
In Section~\ref{sb:bookmarks}, we describe how we embed the bookmarks into our DASH framework, and how we precompute data in order to improve the quality of experience of the user.
In Section~\ref{sb:evaluation}, we describe the user study we conducted, the data we collected, and our analysis of this data.

@@ -1,8 +1,8 @@
\fresh{}
\section{Evaluation}\label{sb:evaluation}
To evaluate the impact of the modifications made in the previous section, we conducted a user study.
Since we had already conducted a user study on desktop devices, we decided to run this experiment exclusively on mobile devices.
\subsection{Setup}
@@ -22,20 +22,21 @@ The experiment consists of 4 phases: a tutorial, a comparison between interfaces
The experiment starts with a tutorial, so the users can get accustomed to our interface.
This tutorial shows the different types of interactions available and explains how to use them.
It then presents bookmarks to the users.
\paragraph{Bookmark}
This part of the experiment consists of two one-minute sessions: the first one has a bare interface, where the only available interactions are translations and rotations of the camera, and the second one augments the interface with bookmarks.
There is no special task other than taking a walk around the model.
This part ends with a small questionnaire where users are asked whether they prefer navigating with bookmarks, and they can use a text field to describe their reasons.
The main objective of this part of the experiment is not really to know whether people like using the bookmarks: we already know from our previous work and from the other parts of this experiment that they do.
This part most importantly acts as an extended tutorial: the first half trains the users with the controls, and the second half trains them with the bookmarks, which is why we decided not to randomize those two halves.
\paragraph{Streaming}
This part of the experiment also consists of two one-minute sessions that use different streaming policies.
One of those sessions uses the default greedy policy described in Section~\ref{d3:dash-adaptation}, and the other one uses the enhanced policy for bookmarks.
The order of those two sessions is randomized to avoid biases.
Since we know that the differences between our streaming policies are subtle, we designed a slightly more complex task in order to highlight them, so that users can notice them.
@@ -56,7 +57,7 @@ The loading policy is the default greedy policy for half of the users, and the e
During these experiments, we need a server and a client.
The server is hosted on an Acer Aspire V3 with an Intel Core i7 3632QM processor.
The user is given a OnePlus 5 that is connected to the server via Wi-Fi.
There is no artificial bandwidth limitation, because the bandwidth is already limited by the Wi-Fi network and by the performance of the mobile device.
\subsection{Results}
@@ -83,7 +84,7 @@ Even though statistical significance is not reached, this result seems to indica
\subsubsection{Quantitative results}
By collecting all the traces during the experiments, we are able to replay the rendering and evaluate the PSNR that users got during their experiment.
Figure~\ref{sb:psnr-second-experiment} shows the average PSNR that users got while navigating during the second experiment (bookmark path).
Below the PSNR curve is a curve that shows how many users were moving to or staying at a bookmark position.
As we can see, the two policies perform in the same way in the beginning, when few users are moving to a bookmark.
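As an indication of how such a replay can be implemented, the sketch below reuses the \texttt{psnr} helper from the earlier sketch; the trace structure, a list of timestamped camera poses with the set of segments downloaded so far, is an assumption and not our actual log format.
\begin{lstlisting}[language=Python]
import numpy as np

def replay_psnr(trace, render, reference):
    # Re-renders each recorded sample with the segments the user had
    # at that time, and compares it to a full reference rendering.
    return [psnr(render(camera, segments), reference(camera))
            for _timestamp, camera, segments in trace]

def average_curve(traces, render, reference):
    # Averages the per-sample PSNR across users; traces are assumed
    # to be aligned on the same sampling instants.
    curves = np.array([replay_psnr(t, render, reference) for t in traces])
    return curves.mean(axis=0)
\end{lstlisting}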