- last edits on chapter 5

This commit is contained in:
acarlier 2019-10-19 16:26:24 +02:00
parent 08f368490a
commit b418aa451b
2 changed files with 67 additions and 60 deletions

\fresh{}
\section{Conclusion}
In this chapter, our objective was to propose a mobile interface for DASH-3D and to reintegrate the interaction aspects that we developed in Chapter~\ref{bi}.
%We have seen that doing so is not trivial, and many improvements have been made.
For aesthetic and performance reasons, the UI of the bookmarks has changed, and new interactions were proposed for free navigation in the 3D scene.
We developed an algorithm that computes, offline, a better order of segments for a given viewpoint than a greedy policy would produce.
We encoded this optimal order in a JSON file, modified our MPD to provide bookmark metadata to the client, and updated our client implementation to benefit from it.
We then conducted a user study with 18 participants, who had to navigate in scenes with bookmarks under various streaming policies.
The results indicate that users prefer the optimized version of the policy, which is consistent with the PSNR values that we computed. They also show that users who experience the optimized policy tend to use the bookmarks more.
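As an illustration, the precomputed order could be serialized along the following lines; the field names, file name, and values here are purely hypothetical, since the actual JSON schema is not specified in this chapter:

```python
import json

# Hypothetical structure for the precomputed per-bookmark segment order.
# Neither the field names nor the file layout come from the text; this is
# only a sketch of the idea of storing an offline-computed download order.
optimal_order = {
    "bookmark_id": 3,
    "viewpoint": {"position": [1.0, 1.6, -2.0], "target": [0.0, 1.0, 0.0]},
    "segments": ["geometry/seg12.obj", "texture/wall_4.png", "geometry/seg3.obj"],
}

with open("bookmark-3-order.json", "w") as f:
    json.dump(optimal_order, f, indent=2)

# A client could then fetch this file (referenced from the MPD metadata)
# and request segments in the stored order when the bookmark is clicked.
with open("bookmark-3-order.json") as f:
    order = json.load(f)
```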


We only proposed this user study to relatively young people to ensure they are used to mobile devices.
\subsection{Results}
\begin{figure}[th]
\centering
]
\addplot table [y=y, x=x]{assets/system-bookmarks/final-results/second-experiment-0.dat};
\addlegendentry{Greedy policy}
\addplot table [y=y, x=x]{assets/system-bookmarks/final-results/second-experiment-1.dat};
\addlegendentry{Optimized policy}
\end{axis}
\caption{Comparison of the PSNR during the second experiment: above, PSNR for the greedy and optimized policies; below, ratio of users clicking on a bookmark.\label{sb:psnr-second-experiment}}
\end{figure}
\subsubsection{Qualitative results and observations}
We were able to draw several qualitative observations while users were interacting. First, people tend to use and enjoy the bookmarks, mostly because they help them navigate in the scene. The few people who stated that they did not want to use bookmarks most often gave one of the following reasons:
\begin{itemize}
\item they are already comfortable enough with the virtual joystick;
\item they find the virtual joystick more fun to use.
\end{itemize}
We also observed that the gyroscope-based interaction to rotate the camera tends to be either used a lot or never used: we will not focus on this particular phenomenon as it is out of the scope of this study, but it would make an interesting Human-Computer Interaction study.
\subsubsection{Quantitative results}
Among the 18 participants of this user study, the answers given at the end of the \textbf{streaming} part of the experiment were as follows: 10 indicated that they preferred the optimized policy, 4 preferred the greedy policy, and 4 did not perceive a difference.
One should note that the difference between the two policies can be described in the following terms. The greedy policy tends to favor the largest geometry segments; as a result, the scene structure tends to appear slightly faster with this method. On the other hand, because it explicitly uses PSNR as an objective function, the optimized policy may download important textures (those that appear large on the screen) before some mid-size geometry segments (for example, those that are far from the camera). Some of the users managed to precisely describe these differences.
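The contrast between the two policies can be sketched as follows; this is not the actual implementation, and the segment names, sizes, and PSNR gains are invented for illustration:

```python
# Illustrative sketch of the two download-ordering policies described above.
segments = [
    {"name": "large-geometry", "size": 900, "psnr_gain": 2.0},
    {"name": "large-texture",  "size": 500, "psnr_gain": 6.5},
    {"name": "far-geometry",   "size": 700, "psnr_gain": 0.5},
]

# Greedy: favor the largest (geometry) segments first, so the scene
# structure appears quickly.
greedy_order = sorted(segments, key=lambda s: -s["size"])

# Optimized: favor segments with the highest precomputed PSNR gain from
# the target viewpoint, e.g. a texture that appears large on screen.
optimized_order = sorted(segments, key=lambda s: -s["psnr_gain"])

print([s["name"] for s in greedy_order])     # large-geometry first
print([s["name"] for s in optimized_order])  # large-texture first
```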
Figure~\ref{sb:psnr-second-experiment} shows the evolution of the PSNR along time during the second experiment (bookmark guided tour), averaged over all users.
Below the PSNR curve is a curve that shows how many users were moving to or staying at a bookmark position at each point in time.
As we can see, the two policies have a similar performance at the beginning when very few users have clicked bookmarks.
This changes after 10 seconds, when most users have started clicking on bookmarks: a performance gap grows and the optimized policy performs better than the greedy policy. Such behavior is expected, as it reflects the fact that the optimized policy makes better decisions in terms of PSNR (as previously shown in Figure~\ref{sb:precomputation}). This probably explains the previous result in which users tend to prefer the optimized policy.
Figure~\ref{sb:psnr-second-experiment-after-click} shows the PSNR evolution after a click on a bookmark, averaged over all users and all clicks on bookmarks.
To compute these curves, we isolated the ten seconds following each click on a bookmark and averaged them all.
These curves isolate the effect of our optimized policy, and show the difference a user can feel when clicking on a bookmark.
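The averaging described above can be sketched as follows, assuming one PSNR sample per second; the trace values and click times are invented for the example, as the actual sampling rate and data are not given here:

```python
def average_after_clicks(psnr, click_times, window=10):
    """Average the per-second PSNR samples over the `window` seconds
    following each click, sample by sample across all clicks."""
    windows = [psnr[t:t + window] for t in click_times if t + window <= len(psnr)]
    if not windows:
        return []
    # zip(*windows) groups the k-th second after every click together.
    return [sum(col) / len(col) for col in zip(*windows)]

# Toy replayed trace: PSNR dips after each bookmark click (t=2 and t=12),
# then recovers as segments arrive.
trace = [20, 21, 23, 26, 28, 29, 30, 30, 31, 31, 31,
         32, 24, 26, 28, 30, 31, 31, 32, 32, 32, 33]
curve = average_after_clicks(trace, click_times=[2, 12])
print(curve)  # ten averaged samples, one per second after a click
```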
Figures~\ref{sb:psnr-third-experiment} and~\ref{sb:psnr-third-experiment-after-click} represent the same curves on the third experiment (free navigation).
On average, the difference in terms of PSNR is less obvious, and both strategies seem to perform the same way, at least during the first 50 seconds of the experiment. The optimized policy performs slightly better than the greedy policy in the end, which can be correlated with a peak in bookmark use occurring around the 50th second.
Figure~\ref{sb:psnr-third-experiment-after-click} also shows an interesting effect: the optimized policy still performs much better after a click on a bookmark, but the two curves converge to the same PSNR value after 9 seconds. This is largely task-dependent: users are encouraged to observe the scene in experiment 2, while they are encouraged to visit as much of the scene as possible in experiment 3. On average, users therefore tend to stay at a bookmarked viewpoint for a shorter time in the third experiment than in the second.
The most interesting observation concerns the last part of the experiment (the free navigation): the average number of clicks on bookmarks is 3 for users with the greedy policy, and 5.33 for users with the optimized policy.
The p-value for the statistical significance of this difference is 0.06, slightly above the usual 0.05 threshold, so we can only cautiously conclude that a policy optimized for bookmarks could lead users to click on bookmarks more.
\begin{table}[th]
\centering
\begin{tabular}{ccccccccccc}
\toprule \textbf{Policy} & \multicolumn{9}{c}{\textbf{Number of clicks}} & \textbf{Average} \\
\midrule Greedy & 4 & 1 & 1 & 1 & 3 & 3 & 1 & 7 & 6 & \textbf{3}\\
Optimized & 3 & 5 & 2 & 5 & 10 & 7 & 6 & 4 & 6 & \textbf{5.33}\\ \bottomrule
\end{tabular}
\caption{Number of clicks on bookmarks in the last experiment}
\label{sb:table-bookmark-clicks}
\end{table}
Table~\ref{sb:table-bookmark-clicks} shows the number of bookmark clicks for each user (note that distinct users did this experiment with the greedy and the optimized policy). As we can see, all users clicked at least once on a bookmark in this experiment, regardless of the policy they experienced. However, with the greedy policy, 4 users clicked on only one bookmark, whereas with the optimized policy, only one user clicked on fewer than three bookmarks.
Everything happens as if users were encouraged to click on bookmarks by the optimized policy, or at least as if some users were discouraged from clicking on bookmarks by the greedy policy.
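The text reports a p-value of 0.06 without naming the test used. As a sketch under that caveat, an exact permutation test on the per-user click counts from the table could be run as follows (it will not necessarily reproduce 0.06, which may come from a different test):

```python
import itertools
import statistics

# Per-user bookmark click counts, taken from the table above.
greedy = [4, 1, 1, 1, 3, 3, 1, 7, 6]
optimized = [3, 5, 2, 5, 10, 7, 6, 4, 6]

observed = statistics.mean(optimized) - statistics.mean(greedy)  # 5.33 - 3

# Exact one-sided permutation test: over all ways of relabeling the 18
# counts into two groups of 9, how often is the difference of means at
# least as large as the observed one?
pooled = greedy + optimized
n = len(greedy)
total = sum(pooled)
count = hits = 0
for idx in itertools.combinations(range(len(pooled)), n):
    g = sum(pooled[i] for i in idx)   # sum of the relabeled "greedy" group
    diff = (total - 2 * g) / n        # mean(other group) - mean(this group)
    count += 1
    if diff >= observed - 1e-12:
        hits += 1
p_value = hits / count
print(p_value)
```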
\begin{figure}[th]
\centering
\begin{tikzpicture}
]
\addplot table [y=y, x=x]{assets/system-bookmarks/final-results/second-experiment-after-clicks-0.dat};
\addlegendentry{Greedy policy}
\addplot table [y=y, x=x]{assets/system-bookmarks/final-results/second-experiment-after-clicks-1.dat};
\addlegendentry{Optimized policy}
\end{axis}
\end{tikzpicture}
cycle list name=mystyle,
legend pos=south east,
xmin=0,
xmax=100,
ymin=0,
name=first plot,
xmajorticks=false,
]
\addplot table [y=y, x=x]{assets/system-bookmarks/final-results/third-experiment-0.dat};
\addlegendentry{Greedy policy}
\addplot table [y=y, x=x]{assets/system-bookmarks/final-results/third-experiment-1.dat};
\addlegendentry{Optimized policy}
\end{axis}
cycle list name=mystyle,
legend pos=south east,
xmin=0,
xmax=100,
ymin=0,
ymax=1,
at=(first plot.south),
yshift=-0.5cm,
]
\addplot[smooth, color=blue] table [y=y, x=x]{assets/system-bookmarks/final-results/third-experiment-2.dat};
\addplot[smooth, dashed, color=DarkGreen] table [y=y, x=x]{assets/system-bookmarks/final-results/third-experiment-3.dat};
\end{axis}
]
\addplot table [y=y, x=x]{assets/system-bookmarks/final-results/third-experiment-after-clicks-0.dat};
\addlegendentry{Greedy policy}
\addplot table [y=y, x=x]{assets/system-bookmarks/final-results/third-experiment-after-clicks-1.dat};
\addlegendentry{Optimized policy}
\end{axis}
\end{tikzpicture}