I'm a grammar nazi 😢

Thomas Forgione 2019-10-19 16:51:12 +02:00
parent 692f5deed9
commit a18f259fa2
No known key found for this signature in database
GPG Key ID: BFD17A2D71B3B5E7
1 changed file with 8 additions and 9 deletions


@@ -4,7 +4,7 @@
\subsection{Preliminary user study}
Before conducting the user study on mobile devices, we designed a preliminary user study for desktop devices.
This experiment was conducted on twelve users, using the 3D model described in the previous chapter (i.e. the Marina Bay district in Singapore).
This experiment was conducted on twelve users, using the 3D model described in the previous chapter (i.e.\ the Marina Bay district in Singapore).
Bookmarks were sampled from the set of locations of user-uploaded panoramic pictures available on Google Maps, and the task consisted of matching real-world pictures to their virtual locations in the 3D model: users were presented with an image from Google Street View and asked to find the same location in the 3D model.
Due to the great difficulty of the task, as well as the users' relative familiarity with 3D navigation, user behaviour was biased towards navigating slowly in the scene. Users almost never clicked the bookmarks, much less than they did during the experiment we ran in Chapter~\ref{bi}.
@@ -130,8 +130,8 @@ We only proposed this user study to relatively young people to ensure they are u
yshift=-0.5cm,
]
\addplot[smooth, color=DarkGreen] table [y=y, x=x]{assets/system-bookmarks/final-results/second-experiment-2.dat};
\addplot[smooth, dashed, color=blue] table [y=y, x=x]{assets/system-bookmarks/final-results/second-experiment-3.dat};
\addplot[smooth, color=blue] table [y=y, x=x]{assets/system-bookmarks/final-results/second-experiment-2.dat};
\addplot[smooth, dashed, color=DarkGreen] table [y=y, x=x]{assets/system-bookmarks/final-results/second-experiment-3.dat};
\end{axis}
@@ -152,7 +152,7 @@ We also observe that the gyroscope-based interaction to rotate the camera tends t
\subsubsection{Quantitative results}
Among the 18 participants of this user study, the answers given by users at the end of the \textbf{streaming} part of the experiment were as follows: 10 indicated that they preferred the optimized policy, 4 preferred the greedy policy, and 4 did not perceive the difference.
One should note that the difference between the two policies can be described as follows: the greedy policy tends to favor the largest geometry segments, so the scene structure tends to appear slightly faster with this method. On the other hand, because it explicitly uses PSNR as an objective function, the optimized policy may download important textures (those that appear large on screen) before some mid-size geometry segments (typically segments that are far from the camera). Some of the users managed to describe these differences precisely.
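To make the contrast concrete, here is a minimal sketch of such a pair of chunk-selection rules (Python, with entirely hypothetical names and placeholder utilities; it is not the implementation described in the thesis), treating the behaviours described above as if they were the selection criteria:

from dataclasses import dataclass
from typing import List

@dataclass
class Chunk:
    size: int          # bytes to download (placeholder)
    psnr_gain: float   # precomputed PSNR improvement if downloaded (placeholder)

def greedy_next(candidates: List[Chunk]) -> Chunk:
    # Greedy behaviour as described: favor the largest geometry segments first.
    return max(candidates, key=lambda c: c.size)

def optimized_next(candidates: List[Chunk]) -> Chunk:
    # Optimized behaviour as described: favor the best PSNR improvement per byte.
    return max(candidates, key=lambda c: c.psnr_gain / c.size)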
@@ -160,7 +160,7 @@ One should note that the difference between the two policies can be described in
Figure~\ref{sb:psnr-second-experiment} shows the evolution of the PSNR over time during the second experiment (bookmark guided tour), averaged over all users.
Below the PSNR curve is a curve that shows how many users were moving to or staying at a bookmark position at each point in time.
As we can see, the two policies have a similar performance at the beginning when very few users have clicked bookmarks.
This changes after 10 seconds, when most users have started clicking on bookmarks. A performance gap grows and the optimized policy performs better than the greedy policy. It is natural to observe such performance, as it reflects the fact that the optimized policy makes better decisions in terms of PSNR (as previously shown in Figure~\ref{sb:precomputation}. This probably explains the previous result in which users tend to prefer the optimized policy.
This changes after 10 seconds, when most users have started clicking on bookmarks. A performance gap grows and the optimized policy performs better than the greedy policy. It is natural to observe such performance, as it reflects the fact that the optimized policy makes better decisions in terms of PSNR (as previously shown in Figure~\ref{sb:precomputation}). This probably explains the previous result in which users tend to prefer the optimized policy.
Figure~\ref{sb:psnr-second-experiment-after-click} shows the PSNR evolution after a click on a bookmark, averaged over all users and all clicks on bookmarks.
To compute these curves, we isolated the ten seconds following each click on a bookmark and averaged them all.
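A compact way to write this click-aligned averaging (our notation, not taken from the thesis): let $C$ be the set of all bookmark clicks across all users, $t_c$ the time of click $c$, and $\mathrm{PSNR}_{u(c)}$ the PSNR trace of the user who made that click; the plotted curve is then, for $0 \le \tau \le 10$~s,
\[
\overline{\mathrm{PSNR}}(\tau) = \frac{1}{|C|} \sum_{c \in C} \mathrm{PSNR}_{u(c)}(t_c + \tau).
\]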
@@ -168,8 +168,8 @@ These curves isolate the effect of our optimized policy, and show the differenc
Figures~\ref{sb:psnr-third-experiment} and~\ref{sb:psnr-third-experiment-after-click} show the same curves for the third experiment (free navigation).
On average, the difference in terms of PSNR is less obvious, and both strategies seem to perform the same way, at least in the first 50 seconds of the experiment. The optimized policy performs slightly better than the greedy policy in the end, which can be correlated with a peak in bookmark use occurring around the 50th second.
Figure~\ref{sb:psnr-third-experiment-after-click} also shows an interesting effect: the optimized policy still performs considerably better after a click on a bookmark, but the two curves converge to the same PSNR value after 9 seconds. This is largely task-dependent: users are encouraged to observe the scene in experiment 2, while they are encouraged to visit as much of the scene as possible in experiment 3. On average, users therefore spend less time at a bookmarked viewpoint in the third experiment than in the second.
The most interesting fact is that in the last part of the experiment (the free navigation), the average number of clicks on bookmarks is 3 for users with the greedy policy and 5.3 for users with the optimized policy.
@@ -182,8 +182,7 @@ The p-value for statistical significance of this observed difference is 0.06 whi
\midrule Greedy & 4 & 1 & 1 & 1 & 3 & 3 & 1 & 7 & 6 & \textbf{3}\\
Bookmark & 3 & 5 & 2 & 5 & 10 & 7 & 6 & 4 & 6 & \textbf{5.33}\\ \bottomrule
\end{tabular}
\caption{Number of clicks on bookmarks in the last experiment}
\label{sb:table-bookmark-clicks}
\caption{Number of clicks on bookmarks in the last experiment\label{sb:table-bookmark-clicks}}
\end{table}
Table~\ref{sb:table-bookmark-clicks} shows the number of bookmark clicks for each user (note that distinct groups of users did this experiment with the greedy and the optimized policy). As we can see, all users clicked at least once on a bookmark in this experiment, regardless of the policy they experienced. However, in the greedy policy setup, 4 users clicked only one bookmark, whereas in the optimized policy setup, only one user clicked fewer than three bookmarks.
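The per-user counts in Table~\ref{sb:table-bookmark-clicks} are enough to recompute the two averages and to run a two-sample test of one's own. The thesis reports a p-value of 0.06 without naming the test, so the sketch below (Python with SciPy) simply illustrates two common choices applied to the reported counts; which test was actually used remains an assumption.

from statistics import mean
from scipy import stats

greedy    = [4, 1, 1, 1, 3, 3, 1, 7, 6]   # clicks per user, greedy policy
optimized = [3, 5, 2, 5, 10, 7, 6, 4, 6]  # clicks per user, optimized policy

print(mean(greedy), mean(optimized))      # 3 and 5.33..., as in the table

# Two plausible tests; the thesis only reports p = 0.06 without naming one.
print(stats.ttest_ind(optimized, greedy, equal_var=False))             # Welch's t-test
print(stats.mannwhitneyu(optimized, greedy, alternative="two-sided"))  # Mann-Whitney U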