Thierry's fix

This commit is contained in:
2019-10-11 11:06:22 +02:00
parent beee676839
commit bd7bc84eee
19 changed files with 36 additions and 38 deletions


@@ -148,7 +148,7 @@ Sorting all the segments from the model would be an excessively time consuming c
To speed up this algorithm, we only sort the 200 best segments, and we choose these segments among a filtered set of candidates.
To find those candidates, we reuse the ideas developed in Chapter~\ref{bi}.
We render the ``pixel to geometry segment'' and ``pixel to texture'' maps, as shown in Figure~\ref{sb:bookmarks-utility}.
These renderings allow us to know which geometry segment and which texture correspond to each pixel, and filter out useless candidates.
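A minimal sketch of this filter-then-sort step (all names are hypothetical; it assumes the rendered "pixel to geometry segment" map is available as a flat array of per-pixel segment ids, and that a utility score per segment has already been computed):

```python
from collections import Counter

def top_candidate_segments(segment_id_map, utilities, k=200):
    """Filter candidate segments with a per-pixel segment-id map,
    then sort only the top k by utility.

    segment_id_map: per-pixel segment ids from the rendered map
    utilities: dict mapping segment id -> precomputed utility score
    """
    # Only segments visible in the rendered map are candidates.
    visible = Counter(segment_id_map)
    # Drop ids with no utility score (e.g. background pixels).
    candidates = [s for s in visible if s in utilities]
    # Sorting this filtered set is far cheaper than sorting
    # every segment in the model.
    return sorted(candidates, key=lambda s: utilities[s], reverse=True)[:k]
```

Sorting only the filtered, visible candidates is what keeps the cost far below a full sort of every segment in the model.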
\begin{figure}[th]
\centering


@@ -4,8 +4,8 @@
In Chapter~\ref{bi}, we described how it is possible to modify a user interface to ease user navigation in a 3D scene, and how the system can benefit from it.
In Chapter~\ref{d3}, we presented a streaming system that takes neither the interface nor the user interaction into account.
Hence, it is natural to study how the user interaction can impact the performance of DASH-3D.
In order to do so, we followed these two steps:
\begin{itemize}
\item we designed an interface allowing users to navigate in a 3D scene on both desktop and mobile devices;


@@ -7,7 +7,7 @@ Nowadays, smartphones are more and more powerful, and people slowly move their a
This is why we decided to port our interface to mobile devices.
Desktop devices and mobile devices are very different.
There are many differences in terms of performance: desktop devices tend to be much more powerful and have more memory and a much better network connection than mobile devices.
Also, the interaction is not comparable in any way: the desktop mostly uses a keyboard and mouse, whereas most mobile devices have only a touchscreen, along with many sensors (accelerometer, gyroscope, GPS, etc.).
This is why porting our DASH-3D client to mobile is not an easy task.
To do so, we add some widgets on the screen to support touch interaction: a virtual joystick is displayed on the screen and the user can touch it to translate the camera, instead of using the W, A, S and D keys on a computer.
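The joystick-to-translation mapping can be sketched as follows (hypothetical names; it assumes the joystick reports a displacement in $[-1, 1]$ on each axis, standing in for the W, A, S and D keys):

```python
import math

def joystick_to_translation(dx, dy, speed=1.0, dead_zone=0.1):
    """Map a virtual-joystick displacement (dx, dy), each in [-1, 1],
    to a camera translation.

    A small dead zone avoids camera drift when the finger rests near
    the joystick center; outside it, the translation scales linearly.
    """
    magnitude = math.hypot(dx, dy)
    if magnitude < dead_zone:
        return (0.0, 0.0)  # ignore tiny touches near the center
    # Strafing follows dx (like A/D); forward/backward follows -dy
    # (like W/S, since screen y grows downward).
    return (dx * speed, -dy * speed)
```

The dead zone is a common touch-interface convention rather than something specified in the text; without it, a resting finger would make the camera creep.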


@@ -5,7 +5,7 @@
Before conducting the user study on mobile devices, we designed a user study for desktop devices.
This experiment was conducted with a little more than a dozen people, with the model described in the previous chapter.
Bookmarks were positioned from user-generated panoramic pictures available on Google Maps, and the task consisted in retrieving spots on the 3D model from a picture: users were presented with an image coming from Google Street View and they had to find the corresponding spot in the 3D model.
Because the task was hard and our users were familiar with 3D navigation, they preferred navigating slowly in the scene, and did not use bookmarks as much as they did during the experiment we ran in Chapter~\ref{bi}.
@@ -49,7 +49,7 @@ The order of those two sessions is randomized to avoid biases.
% Since we know that the difference between our streaming policies is subtle, we designed a task a little more complex in order to highlight the differences so that the user can see it.
Since the behaviours of our streaming policies only differ when the user clicks a bookmark, we designed a task where the users have to perform a guided tour of the scene, where each bookmark is a step of the tour.
The user starts in the scene, and one of the bookmarks is blinking.
The user has to touch the bookmark, and wait a little when they arrive at the destination.
Once some data has been downloaded and the user is satisfied with it, they can look for the next blinking bookmark.
This setup is repeated for each streaming policy, and after the two sessions, the users have to answer a questionnaire asking the question \emph{In what session did you find the streaming the smoothest?}
The questionnaire also has a text field for users to explain their answer if they wish.
@@ -91,7 +91,7 @@ We could argue that they do not like the bookmarks because they make the task to
\subsubsection{Qualitative results --- Streaming}
Among the 18 participants of this user study, 10 confirmed that they preferred the optimized policy, 4 preferred the greedy policy, and 4 did not perceive the difference.
Another interesting fact is that on the last part of the experiment (the free navigation), the average number of clicks on bookmarks is 3 for users having the greedy policy, and 5.3 for users having the optimized policy.
Even though statistical significance is not reached, this result seems to indicate that a policy optimized for bookmarks could lead users to click more on bookmarks.
\subsubsection{Quantitative results}
@@ -100,7 +100,7 @@ By collecting all the traces during the experiments, we are able to replay the r
Figure~\ref{sb:psnr-second-experiment} shows the average PSNR that users got while navigating during the second experiment (bookmark path).
Below the PSNR curve is a curve that shows how many users were moving to or staying at a bookmark position.
As we can see, the two policies perform in the same way in the beginning, when few users are moving to a bookmark.
However, when they start clicking on bookmarks, the gap grows and our optimized policy performs better.
Figure~\ref{sb:psnr-second-experiment-after-click} shows the PSNR after a click on a bookmark.
To compute these curves, we isolated the ten seconds following each click on a bookmark and averaged them all.
These curves isolate the effect of our optimized policy, and show the difference a user can feel when clicking on a bookmark.
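The averaging step can be sketched as follows (hypothetical names; it assumes one PSNR sample per second per session, and the recorded click times as indices into that trace):

```python
def average_psnr_after_clicks(psnr_trace, click_times, window=10):
    """Average the PSNR over the `window` seconds following each click.

    psnr_trace: per-second PSNR samples for one replayed session
    click_times: indices (in seconds) at which a bookmark was clicked
    Returns one curve of length `window`, aligned at click time.
    """
    segments = []
    for t in click_times:
        seg = psnr_trace[t:t + window]
        if len(seg) == window:  # keep only complete post-click windows
            segments.append(seg)
    if not segments:
        return []
    # Point-wise mean across all isolated post-click windows.
    return [sum(vals) / len(segments) for vals in zip(*segments)]
```

Aligning every window at its click time is what lets the curve expose the post-click behaviour of each policy, independently of when the clicks happened in the session.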