\section{Evaluation}\label{sec:eval}

We now describe our experimental setup and the data we use. We present an evaluation of our system and compare the impact of the design choices introduced in the previous sections.

\subsection{Experimental Setup}

\subsubsection{Model}

We use a city model of the Marina Bay area in Singapore in our experiments. The model came in 3DS Max format and was converted into Wavefront OBJ format before the processing described in Section~\ref{sec:dash3d}. The converted model has 387,551 vertices and 552,118 faces. Table~\ref{table:size} gives general information about the model. We partition the geometry into a $k$-d tree until the leaves have fewer than 10,000 faces, which gives us 64 adaptation sets, plus one containing the large faces.

\begin{table}[th]
\centering
\begin{tabular}{ll}
\toprule
\textbf{Files} & \textbf{Size} \\
\midrule
3DS Max & 55 MB \\
OBJ file & 62 MB \\
MTL file & 0.27 MB \\
Textures (high res) & 167 MB \\
Textures (low res) & 11 MB \\
\bottomrule
\end{tabular}
\caption{Sizes of the different files of the model\label{table:size}}
\end{table}

\subsubsection{User Navigations}

To evaluate our system, we collected realistic user navigation traces that we can replay in our experiments. We presented six users with a web interface on which the model was loaded progressively while the user interacted with it. The available interactions were inspired by traditional first-person controls in video games, i.e., the W, A, S, and D keys to translate the camera, and the mouse to rotate it. We asked users to browse and explore the scene until they felt they had visited all important regions. We then asked them to produce camera navigation paths that would best present the 3D scene to a user discovering it. To record a path, the users first place their camera at their preferred starting point, then click on a button to start recording.
Every 100 ms, the position, viewing angle, and look-at point of the camera are saved into an array that is then exported in JSON format. The recorded camera traces allow us to replay each camera path to perform our simulations and evaluate our system. We collected 13 camera paths this way.

\subsubsection{Network Setup}

We tested our implementation under three network bandwidths, 2.5 Mbps, 5 Mbps, and 10 Mbps, with an RTT of 38 ms, following the settings from DASH-IF~\cite{DASH_NETWORK_PROFILE}. These values are kept constant during the entire client session so that we can analyze how the performance changes as the bandwidth increases. In our experiments, we set up a virtual camera that moves along a navigation path, and our access engine downloads segments in real time according to Algorithm~\ref{algorithm:nextsegment}. We log in a JSON file the time when each segment is requested and when it is received. This way, we avoid spending time and resources re-downloading segments every time we evaluate our system, while storing all the information necessary to plot the figures introduced in the subsequent sections.

\subsubsection{Hardware and Software}

The experiments were run on an Acer Aspire V3 with an Intel Core i7 3632QM processor and an NVIDIA GeForce GT 740M graphics card. The DASH client is written in Rust\footnote{\url{https://www.rust-lang.org/}}, using Glium\footnote{\url{https://github.com/glium/glium}} for rendering and reqwest\footnote{\url{https://github.com/seanmonstar/reqwest/}} to load the segments.

\subsubsection{Metrics}

To objectively evaluate the quality of the resulting rendering, we use PSNR\@. The ground truth is the scene rendered offline along the same camera path with all geometry and texture data available. Note that, in our case, a pixel error can only occur in two situations: (i) when a face is missing, in which case the color of the background object is shown, and (ii) when a texture is either missing or downsampled.
There is no pixel error due to compression.

\subsubsection{Experiments}

We present experiments that validate our implementation choices at every step of our system. We replay the user-generated camera paths under various bandwidth conditions while varying key components of our system. Table~\ref{table:experiments} sums up all the components we vary in our experiments. We compare the impact of two space-partitioning trees, a $k$-d tree and an octree, on content preparation. We also try several utility metrics for geometry segments: an offline one, which assigns to each geometry segment $s^G$ the cumulative 3D area of its faces $\mathcal{A}_{3D}(s^G)$; an online one, which assigns to each geometry segment the inverse of its distance to the camera position; and finally our proposed metric, described in Section~\ref{subsec:utility} ($\mathcal{A}_{3D}(s^G)/\mathcal{D}(v(t_i),AS^G)^2$). We consider two streaming policies to be applied by the client, proposed in Section~\ref{sec:dashclientspec}. The greedy strategy determines, at each decision time, the segment that maximizes its predicted utility at arrival divided by its predicted delivery delay, which corresponds to equation (\ref{eq:greedy}). The second streaming policy is the one we proposed in equation (\ref{eq:smart}). We also analyze the effect of grouping the faces of an adaptation set into geometry segments based on their 3D area. Finally, we try several bandwidth settings to study how our system adapts to varying network conditions.
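The three utility metrics can be contrasted in a few lines of Rust. The sketch below is purely illustrative: the struct and function names are placeholders, not taken from our client, and a segment is summarized only by its cumulative 3D face area and the distance from the camera to its adaptation set.

```rust
/// Hypothetical summary of a geometry segment s^G.
struct GeometrySegment {
    area_3d: f64,  // A_3D(s^G): cumulative 3D area of the segment's faces
    distance: f64, // D(v(t_i), AS^G): camera-to-adaptation-set distance
}

/// Offline utility: cumulative 3D area only (viewpoint-independent).
fn utility_offline(s: &GeometrySegment) -> f64 {
    s.area_3d
}

/// Online utility: inverse distance to the camera (content-independent).
fn utility_online(s: &GeometrySegment) -> f64 {
    1.0 / s.distance
}

/// Proposed utility: A_3D(s^G) / D(v(t_i), AS^G)^2.
fn utility_proposed(s: &GeometrySegment) -> f64 {
    s.area_3d / (s.distance * s.distance)
}

fn main() {
    let near_small = GeometrySegment { area_3d: 1.0, distance: 2.0 };
    let far_large = GeometrySegment { area_3d: 100.0, distance: 50.0 };
    // The offline metric always prefers the large segment and the online
    // one always prefers the near segment; the proposed metric weighs
    // both: 1/4 = 0.25 for the near one, 100/2500 = 0.04 for the far one.
    println!("{:.3}", utility_proposed(&near_small));
    println!("{:.3}", utility_proposed(&far_large));
}
```

The offline metric alone over-values distant landmarks, while the online metric alone over-values tiny nearby faces; dividing area by squared distance balances the two.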
\begin{table}[th]
\centering
\begin{tabular}{@{}ll@{}}
\toprule
\textbf{Parameters} & \textbf{Values} \\
\midrule
Content preparation & octree, $k$-d tree \\
Utility & Offline, Online, Proposed \\
Streaming policy & Greedy, Proposed \\
Grouping of segments & Sorted by area, Unsorted \\
Bandwidth & 2.5 Mbps, 5 Mbps, 10 Mbps \\
\bottomrule
\end{tabular}
\caption{Different parameters in our experiments\label{table:experiments}}
\end{table}

\subsection{Experimental Results}

\begin{figure}[th]
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel=Time (in s), ylabel=PSNR,
no markers, cycle list name=mystyle,
width=\tikzwidth, height=\tikzheight,
legend pos=south east, xmin=0, xmax=90,
x label style={at={(axis description cs:0.5,0.05)},anchor=north},
y label style={at={(axis description cs:0.125,.5)},anchor=south},
]
\addplot table [y=psnr, x=time]{assets/dash-3d/gnuplot/1/curve.dat};
\addlegendentry{\scriptsize $k$-d tree}
\addplot table [y=psnr, x=time]{assets/dash-3d/gnuplot/2/curve.dat};
\addlegendentry{\scriptsize octree}
\end{axis}
\end{tikzpicture}
\caption{Impact of the space-partitioning tree on the rendering quality with a 5 Mbps bandwidth.\label{fig:preparation}}
\end{figure}

Figure~\ref{fig:preparation} shows how the space partition affects the rendering quality. We use our proposed utility metric (see Section~\ref{subsec:utility}) and the streaming policy from equation (\ref{eq:smart}), on content divided into adaptation sets obtained with either a $k$-d tree or an octree, and run experiments on all camera paths at 5 Mbps. The octree partitions content into non-homogeneous adaptation sets; as a result, some adaptation sets may contain smaller segments that mix important (large) and unimportant polygons. For the $k$-d tree, we create cells containing the same number of faces $N_a$ (here, we take $N_a = 10{,}000$).
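The $k$-d tree construction behind this partition can be sketched as a recursive median split on face centroids along alternating axes, stopping when a cell holds at most $N_a$ faces. The code below is a simplified illustration under that assumption; the names and the stopping threshold are placeholders, not our actual preprocessing code.

```rust
/// Recursively split faces (summarized here by their centroids) at the
/// median along alternating axes until each cell holds at most
/// `max_faces` faces; the resulting cells become adaptation sets.
fn kd_partition(
    mut centroids: Vec<[f64; 3]>,
    axis: usize,
    max_faces: usize,
    cells: &mut Vec<Vec<[f64; 3]>>,
) {
    if centroids.len() <= max_faces {
        cells.push(centroids);
        return;
    }
    // Sort along the current axis and split at the median, so both
    // children receive the same number of faces (homogeneous cells).
    centroids.sort_by(|a, b| a[axis].partial_cmp(&b[axis]).unwrap());
    let right = centroids.split_off(centroids.len() / 2);
    let next_axis = (axis + 1) % 3;
    kd_partition(centroids, next_axis, max_faces, cells);
    kd_partition(right, next_axis, max_faces, cells);
}

fn main() {
    // 8 faces along the x axis, at most 2 faces per cell -> 4 equal cells.
    let centroids: Vec<[f64; 3]> = (0..8).map(|i| [i as f64, 0.0, 0.0]).collect();
    let mut cells = Vec::new();
    kd_partition(centroids, 0, 2, &mut cells);
    println!("{} cells", cells.len());
}
```

Because every split is at the median, all leaves end up with nearly the same face count, unlike the octree, whose spatial splits can leave cells with very different numbers of faces.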
Figure~\ref{fig:preparation} shows that the system seems slightly less efficient with an octree-based partition than with a $k$-d tree-based one, but the difference is not significant. For the remaining experiments, partitioning is based on a $k$-d tree.

\begin{figure}[th]
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel=Time (in s), ylabel=PSNR,
no markers, cycle list name=mystyle,
width=\tikzwidth, height=\tikzheight,
legend pos=south east, xmin=0, xmax=90,
x label style={at={(axis description cs:0.5,0.05)},anchor=north},
y label style={at={(axis description cs:0.125,.5)},anchor=south},
]
\addplot table [y=psnr, x=time]{assets/dash-3d/gnuplot/1/curve.dat};
\addlegendentry{\scriptsize Proposed}
\addplot table [y=psnr, x=time]{assets/dash-3d/gnuplot/3/curve.dat};
\addlegendentry{\scriptsize Online only}
\addplot table [y=psnr, x=time]{assets/dash-3d/gnuplot/4/curve.dat};
\addlegendentry{\scriptsize Offline only}
\end{axis}
\end{tikzpicture}
\caption{Impact of the segment utility metric on the rendering quality with a 5 Mbps bandwidth.\label{fig:utility}}
\end{figure}

Figure~\ref{fig:utility} shows how a utility metric benefits from combining both offline and online features. The experiments use $k$-d tree cells as adaptation sets and the proposed streaming policy, on all camera paths. We observe that a purely offline utility metric leads to poor PSNR results. An online-only utility improves the results, as it takes the user's viewing frustum into consideration, but the proposed utility (Section~\ref{subsec:utility}) still performs better.
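For reference, the PSNR plotted in these comparisons follows the standard definition against the ground-truth rendering. A minimal sketch, assuming frames are same-sized buffers of 8-bit channel values (the function name is illustrative), is:

```rust
/// PSNR between a rendered frame and the ground-truth frame (rendered
/// offline with all geometry and texture data available). Both frames
/// are flat buffers of 8-bit channel values of equal length.
fn psnr(rendered: &[u8], ground_truth: &[u8]) -> f64 {
    assert_eq!(rendered.len(), ground_truth.len());
    // Mean squared error over all channel values.
    let mse: f64 = rendered
        .iter()
        .zip(ground_truth)
        .map(|(&a, &b)| {
            let d = a as f64 - b as f64;
            d * d
        })
        .sum::<f64>()
        / rendered.len() as f64;
    // PSNR = 20 log10(MAX) - 10 log10(MSE), with MAX = 255 for 8 bits.
    20.0 * 255.0_f64.log10() - 10.0 * mse.log10()
}

fn main() {
    // Frames differing by one level in one channel out of three.
    let quality = psnr(&[255, 254, 128], &[255, 255, 128]);
    println!("{:.2} dB", quality);
}
```

Identical frames give an infinite PSNR; a missing face or a downsampled texture lowers it through the MSE term.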
\begin{figure}[th]
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel=Time (in s), ylabel=PSNR,
no markers, cycle list name=mystyle,
width=\tikzwidth, height=\tikzheight,
legend pos=south east, xmin=0, xmax=90,
x label style={at={(axis description cs:0.5,0.05)},anchor=north},
y label style={at={(axis description cs:0.125,.5)},anchor=south},
]
\addplot table [y=psnr, x=time]{assets/dash-3d/gnuplot/1/curve.dat};
\addlegendentry{\scriptsize Sorting the faces by area}
\addplot table [y=psnr, x=time]{assets/dash-3d/gnuplot/5/curve.dat};
\addlegendentry{\scriptsize Without sorting the faces}
\end{axis}
\end{tikzpicture}
\caption{Impact of creating the segments of an adaptation set based on decreasing 3D area of faces with a 5 Mbps bandwidth.}\label{fig:sorting}
\end{figure}

Figure~\ref{fig:sorting} shows the effect of grouping the faces of an adaptation set into segments based on their 3D area. Clearly, the PSNR significantly improves when the 3D area of faces is considered when creating the segments. Since all segments have the same size, sorting the faces by area before grouping them into segments leads to a skewed distribution of segment utility. This skewness means that the client's decision to download the segments with the largest utility first makes a bigger difference in quality. We also compare the greedy and proposed streaming policies (as shown in Figure~\ref{fig:greedyweakness}) for limited bandwidth (5 Mbps). The proposed scheme outperforms the greedy one during the first 30 s and does a better job overall. Table~\ref{table:greedyVsproposed} shows the average PSNR of the proposed and greedy methods for different downloading bandwidths. In the first 30 s, since relatively little 3D content has been downloaded yet, making a better decision on what to download matters more: during that time, the proposed method yields a PSNR 1 to 1.9 dB higher than the greedy one.
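The skew argument above can be illustrated with a small sketch: packing face areas into fixed-size segments, with and without sorting by decreasing area first. The function name, segment size, and area values are illustrative.

```rust
/// Pack face areas into segments of `faces_per_segment` faces each,
/// optionally sorting by decreasing area first, and return the
/// cumulative 3D area (a proxy for utility) of each segment.
fn segment_areas(mut face_areas: Vec<f64>, faces_per_segment: usize, sort: bool) -> Vec<f64> {
    if sort {
        // Largest faces first, so they concentrate in early segments.
        face_areas.sort_by(|a, b| b.partial_cmp(a).unwrap());
    }
    face_areas
        .chunks(faces_per_segment)
        .map(|chunk| chunk.iter().sum())
        .collect()
}

fn main() {
    let areas = vec![1.0, 9.0, 2.0, 8.0, 3.0, 7.0];
    // Without sorting, utility is spread evenly: [10.0, 10.0, 10.0].
    println!("{:?}", segment_areas(areas.clone(), 2, false));
    // With sorting, it is skewed: [17.0, 10.0, 3.0], so downloading the
    // first segment first yields a larger quality gain.
    println!("{:?}", segment_areas(areas, 2, true));
}
```

When every segment carries the same utility, the download order barely matters; the sorted, skewed distribution is what lets the client's utility-driven ordering pay off.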
Table~\ref{table:perc} shows the distribution of texture resolutions downloaded by the greedy and proposed schemes at different bandwidths. Resolution 5 is the highest and resolution 1 the lowest. The table clearly shows a weakness of the greedy policy: as the bandwidth increases, the distribution of downloaded texture resolutions stays more or less the same. In contrast, our proposed streaming policy adapts to an increasing bandwidth by downloading higher-resolution textures (13.9\% at 10 Mbps vs.\ 0.3\% at 2.5 Mbps). In fact, an interesting feature of our proposed streaming policy is that it adapts the geometry--texture compromise to the bandwidth. Textures represent 57.3\% of the total downloaded bytes at 2.5 Mbps, and 70.2\% at 10 Mbps. In other words, our system tends to favor geometry segments when the bandwidth is low, and texture segments when the bandwidth increases.

\begin{figure}[th]
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel=Time (in s), ylabel=PSNR,
no markers, cycle list name=mystyle,
width=\tikzwidth, height=\tikzheight,
legend pos=south east, xmin=0, xmax=90,
x label style={at={(axis description cs:0.5,0.05)},anchor=north},
y label style={at={(axis description cs:0.125,.5)},anchor=south},
]
\addplot table [y=psnr, x=time]{assets/dash-3d/gnuplot/1/curve.dat};
\addlegendentry{\scriptsize Proposed}
\addplot table [y=psnr, x=time]{assets/dash-3d/gnuplot/6/curve.dat};
\addlegendentry{\scriptsize Greedy}
\end{axis}
\end{tikzpicture}
\caption{Impact of the streaming policy (greedy vs.\ proposed) with a 5 Mbps bandwidth.}\label{fig:greedyweakness}
\end{figure}

\begin{table}[th]
\centering
\begin{tabular}{@{}p{2.5cm}p{0.7cm}p{0.7cm}p{0.7cm}p{0.3cm}p{0.7cm}p{0.7cm}p{0.7cm}@{}}
\toprule
& \multicolumn{3}{c}{\textbf{First 30 Sec}} & & \multicolumn{3}{c}{\textbf{Overall}} \\
\cline{2-8}
BW (Mbps) & 2.5 & 5 & 10 & & 2.5 & 5 & 10 \\
\midrule
Greedy & 14.4 & 19.4 & 22.1 & & 19.8 & 26.9 & 29.7 \\
Proposed & 16.3 & 20.4 & 23.2 & & 23.8 & 28.2 & 31.1 \\
\bottomrule
\end{tabular}
\caption{Average PSNR, greedy vs.\ proposed\label{table:greedyVsproposed}}
\end{table}

\begin{table}[th]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{@{}cccc@{}}
\toprule
\textbf{Resolutions} & \textbf{2.5 Mbps} & \textbf{5 Mbps} & \textbf{10 Mbps} \\
\midrule
1 & 5.7\% vs.\ 1.4\% & 6.3\% vs.\ 1.4\% & 6.17\% vs.\ 1.4\% \\
2 & 10.9\% vs.\ 8.6\% & 13.3\% vs.\ 7.8\% & 14.0\% vs.\ 8.3\% \\
3 & 15.3\% vs.\ 28.6\% & 20.1\% vs.\ 24.3\% & 20.9\% vs.\ 22.5\% \\
4 & 14.6\% vs.\ 18.4\% & 14.4\% vs.\ 25.2\% & 14.2\% vs.\ 24.1\% \\
5 & 11.4\% vs.\ 0.3\% & 11.1\% vs.\ 5.9\% & 11.5\% vs.\ 13.9\% \\
\bottomrule
\end{tabular}
\caption{Percentages of downloaded bytes for textures of each resolution, for the greedy streaming policy (left) and for our proposed scheme (right)\label{table:perc}}
\end{table}