Some work
This commit is contained in:
parent
53b7f43f10
commit
abd5cb261c
@@ -1,51 +1,29 @@
let camera, scene, renderer;
let geometry, material, mesh;

init();
animate();

function init() {

    // Computes the aspect ratio of the window.
    let aspectRatio = window.innerWidth / window.innerHeight;

    // Creates a camera and sets its parameters and position.
    camera = new THREE.PerspectiveCamera(70, aspectRatio, 0.01, 10);
    camera.position.z = 1;

    // Creates the scene that contains our objects.
    scene = new THREE.Scene();

    // Creates a geometry (vertices and faces) corresponding to a cube.
    geometry = new THREE.BoxGeometry(0.2, 0.2, 0.2);

    // Creates a material that paints the faces depending on their normal.
    material = new THREE.MeshNormalMaterial();

    // Creates a mesh that associates the geometry with the material.
    mesh = new THREE.Mesh(geometry, material);

    // Adds the mesh to the scene.
    scene.add(mesh);

    // Creates the renderer and appends its canvas to the DOM.
    renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

}

// This function will be called at each frame.
function animate() {

    // Makes the mesh rotate to have a nice animation.
    mesh.rotation.x += 0.01;
    mesh.rotation.y += 0.02;

    // Renders the scene with the camera.
    renderer.render(scene, camera);

    // Re-triggers animate() when the moment has come.
    requestAnimationFrame(animate);

}
@@ -13,12 +13,6 @@ All DASH clients are built from the same basic bricks, as shown in Figure~\ref{d
\item the \emph{media engine}, which renders the multimedia content to the screen and the user interface.
\end{itemize}

\tikzset{
    double arrow/.style args={#1 colored by #2 and #3}{
        -stealth,line width=#1,#2, % first arrow
@@ -96,3 +90,9 @@ We want to have two implementations of such a client:
\end{tikzpicture}
\caption{Scheme of a server and a DASH client\label{d3i:dash-scheme}}
\end{figure}

We want to have two implementations of such a client:
\begin{itemize}
    \item \textbf{one in JavaScript}, so we can easily have demos and conduct user studies with real users trying the real interface on different devices (desktop or mobile);
    \item \textbf{one in Rust}, so we can easily run simulations with maximum performance, to be able to compare different setups or parameters with more precision.
\end{itemize}
@@ -50,50 +50,117 @@ Therefore, we made a class that derives BufferGeometry, and that makes it more c
\item It also keeps track of what part of the buffers has been transmitted to the GPU\@: THREE.js allows us to set the range of the buffer that we want to update, and we are able to update only what is necessary.
\end{itemize}

\subsubsection{Our 3D model class\label{d3i:model-class}}

As said in the previous sections, a geometry and a material are bound together in a mesh.
This means that we are forced to have as many meshes as there are materials in our model.
To make this easy to manage, we made a \textbf{Model} class that holds everything we need.
We can add vertices, faces, and materials to this model, and it will internally deal with the right geometries, materials, and meshes.
In order to avoid having many meshes that share the same material (which would harm performance, since it would increase the number of \texttt{glDrawArray} calls), it automatically merges faces that share the same material into the same buffer geometry, as shown in Figure~\ref{d3i:render-structure}.
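The merging behaviour of the model can be sketched as follows. This is a minimal, self-contained illustration of the idea (class and method names other than \texttt{Model} are hypothetical, and the real class also handles texture coordinates, normals, and the underlying THREE.js objects):

```javascript
// Sketch of a Model class that groups faces by material.
// Faces added with the same material share one buffer, so they
// can be drawn with a single glDrawArray call.
class Model {
  constructor() {
    // material name -> { material, positions } (one entry per future mesh)
    this.parts = new Map();
  }

  // addFace is a hypothetical name for illustration purposes.
  addFace(materialName, material, vertices) {
    let part = this.parts.get(materialName);
    if (part === undefined) {
      // First face using this material: allocate a new part.
      part = { material, positions: [] };
      this.parts.set(materialName, part);
    }
    // Append the face's vertices to the shared buffer.
    for (const v of vertices) {
      part.positions.push(v.x, v.y, v.z);
    }
  }

  meshCount() {
    // One mesh per material, regardless of how many faces were added.
    return this.parts.size;
  }
}
```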
\begin{figure}[ht]
    \centering

    \begin{tikzpicture}

        \node[align=center] at (1.5, -1) {DASH-3D\\Structure};

        \node at (-1.5, -3) {\texttt{seg0.obj}};
        \draw[fill=Pink] (0, -2) rectangle (3, -3);
        \node at (1.5, -2.5) {Material 1};
        \draw[fill=PaleGreen] (0, -3) rectangle (3, -4);
        \node at (1.5, -3.5) {Material 2};

        \node at (-1.5, -6) {\texttt{seg1.obj}};
        \draw[fill=Pink] (0, -5) rectangle (3, -7);
        \node at (1.5, -6) {Material 1};

        \node at (-1.5, -9) {\texttt{seg2.obj}};
        \draw[fill=PaleGreen] (0, -8) rectangle (3, -9);
        \node at (1.5, -8.5) {Material 2};
        \draw[fill=LightBlue] (0, -9) rectangle (3, -10);
        \node at (1.5, -9.5) {Material 3};

        \node[align=center] at (7.5, -1) {Renderer\\Structure};

        \node at (10.5, -3.5) {Object 1};
        \draw[fill=Pink] (6, -2) rectangle (9, -5);
        \node at (7.5, -3.5) {Material 1};

        \node at (10.5, -7) {Object 2};
        \draw[fill=PaleGreen] (6, -6) rectangle (9, -8);
        \node at (7.5, -7) {Material 2};

        \node at (10.5, -9.5) {Object 3};
        \draw[fill=LightBlue] (6, -9) rectangle (9, -10);
        \node at (7.5, -9.5) {Material 3};

        \node[minimum width=0.5cm,minimum height=2cm] (O1) at (6.25, -3.5) {};
        \draw[-{Latex[length=3mm]}, color=FireBrick] (3, -2.5) -- (O1);
        \draw[-{Latex[length=3mm]}, color=FireBrick] (3, -6) -- (O1);

        \node[minimum width=0.5cm,minimum height=2cm] (O2) at (6.25, -7) {};
        \draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -3.5) -- (O2);
        \draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -8.5) -- (O2);

        \node[minimum width=3cm,minimum height=2cm] (O3) at (7.5, -9.5) {};
        \draw[-{Latex[length=3mm]}, color=RoyalBlue] (3, -9.5) -- (O3);

        \node at (1.5, -10.75) {$\vdots$};
        \node at (7.5, -10.75) {$\vdots$};
    \end{tikzpicture}

    \caption{Restructuration of the content on the renderer\label{d3i:render-structure}}
\end{figure}
\subsection{Access client}

In order to be able to implement our DASH-3D client, we need to implement the access client, which is responsible for deciding what to download and for downloading it.
To do so, we use the strategy pattern, as shown in Figure~\ref{d3i:dash-loader}.
We have a base class named \texttt{LoadingPolicy} that contains some attributes and functions to keep track of what has been downloaded, which a derived class can use to make smart decisions, and that exposes a function named \texttt{nextSegment} taking two arguments:
\begin{itemize}
    \item the MPD, so that a policy can know all the metadata of the segments before making its decision;
    \item the camera, because the best segment depends on the position of the camera.
\end{itemize}

The greedy, greedy predictive, and proposed policies from the previous chapter are all classes that derive from \texttt{LoadingPolicy}.
Then, the main class responsible for the loading of segments is the \texttt{DashLoader} class.
It uses \texttt{XMLHttpRequest}s, which are the usual way of making HTTP requests in JavaScript, and it calls the corresponding parser on the results of those requests.
The \texttt{DashLoader} class accepts as a parameter a function that will be called each time some data has been downloaded and parsed: this data can contain vertices, texture coordinates, normals, materials, or textures, and they can all be added to the \texttt{Model} class that we described in Section~\ref{d3i:model-class}.
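The strategy pattern described above can be sketched as follows. The class names \texttt{LoadingPolicy}, \texttt{Greedy}, and \texttt{DashLoader} come from the text; the method bodies and data shapes are illustrative placeholders, not the actual DASH-3D code:

```javascript
// Base class: shared bookkeeping plus the nextSegment interface.
class LoadingPolicy {
  constructor() {
    this.downloaded = []; // history a derived class can use for decisions
  }
  nextSegment(mpd, camera) {
    throw new Error('nextSegment must be implemented by a derived class');
  }
}

// One concrete policy (placeholder logic: first segment not yet fetched).
class Greedy extends LoadingPolicy {
  nextSegment(mpd, camera) {
    return mpd.segments.find((s) => !this.downloaded.includes(s));
  }
}

// DashLoader delegates the decision to whatever policy it was given.
class DashLoader {
  constructor(policy, onData) {
    this.policy = policy; // any LoadingPolicy subclass
    this.onData = onData; // called with each parsed chunk of data
  }
  loadNextSegment(mpd, camera) {
    const segment = this.policy.nextSegment(mpd, camera);
    this.policy.downloaded.push(segment);
    // In the real client, an XMLHttpRequest would fetch the segment here
    // and a parser would run on the response before calling onData.
    this.onData(segment);
    return segment;
  }
}
```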
\begin{figure}[ht]
    \centering
    \begin{tikzpicture}[scale=0.65]
        \draw (0, 0) rectangle (5, -4);
        \draw (0, -1) -- (5, -1);
        \node at (2.5, -0.5) {DashClient};
        \node[right] at (0, -1.5) {\scriptsize loadNextSegment()};

        \draw (5, -2) -- (8, -2);

        \draw (8, 0) rectangle (14, -4);
        \draw (8, -1) -- (14, -1);
        \node at (11, -0.5) {LoadingPolicy};
        \node[right] at (8, -1.5) {\scriptsize nextSegment(mpd, camera)};

        \draw (1, -6) rectangle (7, -10);
        \draw (1, -7) -- (7, -7);
        \node at (4, -6.5) {Greedy};
        \node[right] at (1, -7.5) {\scriptsize nextSegment(mpd, camera)};

        \draw (8, -6) rectangle (14, -10);
        \draw (8, -7) -- (14, -7);
        \node at (11, -6.5) {GreedyPredictive};
        \node[right] at (8, -7.5) {\scriptsize nextSegment(mpd, camera)};

        \draw (15, -6) rectangle (21, -10);
        \draw (15, -7) -- (21, -7);
        \node at (18, -6.5) {Proposed};
        \node[right] at (15, -7.5) {\scriptsize nextSegment(mpd, camera)};

        \draw[-{Triangle[open, length=3mm, width=3mm]}] (4, -6) -- (4, -5) -- (11, -5) -- (11, -4);
        \draw (11, -6) -- (11, -5) -- (8, -5);
        \draw (18, -6) -- (18, -5) -- (8, -5);
    \end{tikzpicture}
    \caption{Class diagram of our DASH client\label{d3i:dash-loader}}
\end{figure}
@@ -6,9 +6,6 @@
\input{dash-3d-implementation/introduction}
\resetstyle{}

\input{dash-3d-implementation/js-implementation}
\resetstyle{}
@@ -1,68 +0,0 @@
\fresh{}
\section{Media engine}

In order to conduct user studies of 3D streaming systems on mobile devices, we must ensure that our system runs at a reasonable framerate.

\subsection{Restructuration of the 3D content on the client}

To sum up the performance bottlenecks described in Section~\ref{i:rendering}, we need to keep as few objects as possible, so as to issue as few \texttt{glDrawArray} calls as possible, but we want separated objects so that frustum culling is efficient.
In our NVE, there are more than a thousand materials and they are reused all over the scene.
Creating local objects to benefit from frustum culling is not an option in our case, so we merge all faces that share a material and draw them in a single \texttt{glDrawArray} call.

In order to do so, the content that was previously structured to optimize streaming is restructured live on the client to optimize rendering, as shown in Figure~\ref{d3:render-structure}.
That way, when geometry is downloaded for a new material, we know how much space we should allocate for the buffers, and when geometry is downloaded for a material that is already allocated, we simply edit the buffer and transmit it to the GPU\@.
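The per-material buffer editing can be sketched as follows. This is a plain-JavaScript illustration of the bookkeeping (the class and method names are hypothetical); in the real client, the tracked range maps to the update range that THREE.js lets us set on a buffer before uploading it to the GPU:

```javascript
// Sketch of a pre-allocated per-material buffer that tracks which
// range was modified, so only that range is re-uploaded to the GPU.
class MaterialBuffer {
  constructor(capacity) {
    this.positions = new Float32Array(capacity); // space allocated up front
    this.used = 0;            // number of floats currently filled
    this.updateOffset = null; // start of the range to re-upload
    this.updateCount = 0;     // length of the range to re-upload
  }

  append(values) {
    // New geometry for this material: write at the end of the buffer
    // and extend the dirty range accordingly.
    this.positions.set(values, this.used);
    if (this.updateOffset === null) this.updateOffset = this.used;
    this.updateCount = this.used + values.length - this.updateOffset;
    this.used += values.length;
  }

  flush(uploadRange) {
    // Transmit only the modified range to the GPU, then reset it.
    if (this.updateOffset !== null) {
      uploadRange(this.updateOffset, this.updateCount);
    }
    this.updateOffset = null;
    this.updateCount = 0;
  }
}
```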
\begin{figure}[ht]
    \centering

    \begin{tikzpicture}

        \node[align=center] at (1.5, -1) {DASH-3D\\Structure};

        \node at (-1.5, -3) {\texttt{seg0.obj}};
        \draw[fill=Pink] (0, -2) rectangle (3, -3);
        \node at (1.5, -2.5) {Material 1};
        \draw[fill=PaleGreen] (0, -3) rectangle (3, -4);
        \node at (1.5, -3.5) {Material 2};

        \node at (-1.5, -6) {\texttt{seg1.obj}};
        \draw[fill=Pink] (0, -5) rectangle (3, -7);
        \node at (1.5, -6) {Material 1};

        \node at (-1.5, -9) {\texttt{seg2.obj}};
        \draw[fill=PaleGreen] (0, -8) rectangle (3, -9);
        \node at (1.5, -8.5) {Material 2};
        \draw[fill=LightBlue] (0, -9) rectangle (3, -10);
        \node at (1.5, -9.5) {Material 3};

        \node[align=center] at (7.5, -1) {Renderer\\Structure};

        \node at (10.5, -3.5) {Object 1};
        \draw[fill=Pink] (6, -2) rectangle (9, -5);
        \node at (7.5, -3.5) {Material 1};

        \node at (10.5, -7) {Object 2};
        \draw[fill=PaleGreen] (6, -6) rectangle (9, -8);
        \node at (7.5, -7) {Material 2};

        \node at (10.5, -9.5) {Object 3};
        \draw[fill=LightBlue] (6, -9) rectangle (9, -10);
        \node at (7.5, -9.5) {Material 3};

        \node[minimum width=0.5cm,minimum height=2cm] (O1) at (6.25, -3.5) {};
        \draw[-{Latex[length=3mm]}, color=FireBrick] (3, -2.5) -- (O1);
        \draw[-{Latex[length=3mm]}, color=FireBrick] (3, -6) -- (O1);

        \node[minimum width=0.5cm,minimum height=2cm] (O2) at (6.25, -7) {};
        \draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -3.5) -- (O2);
        \draw[-{Latex[length=3mm]}, color=DarkGreen] (3, -8.5) -- (O2);

        \node[minimum width=3cm,minimum height=2cm] (O3) at (7.5, -9.5) {};
        \draw[-{Latex[length=3mm]}, color=RoyalBlue] (3, -9.5) -- (O3);

        \node at (1.5, -10.75) {$\vdots$};
        \node at (7.5, -10.75) {$\vdots$};
    \end{tikzpicture}

    \caption{Restructuration of the content on the renderer\label{d3:render-structure}}
\end{figure}
@@ -22,7 +22,6 @@ It is probably for those reasons that Rust is the \emph{most loved programming l
Our requirements are quite different from the ones we had to deal with in our JavaScript implementation.
In this setup, we want to build a system that stays as close as possible to our theoretical concepts.
Therefore, we do not have a full client in Rust (meaning an application to which you would give the URL of an MPD file and that would let you navigate in the scene while it is being downloaded).
Doing so is feasible, of course, but it is not what we want.
In order to be able to run simulations, we develop the bricks of the DASH client separately, with the access client and the media engine totally decoupled:
\begin{itemize}
    \item the \textbf{simulator} takes a user trace as a parameter; it then replays the trace using specific parameters of the access client and outputs a file containing the history of the simulation (what files have been downloaded, and when);
@@ -30,3 +29,4 @@ In order to be able to run simulations, we develop the bricks of the DASH client
\end{itemize}
When simulating experiments, we run the simulator on the many traces that we collected during user studies, and we then run the renderer program on its output to generate the images corresponding to the simulation.
We are then able to compute the PSNR between those frames and the ground-truth frames.
Doing so guarantees that our simulator is not affected by the performance of our renderer.
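As a reminder, the metric mentioned above is the standard definition of PSNR, not something specific to our implementation. For two $m \times n$ frames it can be written as:

```latex
% Standard PSNR between a rendered frame I and its ground-truth frame K
% of size m x n, where MAX is the peak pixel value (e.g. 255).
\[
\mathit{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left(I(i,j)-K(i,j)\right)^{2},
\qquad
\mathit{PSNR} = 10\log_{10}\frac{\mathit{MAX}^{2}}{\mathit{MSE}}.
\]
```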