A 3D streaming system is a system that dynamically collects 3D data.
The previous chapter deliberately remained vague about what \emph{3D data} actually is.
This chapter presents in detail the 3D data we consider and how it is rendered.
We also give insights about interaction and streaming by comparing the 3D case to the video one.

\section{What is a 3D model?}
\subsection{3D data}
A 3D model consists of a set of data.
Most classical 3D models are sets of meshes and textures, potentially arranged in a scene graph.
Such a model can typically contain the following:

\begin{itemize}
\item \textbf{Vertices} are simply 3D points;
\item \textbf{Faces} are polygons defined by a list of vertices;
\item \textbf{Texture coordinates} are 2D coordinates that map vertices to positions in a texture image;
\item \textbf{Normals} are 3D vectors that can give information about light behaviour on a face.
\end{itemize}
The Wavefront OBJ is one of the most popular formats; it describes all these elements in a text format.
A 3D model encoded in the OBJ format typically consists of two files: the materials file (\texttt{.mtl}) and the object file (\texttt{.obj}).

\paragraph{}
The materials file declares all the materials that the object file will reference.
A material consists of a name and photometric properties such as ambient, diffuse and specular colors, as well as texture maps.
Each face corresponds to a material, and a renderer can use the material's information to render the faces.
A simple material file is shown in Listing~\ref{i:mtl}.

\paragraph{}
The object file declares the 3D content of the objects.
It declares vertices, texture coordinates and normals from coordinates (e.g.\ \texttt{v 1.0 2.0 3.0} for a vertex, \texttt{vt 1.0 2.0} for a texture coordinate, \texttt{vn 1.0 2.0 3.0} for a normal).
These elements are numbered starting from 1.
Faces are declared using the indices of these elements. A face is a polygon with an arbitrary number of vertices, and it can be declared in several ways:

\begin{itemize}
\item \texttt{f 1 2 3} defines a triangle face that joins the first, the second and the third declared vertices;
\item \texttt{f 1/1 2/3 3/4} defines a similar triangle but with texture coordinates: the first texture coordinate is associated with the first vertex, the third with the second vertex, and the fourth with the third vertex;
\item \texttt{f 1//1 2//3 3//4} defines a similar triangle but referencing normals instead of texture coordinates;
\item \texttt{f 1/1/1 2/3/3 3/4/4} defines a triangle with both texture coordinates and normals.
\end{itemize}
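As an illustration, the face syntax above can be parsed in a few lines. The following sketch (hypothetical helper names, not code from the thesis) reads the \texttt{v}, \texttt{vt}, \texttt{vn} and \texttt{f} declarations and keeps the 1-based indices as the format prescribes:

```python
def parse_face_entry(entry):
    """Parse one face entry ("1", "1/2", "1//3" or "1/2/3") into a
    (vertex, texture coordinate, normal) triple of 1-based indices;
    missing components are returned as None."""
    parts = entry.split("/")
    vertex = int(parts[0])
    texcoord = int(parts[1]) if len(parts) > 1 and parts[1] else None
    normal = int(parts[2]) if len(parts) > 2 and parts[2] else None
    return vertex, texcoord, normal


def parse_obj(text):
    """Collect the vertices, texture coordinates, normals and faces
    declared in the text of an OBJ object file."""
    vertices, texcoords, normals, faces = [], [], [], []
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        if fields[0] == "v":
            vertices.append(tuple(map(float, fields[1:4])))
        elif fields[0] == "vt":
            texcoords.append(tuple(map(float, fields[1:3])))
        elif fields[0] == "vn":
            normals.append(tuple(map(float, fields[1:4])))
        elif fields[0] == "f":
            faces.append([parse_face_entry(e) for e in fields[1:]])
    return vertices, texcoords, normals, faces
```

Note that a face may mix the component count freely across files but not within one entry; the parser above simply records absent components as \texttt{None}.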

An example of object file is shown in Listing~\ref{i:obj}.

A typical 3D renderer follows Algorithm~\ref{f:renderer}.
\begin{algorithm}[th]
\SetKwData{Texture}{texture}
\SetKwData{Material}{material}
\SetKwData{Object}{object}
\SetKwData{Geometry}{geometry}
\SetKwData{Textures}{all\_textures}
\SetKwData{Materials}{all\_materials}
\SetKwData{Scene}{scene}
\SetKwData{True}{true}
\SetKwFunction{LoadGeometry}{load\_geometry}
\SetKwFunction{LoadTexture}{load\_texture}
\SetKwFunction{BindTexture}{bind\_texture}
\SetKwFunction{LoadMaterial}{load\_material}
\SetKwFunction{BindMaterial}{bind\_material}
\SetKwFunction{Draw}{draw}

\tcc{Initialization}
\For{$\Object\in\Scene$}{%
\LoadGeometry{\Object.\Geometry}\;
\LoadTexture{\Object.\Texture}\;
\LoadMaterial{\Object.\Material}\;
}
\BlankLine%
\BlankLine%
\tcc{Render loop}
\While{\True}{%
\For{$\Object\in\Scene$}{%
\BindTexture{\Object.\Texture}\;
\BindMaterial{\Object.\Material}\;
\Draw{\Object.\Geometry}\;
}
}
\caption{A typical 3D renderer\label{f:renderer}}
\end{algorithm}

The first task the renderer needs to perform is sending the data to the GPU\@: this is done in the loading loop at the beginning.
This step can be slow, but it is generally acceptable since it only occurs once at the beginning of the program.
Then, the renderer starts the rendering loop: at each frame, it renders the whole scene; for each object, it binds the corresponding texture and material to the GPU and then draws the object.
During the rendering loop, there are two things to consider regarding performance:
\begin{itemize}
\item obviously, the more faces a geometry contains, the slower the \texttt{draw} call is;
\item the more objects in the scene, the more overhead is caused by CPU/GPU communication at each iteration of the loop.
\end{itemize}
The way the loop works forces objects with different textures to be rendered separately.
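To make this overhead concrete, here is a minimal sketch of the render loop against a mock GPU that merely counts API calls (the class and function names are illustrative, not a real graphics API):

```python
class MockGPU:
    """Stand-in for a GPU driver that only counts API calls."""
    def __init__(self):
        self.loads = self.binds = self.draws = 0

    def load(self, data):
        self.loads += 1

    def bind(self, data):
        self.binds += 1

    def draw(self, geometry):
        self.draws += 1


def render_frame(gpu, scene):
    # Two binds and one draw per object: the per-frame overhead grows
    # linearly with the number of objects in the scene.
    for obj in scene:
        gpu.bind(obj["texture"])
        gpu.bind(obj["material"])
        gpu.draw(obj["geometry"])


scene = [{"texture": f"t{i}", "material": f"m{i}", "geometry": f"g{i}"}
         for i in range(100)]
gpu = MockGPU()
for obj in scene:          # initialization: data is sent to the GPU once
    gpu.load(obj["geometry"])
render_frame(gpu, scene)   # one iteration of the render loop
```

With 100 objects, a single frame already issues 200 bind calls and 100 draw calls, which is why renderers try to group objects sharing the same texture and material.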
\end{algorithm}

Algorithm~\ref{f:frustum-culling} is a variation of Algorithm~\ref{f:renderer} with frustum culling.
A renderer that uses a single object avoids the overhead, but fails to benefit from frustum culling.
An optimized renderer needs to find a compromise between too fine a partition of the scene, which introduces overhead, and too coarse a partition, which leads to useless rendering.
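This compromise can be illustrated with a back-of-the-envelope cost model (the formula and its constants are illustrative assumptions, not measurements):

```python
def frame_cost(n_objects, total_faces, visible_fraction,
               overhead_per_object=100.0, cost_per_face=1.0):
    """Rough cost of one frame with frustum culling, for a scene split
    into n_objects equally sized objects."""
    # Objects overlap the frustum boundary: at least one extra object is
    # drawn beyond the truly visible share of the scene.
    drawn = min(n_objects, max(1, round(n_objects * visible_fraction) + 1))
    faces_per_object = total_faces / n_objects
    return drawn * (overhead_per_object + faces_per_object * cost_per_face)


# A medium-grained partition beats both extremes: a single object draws
# the whole scene, while a very fine partition pays the per-object overhead.
costs = {n: frame_cost(n, 1_000_000, 0.1) for n in (1, 100, 100_000)}
```

Under this toy model, with a million faces and a tenth of the scene visible, one hundred objects is cheaper per frame than either one object or one hundred thousand objects.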
\fresh{}
\section{Implementation details}

During this thesis, a lot of software has been developed, and for this software to be successful and efficient, we took care of choosing the appropriate languages.
When it comes to 3D streaming systems, there are two kinds of software that we need.
\begin{itemize}
\item \textbf{Interactive applications} that can run on as many devices as possible, whether desktop or mobile, in order to experiment and to conduct user studies. For this context, we chose the \textbf{JavaScript language}, since it can run on many devices and it has great support for WebGL\@.
\section{Similarities and differences between video and 3D\label{i:video-vs-3d}}

Contrary to what one might think, the video streaming setting and the 3D streaming setting share many similarities: at a higher level of abstraction, both systems allow a user to access remote content without having to wait until everything is loaded.
Analyzing the similarities and the differences between the video and the 3D scenarios, as well as having knowledge of the video streaming literature, is key to developing an efficient 3D streaming system.
\subsection{Chunks of data}

In order to perform streaming, data needs to be segmented so that a client can request chunks of data and display them to the user while requesting other chunks.
In video streaming, data chunks typically consist of a few seconds of video.
In mesh streaming, some progressive mesh approaches encode a base mesh that contains low-resolution geometry and textures, together with chunks that progressively increase the resolution of the base mesh.
Otherwise, a mesh can also be segmented by separating geometry and textures, creating some chunks that contain faces of the model and other chunks that contain textures.
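A sketch of this second segmentation strategy (the chunk layout is hypothetical, for illustration only): geometry is cut into fixed-size groups of faces, and each texture becomes its own chunk.

```python
def segment_mesh(faces, textures, faces_per_chunk=3):
    """Split a mesh into independently downloadable chunks: groups of
    faces on one side, one chunk per texture on the other."""
    chunks = [{"type": "geometry",
               "faces": faces[start:start + faces_per_chunk]}
              for start in range(0, len(faces), faces_per_chunk)]
    chunks += [{"type": "texture", "name": name} for name in textures]
    return chunks


# Four faces and two textures: two geometry chunks plus two texture chunks.
chunks = segment_mesh([(1, 2, 3), (1, 3, 4), (4, 5, 6), (2, 3, 7)],
                      ["wall.png", "roof.png"], faces_per_chunk=3)
```

A client can then schedule these chunks in any order, for example interleaving geometry and texture requests depending on what most improves the current view.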
\subsection{Data persistence}

It can be chosen directly by the user or automatically determined by analysing the available bandwidth.

\caption{The different resolutions available for a Youtube video}
\end{figure}
Similarly, recent works in 3D streaming have proposed different ways to progressively stream 3D models, displaying a low resolution to the user without latency, and supporting interaction with the model while the details are being downloaded.
Such strategies are reviewed in Section~\ref{sote:3d-streaming}.
\subsection{Media types}

In video, those media typically are images, sounds, and possibly subtitles, whereas in 3D, they are geometry and textures.
In both cases, an algorithm for content streaming has to acknowledge those different media types and manage them correctly.
In video streaming, most of the data (in terms of bytes) is used for images.
Thus, the most important thing a video streaming system should do is optimize image streaming.
That is why, on a video on Youtube for example, there may be 6 resolutions for images (144p, 240p, 320p, 480p, 720p and 1080p) but only 2 resolutions for sound.
This is one of the main differences between video and 3D streaming: in a 3D scene, geometry and texture sizes are approximately the same, and balancing between those two types of content is a key problem.
\subsection{Interaction}

The way of interacting with the content is probably the most important difference between video and 3D.
In a video interface, there is only one degree of freedom: time.
The only thing a user can do is let the video play, pause or resume it, or jump to another time in the video.
Even though these interactions seem easy to handle, giving the best possible experience to the user is already challenging. For example, to perform these few actions, Youtube provides the user with multiple options.
\begin{itemize}
\item To pause or resume the video, the user can:
\begin{itemize}
\item click the video;
\item press the \texttt{K} key;
\item press the space key.
\end{itemize}
\item To navigate to another time in the video, the user can:
\begin{itemize}
\item click the timeline of the video where they want;
\item press the left arrow key to move 5 seconds backwards;
\item press the right arrow key to move 5 seconds forwards;
\item press the \texttt{J} key to move 10 seconds backwards;
\item press the \texttt{L} key to move 10 seconds forwards;
\item press one of the number keys (on the first row of the keyboard, below the function keys, or on the numpad) to jump to the corresponding decile of the video.
\end{itemize}
\end{itemize}
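The seek controls above essentially form a table from keys to timeline actions; a toy model (not Youtube's actual implementation, key names are illustrative) could look like this:

```python
def handle_key(key, position, duration):
    """Return the new playback position (in seconds) after a key press."""
    offsets = {"ArrowLeft": -5, "ArrowRight": +5,  # 5-second seeks
               "KeyJ": -10, "KeyL": +10}           # 10-second seeks
    if key in offsets:
        position += offsets[key]
    elif key.isdigit():
        # number keys jump to the corresponding decile of the video
        position = duration * int(key) / 10
    return max(0, min(duration, position))  # clamp to the video bounds
```

Clamping matters: pressing the left arrow two seconds into the video must land on the beginning, not on a negative time.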
There are also controls for other options: for example, \texttt{F} puts the player in fullscreen mode, up and down arrows change the sound volume, \texttt{M} mutes the sound and \texttt{C} activates the subtitles.
All the interactions are summed up in Figure~\ref{i:youtube-keyboard}.
\newcommand{\relativeseekcontrol}{LightBlue}
Those interactions are different if the user is using a mobile device.
\begin{itemize}
\item To pause a video, the user touches the screen once to make the timeline and the buttons appear, then touches the pause button at the center of the screen.
\item To resume a video, the user touches the play button at the center of the screen.
\item To navigate to another moment of the video, the user can:
\begin{itemize}
\item double touch the left of the screen to move 5 seconds backwards;
\item double touch the right of the screen to move 5 seconds forwards.
\end{itemize}
\end{itemize}
When it comes to 3D, there are many approaches to managing user interaction.
Some interfaces mimic the video scenario, where the only variable is the time and the camera follows a predetermined path on which the user has no control.
These interfaces are not interactive, and can be frustrating to the user who might feel constrained.
Some other interfaces add 2 degrees of freedom to the timeline: the user does not control the position of the camera but they can control the angle. This mimics the scenario of the 360 video.
This is typically the case of the video game \emph{nolimits 2: roller coaster simulator}, which works with VR devices (Oculus Rift, HTC Vive, etc.) where the only interaction the user has is turning their head.
Finally, most other interfaces give at least 5 degrees of freedom to the user: 3 for the coordinates of the position of the camera, and 2 for the camera angle (assuming the up vector cannot change; some interfaces allow changing it, giving a sixth degree of freedom).
The most common controls are the trackball controls, where the user rotates the object like a ball \href{https://threejs.org/examples/?q=controls\#misc_controls_trackball}{(live example here)}, and the orbit controls, which behave like the trackball controls but preserve the up vector \href{https://threejs.org/examples/?q=controls\#misc_controls_orbit}{(live example here)}.
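A minimal sketch of how orbit controls preserve the up vector (a simplified model, not three.js's implementation): the camera position is parameterized by two angles around a target point, so the vertical axis never rolls.

```python
import math


def orbit_camera(target, distance, theta, phi):
    """Position of an orbiting camera around `target`: `theta` rotates
    around the vertical (up) axis and `phi` is the elevation, clamped so
    the camera never flips over the poles."""
    phi = max(-math.pi / 2 + 1e-3, min(math.pi / 2 - 1e-3, phi))
    x = target[0] + distance * math.cos(phi) * math.cos(theta)
    y = target[1] + distance * math.sin(phi)
    z = target[2] + distance * math.cos(phi) * math.sin(theta)
    return (x, y, z)
```

Because the position always lies on a sphere of fixed radius around the target and the elevation is clamped, the camera's notion of "up" stays aligned with the world's vertical axis by construction.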
These controls are typically used in shooting video games: the mouse rotates the camera, and the keyboard keys translate it.
\subsection{Relationship between interface, interaction and streaming}
In both video and 3D systems, streaming affects interaction.
For example, in a video streaming scenario, if a user sees that the video is fully loaded, they might start moving around on the timeline, but if they see that the streaming is just enough not to stall, they might prefer not interacting and just watch the video.
If the streaming stalls for too long, the user might seek somewhere else hoping for the video to resume, or get frustrated and leave the video.
The same types of behaviour occur in 3D streaming: if a user is somewhere in a scene and sees more data appearing, they might wait until enough data has arrived, but if they see that nothing is happening, they might leave and look for data somewhere else.
Those examples show how streaming can affect interaction, but interaction also affects streaming.
In a video streaming scenario, if a user is watching peacefully without interacting, the system just has to request the next chunks of video and display them.
However, if a user starts seeking to a different time in the video, the streaming will most likely stall until the system is able to gather the data it needs to resume playback.
Just like in the video setup, the way a user navigates in a networked virtual environment affects the streaming.