Did more shit
@@ -792,3 +792,12 @@
  year={2010},
  publisher={Springer}
}

@inproceedings{gaillard2015urban,
  title={Urban data visualisation in a web browser},
  author={Gaillard, J{\'e}r{\'e}my and Vienne, Alexandre and Baume, R{\'e}mi and Pedrinis, Fr{\'e}d{\'e}ric and Peytavie, Adrien and Gesqui{\`e}re, Gilles},
  booktitle={Proceedings of the 20th International Conference on 3D Web Technology},
  pages={81--88},
  year={2015},
  organization={ACM}
}

@@ -1,8 +1,9 @@
\copied{}

\section{3D Bookmarks and Navigation Aids}

Devising an ergonomic technique for browsing 3D environments through a 2D interface is difficult.
\fresh{}
The only use for 3D streaming is to allow users to interact with the content while it is being downloaded.
\copied%
However, devising an ergonomic technique for browsing 3D environments through a 2D interface is difficult.
Controlling the viewpoint in 3D (6 DOFs) with 2D devices is not only inherently challenging but also strongly task-dependent. In their review,~\citep{interaction-3d-environment} distinguish between several types of camera movements: general movements for exploration (e.g., navigation with no explicit target), targeted movements (e.g., searching and/or examining a model in detail), specified trajectories (e.g., a cinematographic camera path), etc.
For each type of movement, specialized 3D interaction techniques can be designed.
In most cases, rotating, panning, and zooming movements are required, and users are consequently forced to switch back and forth among several navigation modes, leading to interactions that are too complicated overall for a layperson.

@@ -1,6 +1,10 @@
\fresh{}
\section{3D Streaming\label{sote:3d-streaming}}

In this thesis, our objective is to stream 3D scenes.
Even though 3D streaming is not the most researched field, special attention has been paid to 3D content compression, in particular progressive compression, which is a prerequisite for 3D streaming.
In the next sections, we review related work on 3D streaming, from 3D compression and structuring to 3D interaction.

\subsection{Compression and structuring}

The most popular compression model for 3D is progressive meshes: introduced in~\citep{progressive-meshes}, they allow progressive transmission of a mesh by sending a low-resolution mesh first, called the \emph{base mesh}, and then transmitting detail information that a client can use to increase the resolution.

@@ -66,16 +70,19 @@ After content preparation, the mesh consists in a base mesh and a sequence of pa
Thus, a client can download the base mesh first, display it to the user, and keep downloading and displaying details as time goes by.
This process reduces the time a user has to wait before seeing something, and increases the quality of experience.
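
A minimal sketch of what such a client loop could look like is given below; the types and the fetch/apply/render functions are hypothetical placeholders rather than the API of any cited work.

\begin{verbatim}
// Sketch of progressive-mesh loading on a client (TypeScript).
// All declared functions are hypothetical stand-ins for a real decoder/renderer.
interface Mesh { vertices: Float32Array; indices: Uint32Array; }
interface Refinement { data: ArrayBuffer; }   // e.g. a batch of vertex splits

declare function fetchBaseMesh(url: string): Promise<Mesh>;
declare function fetchRefinement(url: string, level: number): Promise<Refinement | null>;
declare function applyRefinement(mesh: Mesh, patch: Refinement): Mesh;
declare function render(mesh: Mesh): void;

async function progressiveLoad(url: string): Promise<void> {
  let mesh = await fetchBaseMesh(url);    // coarse base mesh, displayed immediately
  render(mesh);
  for (let level = 0; ; level++) {
    const patch = await fetchRefinement(url, level);
    if (patch === null) break;            // no more detail available
    mesh = applyRefinement(mesh, patch);  // increase the resolution
    render(mesh);                         // quality improves while downloading continues
  }
}
\end{verbatim}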

These methods have been extensively researched \citep{isenburg2006streaming,courbet2010streaming,bayazit20093,mamou2010shape}, but very few of these methods can handle meshes with attributes, such as texture coordinates.
These methods have been extensively researched \citep{bayazit20093,mamou2010shape}, but very few of these methods can handle meshes with attributes, such as texture coordinates.

\citep{streaming-compressed-webgl} develop a dedicated progressive compression algorithm for efficient decoding, so that it can be used in web clients.
\citep{streaming-compressed-webgl} develop a dedicated progressive compression algorithm based on iterative decimation, for efficient decoding, so that it can be used in web clients.
With the same objective, \citep{pop-buffer} proposes POP buffer, a progressive compression method based on quantization that allows efficient decoding.
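
As a rough illustration of the quantization idea in general (not of the specific POP buffer algorithm), encoding vertex coordinates on fewer bits yields a coarser but much smaller representation that can be transmitted first:

\begin{verbatim}
// Uniform scalar quantization of a coordinate in [min, max] (TypeScript sketch).
function quantize(value: number, min: number, max: number, bits: number): number {
  const levels = (1 << bits) - 1;
  const t = (value - min) / (max - min);   // normalise to [0, 1]
  return Math.round(t * levels);           // integer grid coordinate, fits in `bits` bits
}

function dequantize(q: number, min: number, max: number, bits: number): number {
  const levels = (1 << bits) - 1;
  return min + (q / levels) * (max - min);
}

// 4 bits give a coarse position, 12 bits a much finer one.
console.log(dequantize(quantize(0.7312, 0, 1, 4), 0, 1, 4));    // ~0.733
console.log(dequantize(quantize(0.7312, 0, 1, 12), 0, 1, 12));  // ~0.7311
\end{verbatim}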

Following this, many approaches use multi-triangulation, which creates mesh fragments at different levels of resolution and encodes the dependencies between fragments in a directed acyclic graph.
\citep{batched-multi-triangulation} proposes a GPU-optimized version of multi-triangulation that pushes its performance to real time.
\citep{batched-multi-triangulation} proposes Nexus: a GPU-optimized version of multi-triangulation that pushes its performance to real time.
It is notably used in 3DHOP (3D Heritage Online Presenter, \citep{3dhop}), a framework to easily build web interfaces that present 3D models to users in the context of cultural heritage.

More recently, to answer the need for a standard format for 3D data, the Khronos group proposed a generic format called glTF (GL Transmission Format,~\citep{gltf}) to handle all types of 3D content representations: point clouds, meshes, animated models, etc.\
Each of these approaches defines its own compression and coding for a single mesh.
However, users are frequently interested in scenes that contain many meshes, and the need to structure content emerged.

To answer those issues, the Khronos group proposed a generic format called glTF (GL Transmission Format,~\citep{gltf}) to handle all types of 3D content representations: point clouds, meshes, animated models, etc.\
glTF is based on a JSON file, which encodes the structure of a scene of 3D objects.
It contains a scene tree with cameras, meshes, buffers, materials, textures, animations, and skinning information.
Although relevant for compression, transmission, and in particular streaming, this standard does not yet consider view-dependent streaming, which is required for remote visualisation of large scenes.
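
For illustration, here is an abridged sketch of the kind of JSON scene description glTF relies on, written as a TypeScript object literal; the field names follow the published glTF 2.0 schema, but the asset itself is invented and most required pieces (accessors, buffer views, materials) are omitted.

\begin{verbatim}
// Abridged, hypothetical glTF-like scene description (TypeScript object literal).
const gltfScene = {
  asset: { version: "2.0" },
  scenes: [{ nodes: [0, 1] }],                 // roots of the scene tree
  nodes: [
    { name: "building", mesh: 0 },             // a node referencing mesh 0
    { name: "mainCamera", camera: 0 },         // a node referencing camera 0
  ],
  cameras: [{ type: "perspective", perspective: { yfov: 0.8, znear: 0.1 } }],
  meshes: [{
    primitives: [{
      attributes: { POSITION: 0, TEXCOORD_0: 1 },  // accessor indices (accessors omitted)
      indices: 2,
    }],
  }],
  buffers: [{ uri: "building.bin", byteLength: 1024 }],  // binary geometry payload
};
\end{verbatim}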