report modified

This commit is contained in:
Amandinella 2015-03-12 17:37:07 +01:00
parent 383f5fac4e
commit adfbec44e9
4 changed files with 56 additions and 19 deletions


@@ -5,7 +5,15 @@
/// \defgroup detectionandmatching Detection and Matching module
///
/// Contains all the functions to compute the keypoints and
/// match them on different images. The algorithm used
/// to detect the keypoints and compute the descriptors is BRISK.
/// The matches are computed with a brute-force matcher provided
/// by OpenCV.
///
/// This program is meant to be used as follows:
/// \code
/// bin/DetectionAndMatching pathImg1 pathImg2 pathMask1 pathMask2
/// \endcode
/////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////


@@ -1,6 +1,6 @@
@article{brisk,
author = "Leutenegger, Stefan and Chli, Margarita and Siegwart, Roland Y.",
title = "{BRISK}: Binary Robust Invariant Scalable Keypoints",
year = "2011",
}
@@ -17,3 +17,10 @@
url = "http://docs.opencv.org/trunk/doc/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.html",
urldate = "2015-03-12"
}
@online{license,
author = "Open Source Initiative",
title = "The zlib/libpng License (Zlib)",
url = "http://opensource.org/licenses/Zlib",
urldate = "2015-03-12"
}


@@ -113,6 +113,7 @@
\input{chapters/risksManagement}
\input{chapters/actions}
\input{chapters/schedule}
\input{chapters/license}
\input{chapters/conclusion}


@@ -1,41 +1,62 @@
\subsection{Points detection and matching}
\subsubsection{Points detection}
\label{detection}
As a first step, the detection of keypoints was performed with the SURF (Speeded-Up Robust Features) algorithm (provided by
OpenCV) because of its good speed, but this algorithm lives in the nonfree module, which is not free to use in commercial
applications. That is why we decided to choose the BRISK (Binary Robust Invariant Scalable Keypoints)
algorithm (also provided by OpenCV), whose results are quite close to those of the SIFT (Scale-Invariant Feature Transform) algorithm.
The tests were therefore made with both the SURF and BRISK algorithms, but the final version uses BRISK, as SURF is patented.
As the area of interest in the images is the object, we provide a binary mask (computed during the segmentation) as input to the
SURF and BRISK detectors.
SURF is a sped-up version of SIFT (see Figure \ref{siftSurf}): instead of approximating the Laplacian of Gaussian (LoG) with a Difference
of Gaussians to build the scale space, SURF goes a little further and approximates the LoG with a box filter.
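The box-filter approximation is fast because box sums can be read from an integral image in constant time, whatever the filter size. A minimal, self-contained sketch of that trick (an illustration with our own function names, not OpenCV's actual code):

```cpp
#include <cstddef>
#include <vector>

// Build an integral image: I[y][x] holds the sum of all pixels
// above and to the left of (x, y). Assumes a non-empty image.
std::vector<std::vector<long>> buildIntegral(
    const std::vector<std::vector<int>>& img) {
  std::size_t h = img.size(), w = img[0].size();
  std::vector<std::vector<long>> I(h + 1, std::vector<long>(w + 1, 0));
  for (std::size_t y = 0; y < h; ++y)
    for (std::size_t x = 0; x < w; ++x)
      I[y + 1][x + 1] = img[y][x] + I[y][x + 1] + I[y + 1][x] - I[y][x];
  return I;
}

// Sum of the box [x0, x1) x [y0, y1) with only four lookups.
long boxSum(const std::vector<std::vector<long>>& I,
            std::size_t x0, std::size_t y0,
            std::size_t x1, std::size_t y1) {
  return I[y1][x1] - I[y0][x1] - I[y1][x0] + I[y0][x0];
}
```

This constant-time evaluation, independent of the box size, is what makes the SURF approximation cheaper than repeated Gaussian filtering.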
A detailed explanation of these algorithms can be found in the references \cite{brisk} (BRISK), \cite{surf} (SURF) and
\cite{sift} (SIFT).
\begin{figure}[H]
\centering
\includegraphics[scale=0.45]{img/LapinSiftSurf}
\caption{\label{siftSurf}Points detection for SIFT and SURF algorithm}
\end{figure}
For BRISK (see Figure \ref{brisk3415}), points of interest are identified across both the image and scale dimensions using a saliency criterion.
To increase the speed of computation, keypoints are detected in the octave layers of the image pyramid as well as
in the layers in-between. The location and the scale of each keypoint are obtained in the continuous domain via quadratic function fitting.
A sampling pattern consisting of points lying on appropriately scaled concentric circles is applied in the neighborhood of each keypoint to retrieve
gray values. Finally, the oriented BRISK sampling pattern is used to obtain pairwise brightness comparison results, which are
assembled into the binary BRISK descriptor. The BRISK constructor takes three input parameters which modify the results:
the threshold (thresh), the number of octaves (octaves) and the pattern scale (patternScale).
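The pairwise brightness comparisons that form the binary descriptor can be illustrated with a short self-contained sketch (hypothetical names; the real BRISK pattern, smoothing and pair selection are more involved):

```cpp
#include <bitset>
#include <cstddef>
#include <utility>
#include <vector>

// Schematic sketch of the descriptor-building principle: each bit
// stores the result of one pairwise brightness comparison between
// sampling-pattern points. `gray` holds the (smoothed) gray value at
// each pattern point and `pairs` the selected point pairs.
template <std::size_t Bits>
std::bitset<Bits> buildBinaryDescriptor(
    const std::vector<double>& gray,
    const std::vector<std::pair<int, int>>& pairs) {
  std::bitset<Bits> desc;
  for (std::size_t b = 0; b < pairs.size() && b < Bits; ++b) {
    // Bit b is set when the first point of the pair is brighter.
    desc[b] = gray[pairs[b].first] > gray[pairs[b].second];
  }
  return desc;
}
```

Because the descriptor is a bit string, comparing two descriptors reduces to a Hamming distance, which is much cheaper than the Euclidean distance used for SIFT or SURF descriptors.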
\begin{figure}[H]
\centering
\includegraphics[scale=0.45]{img/brisk3415}
\caption{\label{brisk3415}Detected points with BRISK algorithm}
\end{figure}
\subsubsection{Points matching}
We performed the points matching using a brute-force matcher provided by OpenCV, then applied filters to get rid of inaccurate matches.
For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one.
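What the brute-force matcher does with binary descriptors like BRISK's can be sketched in a few lines (a simplified stand-in for OpenCV's matcher with Hamming distance, using our own names):

```cpp
#include <bitset>
#include <cstddef>
#include <vector>

// For every descriptor of the first set, try every descriptor of the
// second set and keep the one at the smallest Hamming distance.
// Returns, for each query descriptor, the index of its closest train
// descriptor (-1 when the train set is empty).
template <std::size_t Bits>
std::vector<int> bruteForceMatch(
    const std::vector<std::bitset<Bits>>& query,
    const std::vector<std::bitset<Bits>>& train) {
  std::vector<int> matches(query.size(), -1);
  for (std::size_t q = 0; q < query.size(); ++q) {
    std::size_t best = Bits + 1;
    for (std::size_t t = 0; t < train.size(); ++t) {
      // Hamming distance = number of differing bits.
      std::size_t dist = (query[q] ^ train[t]).count();
      if (dist < best) {
        best = dist;
        matches[q] = static_cast<int>(t);
      }
    }
  }
  return matches;
}
```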
The filters used are:
\begin{itemize}
\item symmetric filter: a match found when image\_1 is taken as the base must also be found when image\_2 is taken as the base.
\item order constraint: the relative positions of the points are compared between image\_1 and image\_2; points with too large an error
are deleted.
\item threshold filter: a filter on the distance between the descriptors of the matching points. This filter is not used with BRISK
detection because the results are already quite good without it.
\item geometric filter: a filter which uses epipolar geometry and the fundamental matrix to discard outlier points.
\end{itemize}
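The first of these, the symmetric (cross-check) filter, can be sketched as follows (a minimal illustration with hypothetical names, not the project's actual code):

```cpp
#include <utility>
#include <vector>

// Symmetric filter: a match (i, j) found with image_1 as the base is
// kept only if matching in the other direction, with image_2 as the
// base, maps j back to i. `forward[i]` is the index matched in
// image_2 for keypoint i of image_1; `backward[j]` is the index
// matched in image_1 for keypoint j of image_2 (-1 means no match).
std::vector<std::pair<int, int>> symmetricFilter(
    const std::vector<int>& forward,
    const std::vector<int>& backward) {
  std::vector<std::pair<int, int>> kept;
  for (int i = 0; i < static_cast<int>(forward.size()); ++i) {
    int j = forward[i];
    if (j >= 0 && j < static_cast<int>(backward.size()) &&
        backward[j] == i) {
      kept.emplace_back(i, j);  // the match is mutual: keep it
    }
  }
  return kept;
}
```

Running the matcher twice (once in each direction) doubles the matching cost but removes many one-sided, unreliable matches before the geometric filter is applied.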
\begin{figure}[H]
\centering
\includegraphics[scale=0.45]{img/LapinSymetricGeometric}
\caption{Points matching obtained after symmetric and geometric filtering}
\end{figure}