How to use a Convolutional Neural Network to predict SIFT features



A feature locator is essential across all CV domains.  It is the basis of geometric transformations and epipolar geometry, all the way up to 3D mesh reconstruction.

Many techniques - SIFT-based matching and other SLAM pipelines - are available, but they require ideal environments to work in.

To address their shortcomings:

- sensitivity to low-texture environments
- sensitivity to low-light environments
- sensitivity to bright-light environments (like outdoor daylight above 20k lux)
- and many other issues

I propose a CNN-based neural network to detect 4 correspondences between an image A and an image B.

Since it is tricky for a neural network to predict a full 4x4 homogeneous matrix of rotation and translation directly, I separated the translation vector from the rotation component.
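Separating the two components amounts to slicing the homogeneous transform into its rotation block and translation vector. A minimal numpy sketch (the function name is mine, not from the post):

```python
import numpy as np

def split_rigid_transform(T):
    """Split a 4x4 homogeneous rigid transform into rotation R and translation t."""
    T = np.asarray(T, dtype=float)
    R = T[:3, :3]   # upper-left 3x3 rotation block
    t = T[:3, 3]    # right-column translation vector
    return R, t

# Example: a 90-degree rotation about Z combined with a translation (1, 2, 3).
T = np.array([[0, -1, 0, 1],
              [1,  0, 0, 2],
              [0,  0, 1, 3],
              [0,  0, 0, 1]], dtype=float)
R, t = split_rigid_transform(T)
```

The two parts can then be supervised (or predicted) independently, which avoids forcing one regression head to balance the very different scales of rotation and translation.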

Basically, the ground-truth data is precalculated with a generic SIFT pipeline plus RANSAC, which computes the correspondence sets P and P'.
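To illustrate the RANSAC half of that pipeline, here is a toy numpy sketch that fits a pure 2D translation between matched point sets despite outlier matches. This is a deliberate simplification: the real ground-truth generator would use an actual SIFT matcher and estimate a full homography, and all names here are mine.

```python
import numpy as np

def ransac_translation(P, P2, iters=200, thresh=0.5, seed=0):
    """Toy RANSAC: estimate a 2D translation P -> P2 despite outlier matches."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.integers(len(P))            # minimal sample: a single match
        t = P2[i] - P[i]                    # candidate translation
        err = np.linalg.norm(P + t - P2, axis=1)
        inliers = int((err < thresh).sum()) # count matches this model explains
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# 8 matches shifted by (3, -2); two of them are corrupted outliers.
P  = np.arange(16, dtype=float).reshape(8, 2)
P2 = P + np.array([3.0, -2.0])
P2[0] += 10.0
P2[5] -= 7.0
t, n_inliers = ransac_translation(P, P2)
```

The surviving inlier pairs play the role of P and P' in the training data.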

The L2 (Euclidean) distance is used between each predicted point and its ground truth.  There are 4 points, so the average distance is used as the delta between a predicted set and the ground-truth P'.
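The loss described above is just the mean of the four per-point Euclidean distances. A minimal sketch (the helper name is mine):

```python
import numpy as np

def mean_l2(pred, gt):
    """Average Euclidean distance between predicted and ground-truth points."""
    pred, gt = np.asarray(pred, dtype=float), np.asarray(gt, dtype=float)
    return float(np.linalg.norm(pred - gt, axis=1).mean())

gt   = np.array([[0, 0], [10, 0], [10, 10], [0, 10]])
pred = np.array([[3, 4], [10, 0], [10, 10], [0, 10]])  # one corner off by 5 px
loss = mean_l2(pred, gt)  # (5 + 0 + 0 + 0) / 4 = 1.25
```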

Using Theano, a neural network was created and trained over a few weeks.

The prediction errors were within 25% of the ground truth.

Further work:

I didn't calculate a confidence value, but would like to add one to the prediction graph.  This means we should be using cross-entropy instead of pure regression here.
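One way to realize that idea is a per-point confidence head trained with binary cross-entropy against whether each predicted correspondence is a valid match. A hedged numpy sketch of just the loss term (the labels and names are illustrative, not from the post):

```python
import numpy as np

def binary_cross_entropy(p, y, eps=1e-12):
    """Binary cross-entropy: p = predicted confidence, y = 1 if the match is valid."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)  # avoid log(0)
    y = np.asarray(y, dtype=float)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

y = np.array([1.0, 1.0, 0.0, 1.0])   # ground-truth validity of the 4 matches
p = np.array([0.9, 0.8, 0.2, 0.95])  # predicted confidences
loss = binary_cross_entropy(p, y)
```

In a combined graph, this term would sit alongside the L2 regression loss rather than replace it entirely.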

Hardware:

- CPU:
  - Intel(R) Core(TM)2 Duo CPU E8500 @ 3.16GHz
- Memory:
  - 2GB RAM
- GPU:
  - GeForce GTX 285
- BLAS:
  - Intel Math Kernel Library, version 10.2.4.032
- Compute:
  - CPU: double precision
  - GPU: single precision




