How to Create a Holographic Display and Camcorder
In the last part of the series "How to Create a Holographic Display and Camcorder", I talked about how to use the cameras to calculate the disparity between photos. To do so, we have to locate the objects in the two photos, and this should be done by a machine.
In this part of the series, I'll talk about how to teach a machine to locate objects in photos.
To calculate the depth information or the disparity of an object, we need to locate where the object is in each photo.
[Insert an illustration of an object and a camera translated along the X-axis]
How to locate an object in each photo?
In each photo, we need to find the same object. Then, we should calculate the disparity between the object in the first photo and the object in the second photo.
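As a toy sketch of that second step (the pixel coordinates below are made-up values for illustration), the disparity of one matched point is simply the difference of its horizontal coordinates in the two photos:

```python
# Toy sketch: disparity of one matched point between two photos.
# The pixel coordinates below are made-up values for illustration.

def disparity(x_first: float, x_second: float) -> float:
    """Horizontal shift of the same point between the two photos."""
    return x_first - x_second

# Tip of the cat's left ear, located in each photo (hypothetical coordinates).
ear_in_photo_a = (312.0, 140.0)  # (x, y) in the first photo
ear_in_photo_b = (296.0, 140.0)  # (x, y) in the second photo

d = disparity(ear_in_photo_a[0], ear_in_photo_b[0])
print(d)  # 16.0 pixels; a larger disparity means the point is closer
```

Everything in this sketch assumes the hard part, locating the same point in both photos, is already solved; that is exactly the problem the rest of this post is about.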
So, how do we locate the same object in each photo?
Let's say we want to locate the tip of a cat's left ear in two photos. Each photo shows the same cat, but at a different location.
Could we teach a machine to recognize a cat in a photo, in general, using machine learning? We could, but such models find objects with only about a 70% confidence rate, and they take time to process each image.
I'll write about machine learning in another blog post.
We need a fast locating algorithm that finds the tip of the cat's ear almost every time.
Using a gradient value between a pixel and its neighboring pixels?
Going back to the basics of image pixels, we can mark a point wherever there is a steep difference between the intensity value of the current pixel and the intensity values of its neighboring pixels.
This is called taking a gradient value between pixels.
Shouldn't this give us nice locating points in an image?
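Here is a minimal sketch of that idea; the tiny grayscale image is a made-up example with a bright vertical edge, and the gradient at a pixel is taken as the central differences of intensity in the horizontal and vertical directions:

```python
# Minimal sketch: intensity gradient at a pixel via central differences.
# 'image' is a tiny made-up grayscale image (rows of intensity values 0-255)
# with a bright vertical edge between columns 2 and 3.
image = [
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
]

def gradient(img, row, col):
    """Return (dx, dy): central-difference intensity changes at (row, col)."""
    dx = img[row][col + 1] - img[row][col - 1]
    dy = img[row + 1][col] - img[row - 1][col]
    return dx, dy

dx, dy = gradient(image, 1, 2)   # pixel right at the bright edge
print(dx, dy)  # 190 0 -> steep horizontal change: a likely locating point
```

A pixel in the flat region, such as `(1, 1)`, gives `(0, 0)`: no intensity change, so nothing to locate there.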
There are multiple issues with this approach. In this sample, we are using photos that show a one-axis translation.
But in reality, the object in photo A may be at an entirely different location in photo B. The object may look bigger, rotated, or seen from a different angle.
So if we take the gradient values alone, they would no longer give us the same reliable locations once the object in photo B is rotated.
Using a pixel corner as a locating point in an image
To deal with the rotation issue, we can use an L-shaped gradient, in other words, a corner. Under any rotation, an L is still an L. Such a point is called an interest point.
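The intuition can be sketched with a toy corner check (this is not a production detector such as Harris or FAST, and the image and threshold are made up): a pixel counts as an interest point only when intensity changes steeply in both directions at once.

```python
# Toy corner check: a pixel is an interest point when intensity changes
# steeply in BOTH the horizontal and vertical directions -- an "L" of
# gradients, which stays an "L" under rotation. Not a production detector.
image = [
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10],
    [10, 10, 200, 200, 200],
    [10, 10, 200, 200, 200],
    [10, 10, 200, 200, 200],
]

THRESHOLD = 100  # made-up intensity-difference threshold

def is_interest_point(img, row, col):
    dx = abs(img[row][col + 1] - img[row][col - 1])
    dy = abs(img[row + 1][col] - img[row - 1][col])
    return dx > THRESHOLD and dy > THRESHOLD  # steep in both axes: a corner

print(is_interest_point(image, 2, 2))  # True  -> corner of the bright square
print(is_interest_point(image, 3, 3))  # False -> inside the square, flat
```

Note that a pixel on a straight edge of the square, such as `(2, 3)`, is also rejected: it has a steep change in only one direction, which is exactly why edges alone made unreliable locating points above.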
How to find the same locating points in other photos?
Okay, so we have located multiple interest points in photo A. How do we find the same interest points in photo B?
Let's say we have an interest point X in photo A,
and we want to find the matching interest point X' in photo B.
How do we know if they are indeed the same point?
One way is to take the neighboring pixel values of the interest point X and compare them with the neighboring pixel values of the interest point X'.
These neighboring pixel values of an interest point are called a descriptor.
A locating point is called an interest point.
An interest point together with its descriptor is called a feature.
- Interest Point
- A locating point is called an interest point.
- Descriptor
- The neighboring pixel values of an interest point are one example of a descriptor.
- Feature
- An interest point together with its descriptor is called a feature.
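Putting these definitions together, here is a sketch using the simplest descriptor above, the 3x3 patch of neighboring pixel values, compared between photos with a sum of squared differences; the images and coordinates are made up for illustration:

```python
# Sketch: a descriptor as the 3x3 neighborhood around an interest point,
# compared between two photos with a sum of squared differences (SSD).
# Images and coordinates below are made up for illustration.

def descriptor(img, row, col):
    """The 3x3 neighborhood of pixel values around an interest point."""
    return [img[r][c] for r in range(row - 1, row + 2)
                      for c in range(col - 1, col + 2)]

def ssd(desc_a, desc_b):
    """Sum of squared differences: 0 means the neighborhoods match exactly."""
    return sum((a - b) ** 2 for a, b in zip(desc_a, desc_b))

photo_a = [[10, 10, 10, 10],
           [10, 200, 200, 10],
           [10, 200, 200, 10],
           [10, 10, 10, 10]]
# The same pattern, shifted one pixel to the left in photo B.
photo_b = [[10, 10, 10, 10],
           [200, 200, 10, 10],
           [200, 200, 10, 10],
           [10, 10, 10, 10]]

x = descriptor(photo_a, 2, 2)        # interest point X in photo A
x_prime = descriptor(photo_b, 2, 1)  # candidate X' in photo B
print(ssd(x, x_prime))  # 0 -> the two interest points match
```

In practice we would compute the SSD of X against every candidate interest point in photo B and keep the one with the lowest score; a raw pixel patch like this breaks down under rotation and scale, which is what the algorithms below are designed to handle.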
There are actually multiple ways to define a descriptor. For this project of creating a Holographic Display and a Holographic Camcorder, we have several options.
Scale-Invariant Feature Transform (SIFT)
There are multiple locating feature algorithms to consider.
SIFT
FAST
HOG
SURF
GLOH
[Briefly explain each of these]
[Todo: Provide Machine Learning and Deep Learning algorithms in comparison in the future]
[Insert an illustration: the intersection of a set A and a set B is what is common to both sets]
How to extract the depth of an interest point in a photo?
To extract the depth of an interest point in photo A in comparison with photo B, we will use a method called Triangulation.
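As a small preview (all numbers are made-up values, and this assumes the simple case of two identical, horizontally separated cameras), triangulation reduces to one formula: depth = focal length x baseline / disparity.

```python
# Preview sketch of stereo triangulation for two identical cameras separated
# horizontally by a baseline B, with focal length f measured in pixels:
#   depth Z = f * B / disparity
# All numbers below are made-up values for illustration.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth in meters of a point from its disparity between two photos."""
    return focal_px * baseline_m / disparity_px

f = 800.0   # focal length in pixels (hypothetical)
b = 0.1     # 10 cm between the two cameras (hypothetical)
print(depth_from_disparity(f, b, 16.0))  # 5.0 -> the point is 5 meters away
```

Notice how depth shrinks as disparity grows: nearby objects shift a lot between the two photos, faraway objects barely move.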
In the next part of the series, I'll talk about Triangulation.