

How to create a holographic display and camcorder

In the last part of the series, I talked about why depth sensors may not be ideal for a consumer-grade camcorder.

These depth sensors suffer from
  • A lack of miniaturized form factor
  • Poor cost effectiveness
  • Poor weather handling
  • Noticeable noise errors

Due to these limitations, the holographic display and camcorder will use alternatives to these depth sensors.

What are the depth sensor alternatives?

Cameras


We can use one or more cameras.  From one or more camera views, we can retrieve the depth information.

These camera configurations are
  • Monocular Camera
  • Stereoscopic Cameras
  • N-View Cameras


For the first prototype, we will limit our use case to indoors.

I haven't decided whether I should use a monocular camera, stereoscopic cameras, or n-view cameras.  This may largely be decided by how much time I have available.  Likely, I will use all of these camera configurations to compare and contrast the results in terms of design and ease of use.
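
To make the comparison concrete, below is a minimal sketch of retrieving depth from a stereoscopic pair with OpenCV's block matcher.  The image paths, focal length, and baseline are placeholder values, not measurements from this prototype.

    # Sketch: depth from a stereoscopic camera pair (OpenCV block matching).
    # Image paths, focal length, and baseline are placeholder values.
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # numDisparities must be a multiple of 16; blockSize must be odd.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

    # compute() returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    # Depth follows from similar triangles: depth = focal * baseline / disparity.
    focal_px = 700.0    # focal length in pixels (placeholder)
    baseline_m = 0.06   # distance between the two cameras in meters (placeholder)
    depth = np.where(disparity > 0,
                     focal_px * baseline_m / np.maximum(disparity, 1e-6),
                     0.0)

A monocular configuration has no second view to triangulate against, so it would need a learned depth model instead, while an n-view configuration generalizes the same triangulation across more baselines.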

The camcorder should record
  • A person
  • Indoors

What are the depth sensors?

A camera records a scene.
The scene is recorded in 2D.  That is, it has width and height.
There is no depth distance recorded with a camera.


A depth sensor records the z-axis distance to every depth point.

The z-axis distance is the depth distance: the distance between the depth sensor's emitter and the surface point of an object in the scene.

For example, imagine you shoot a single laser beam from the depth sensor emitter at some object.  Let's say it's a small cube box in the scene.

When the laser beam hits some point on the surface of the small box, you should see only one laser beam point reflected on the surface of the box.  

This reflected point is the depth point. 

This depth point is reflected back to the depth sensor's plane.  When it is reflected, the time of flight is measured to calculate the distance between the depth sensor emitter and the reflected surface point.
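
The distance calculation itself is simple: the pulse travels to the surface and back, so the one-way depth distance is half of the measured round trip.  A minimal sketch in Python:

    # Time of flight: the laser pulse travels to the surface and back,
    # so the one-way depth distance is half of the measured round trip.
    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def tof_distance(round_trip_seconds):
        """Depth distance (z-axis) for one measured time of flight."""
        return SPEED_OF_LIGHT * round_trip_seconds / 2.0

    # A round trip of about 6.67 nanoseconds corresponds to roughly 1 meter.
    print(tof_distance(6.67e-9))  # ~1.0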

Now, imagine multiple laser beams hitting the surface of the box.  This means we can sample the surface distance from the laser beam emitters at each point.

That is just one object.

What if we shoot many laser beams to all objects in the scene?

With this, we can sample the time of flight distances between all laser beams and the reflected depth points from all visible objects.
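
Assuming the sensor returns these samples as a dense grid of depth values, each sample can be back-projected into a 3D point with a pinhole camera model.  The intrinsics (fx, fy, cx, cy) below are placeholders.

    # Sketch: back-project a dense grid of depth samples into 3D points
    # with a pinhole camera model. The intrinsics are placeholder values.
    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """Turn an HxW depth map (meters) into an Nx3 array of XYZ points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    # Example: a fake 4x4 depth grid with every sample at one meter.
    points = depth_to_points(np.ones((4, 4)), fx=500.0, fy=500.0, cx=2.0, cy=2.0)
    print(points.shape)  # (16, 3)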



In the next part, I'll talk about how to use the cameras to retrieve the depth information.
After that, we can use the depth distance points to reconstruct a scene.


