What are the depth sensors?

How to Create a holographic display and camcorder

In the last part of the series, I talked about why depth sensors may not be ideal for a consumer-grade camcorder.

These depth sensors lack
  • A miniaturized form factor
  • Cost effectiveness
  • Robust weather handling
  • Low noise error

Due to these limitations, the holographic display and camcorder will use alternatives to these depth sensors.

What are the depth sensor alternatives?

Cameras


We can use one or more cameras.  With the right camera configuration, we can retrieve the depth information from the recorded images.

These camera configurations are
  • Monocular Camera
  • Stereoscopic Cameras
  • N-View Cameras


For the first prototype, we will limit our use case to indoors.

I haven't decided whether to use a monocular camera, stereoscopic cameras, or n-view cameras.  The choice may be largely decided by how much time I have available.  Most likely, I will try all of these camera configurations to compare and contrast the results in terms of design and ease of use.
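As a preview of how stereoscopic cameras can recover depth, here is a minimal sketch using the standard pinhole-stereo relation Z = f · B / d. The focal length, baseline, and disparity values below are made-up examples, not measurements from any real rig.

```python
# Sketch: depth from stereo disparity under a simple pinhole model.
# Z = f * B / d, where f is the focal length in pixels, B is the
# baseline (distance between the two cameras) in meters, and d is
# the disparity in pixels. All numbers below are illustrative.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Return the depth in meters for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 700 px focal length, 6 cm baseline, and 35 px disparity
# place the point at 700 * 0.06 / 35 = 1.2 meters.
z = depth_from_disparity(focal_px=700.0, baseline_m=0.06, disparity_px=35.0)
print(round(z, 3))  # 1.2
```

The intuition: nearby objects shift more between the two views (larger disparity), so depth falls off as disparity grows.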

The camcorder should record
  • A person
  • Indoors

What are the depth sensors?

A camera records a scene in 2D.  That is, the recording has width and height, but no depth distance.


A depth sensor records the z-axis distance to every depth point.

The z-axis distance is the depth distance: the distance between the depth sensor's emitter and the surface point of an object in the scene.

For example, imagine you shoot one single laser beam from the depth sensor emitter to some object. Let's say, it's a small cube box in the scene. 

When the laser beam hits some point on the surface of the small box, you should see only one laser beam point reflected on the surface of the box.  

This reflected point is the depth point. 

This depth point is reflected back to the depth sensor's plane.  When it is reflected, the time of flight is measured to calculate the distance between the depth sensor's emitter and the reflected surface point.
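The time-of-flight calculation above can be sketched in a few lines: the measured round-trip time is halved because the light travels to the surface and back. The example timing value is made up for illustration.

```python
# Sketch: time-of-flight depth for a single laser pulse.
# distance = c * t / 2, where t is the measured round-trip time.

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s):
    """Distance in meters from the emitter to the reflected depth point."""
    return C * round_trip_s / 2.0

# A round trip of about 6.67 nanoseconds corresponds to roughly 1 meter.
print(tof_distance(6.67e-9))
```

Note how small the timescales are: sensing depth at centimeter resolution requires timing light pulses to tens of picoseconds, which is part of why time-of-flight hardware is expensive and noisy.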

Now, imagine multiple laser beams hitting the surface of the box.   This means we can sample the surface distance from the emitter at each point.

That is just one object.

What if we shoot many laser beams to all objects in the scene?

With this, we can sample the time-of-flight distances between the emitter and the reflected depth points on every visible object.
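Putting the pieces together, scene-wide sampling amounts to applying the single-beam calculation to a whole grid of beams. Here is a minimal sketch; the grid of round-trip times is invented for illustration.

```python
# Sketch: turn a grid of per-beam round-trip times into a depth map.
# Each entry is the round-trip time (seconds) for one laser beam.

C = 299_792_458.0  # speed of light in m/s

def depth_map(round_trip_grid):
    """Convert a 2D grid of round-trip times (s) to depths (m)."""
    return [[C * t / 2.0 for t in row] for row in round_trip_grid]

# Made-up 2x3 grid: the center beam returns sooner because it hits
# the near face of the cube box from the example above.
times = [
    [6.7e-9, 6.7e-9, 8.0e-9],
    [6.7e-9, 5.0e-9, 8.0e-9],
]
for row in depth_map(times):
    print([round(d, 2) for d in row])
```

A real sensor produces this kind of per-pixel depth image at video rate; the noise and range limits mentioned earlier show up as errors in exactly these per-beam timings.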



In the next part, I'll talk about how to use the cameras to retrieve the depth information.
After that, we can use the depth distance points to reconstruct the scene.


