
How to reduce TOF errors in AR glasses

In this post, I will describe how we reduced the noise of the Time-of-Flight (ToF) sensor in our AR glasses prototype.

Types of noise
- systematic noise
   note: caused by imperfect sinusoidal modulation of the illumination signal
- random noise
   note: dominated by shot noise; suppress with bilateral filtering
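To make the bilateral-filtering fix concrete, here is a minimal Python/NumPy sketch of an edge-preserving filter on a depth map. The kernel radius and sigma values are illustrative, not tuned for any particular sensor:

```python
import numpy as np

def bilateral_filter(depth, radius=2, sigma_s=1.0, sigma_r=0.05):
    """Edge-preserving smoothing of a depth map (meters).

    sigma_s controls the spatial falloff (pixels); sigma_r controls how large
    a depth difference still counts as "the same surface" (meters).
    """
    H, W = depth.shape
    out = np.zeros_like(depth)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            patch = depth[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            # spatial weight: nearby pixels count more
            w_s = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            # range weight: pixels at a similar depth count more (preserves edges)
            w_r = np.exp(-(patch - depth[i, j]) ** 2 / (2 * sigma_r ** 2))
            w = w_s * w_r
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

On a flat surface this averages away shot noise, while across a depth discontinuity the range weight collapses to ~0, so the edge is left intact.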

Motion artifacts reduction
note: when the target object moves, motion artifacts appear in the ToF output. This happens because the ToF correlation samples are recorded sequentially, so motion between samples corrupts the recovered phase; fast motion can also introduce Doppler effects.

fix:
- use the plus and minus rules
   -- reference:
       1) "Time of flight motion compensation revisited" (2014)
       2) "Time-of-Flight Cameras: Principles, Methods and Applications" (2012)
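For context, here is a sketch of the standard four-phase depth computation, which shows why sequential capture causes motion artifacts. The 0/90/180/270 sampling convention, the sign convention, and the modulation frequency below are one common choice for illustration; real sensors differ:

```python
import numpy as np

C = 3e8          # speed of light (m/s)
F_MOD = 20e6     # modulation frequency (Hz) -- illustrative value

def four_phase_depth(a0, a90, a180, a270):
    """Depth from four correlation samples at 0, 90, 180, 270 degrees.

    Assumes each sample has the form B + A*cos(phi - theta). Because the
    four samples are captured one after another, any object motion between
    them mixes samples from different depths and corrupts phi -- the source
    of the motion artifacts discussed above.
    """
    phase = np.mod(np.arctan2(a90 - a270, a0 - a180), 2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)
```

With these numbers the unambiguous range is C / (2 * F_MOD) = 7.5 m; beyond that the phase wraps.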


Physics-based MPI reduction

fix:
- in the absence of noise, 2K+1 frequency measurements suffice to resolve K interfering paths.
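To make the counting argument concrete, here is a NumPy sketch of the multi-frequency measurement model behind that claim: K return paths contribute 2K unknowns (one amplitude and one depth per path), and each modulation frequency adds one complex measurement. The frequency values, depths, and amplitudes are all illustrative:

```python
import numpy as np

C = 3e8                                  # speed of light (m/s)
K = 2                                    # number of interfering return paths
freqs = 10e6 * np.arange(1, 2 * K + 2)   # 2K+1 modulation frequencies (Hz)

depths = np.array([1.5, 2.4])            # per-path depths (m), illustrative
amps = np.array([1.0, 0.4])              # per-path amplitudes

# Complex measurement at each frequency f: sum_k a_k * exp(-j * 4*pi*f*d_k / C).
# K paths contribute 2K unknowns (a_k, d_k), so 2K+1 measurements pin them
# down in the noiseless case.
y = (amps * np.exp(-4j * np.pi * np.outer(freqs, depths) / C)).sum(axis=1)
```

Inverting this model for the per-path depths is exactly what the per-pixel methods in the next section do.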


Per-pixel temporal processing of raw ToF measurements

fix:
- matrix pencil method
- Prony's method
- orthogonal matching pursuit
- ESPRIT / MUSIC
- atomic norm regularization
- light transport model with sparse and low-rank components
- phasor imaging

reference:
- "Signal processing for time-of-flight imaging sensors: An introduction to inverse problems in computational 3-D imaging" (2016)
- "Resolving multipath interference in Kinect: An inverse problem approach" (2016)
- "Recent advances in transient imaging: A computer graphics and vision perspective" (2017)
- "SRA: Fast removal of general multipath for ToF sensors" (2014)
- "Phasor imaging: A generalization of correlation-based time-of-flight imaging" (2015)
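As a worked example of one item on the list above, here is a minimal matrix-pencil sketch that recovers K = 2 path depths from 2K+1 noiseless, uniformly spaced frequency measurements. All constants are illustrative, and real data needs noise handling and model-order selection:

```python
import numpy as np

C = 3e8                            # speed of light (m/s)
DF = 10e6                          # spacing between modulation frequencies (Hz)
K = 2                              # number of interfering paths
N = 2 * K + 1                      # 2K+1 frequency measurements

depths = np.array([1.5, 2.4])      # ground-truth path depths (m), illustrative
amps = np.array([1.0, 0.4])        # path amplitudes
taus = 2 * depths / C              # round-trip delays (s)

# Noiseless measurements at frequencies n*DF, n = 0..2K: y_n = sum_k a_k * z_k**n
z_true = np.exp(-2j * np.pi * DF * taus)
n = np.arange(N)
y = (amps * z_true ** n[:, None]).sum(axis=1)

# Matrix pencil: the eigenvalues of pinv(Y0) @ Y1 are the z_k
Y = np.array([y[i:i + K + 1] for i in range(K + 1)])   # (K+1) x (K+1) Hankel
Y0, Y1 = Y[:, :-1], Y[:, 1:]
z_est = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)

# Back to delays and depths (valid while the phase stays within one 2*pi wrap)
tau_est = np.mod(-np.angle(z_est), 2 * np.pi) / (2 * np.pi * DF)
d_est = np.sort(C * tau_est / 2)
```

Prony's method solves the same model via a linear-prediction polynomial instead of a generalized eigenproblem; both break down as noise grows, which is what motivates the regularized and learning-based variants.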

Learning-based MPI reduction

fix:
- use an encoder to learn a mapping from captured ToF measurements to a feature representation of the MPI-corrupted depth.
- combine it with simulated, direct ToF measurements to train a decoder, so it can produce MPI-corrected depth maps.

- use a KUKA robot and structured light to capture ToF measurements with registered ground-truth depth.
- then, train two neural networks to correct depth and refine edges using geodesic filtering.

- use transient rendering to synthesize a training dataset with realistic shot noise.
- then, generate measurements from ToF sensors with random modulation patterns.
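The simulation-based training idea above boils down to generating (corrupted measurement, ground-truth depth) pairs. Here is a toy sketch using a two-path MPI model on a single-frequency phase measurement; the depth ranges, amplitudes, Gaussian stand-in for shot noise, and modulation frequency are all illustrative, not the pipelines from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)
C, F = 3e8, 20e6                            # speed of light (m/s), mod. frequency (Hz)

def simulate_pair():
    """One (MPI-corrupted depth, true depth) training pair, two-path model."""
    d_true = rng.uniform(0.5, 3.0)          # direct-path depth (m)
    d_mpi = d_true + rng.uniform(0.2, 1.0)  # longer indirect bounce
    a_mpi = rng.uniform(0.0, 0.5)           # indirect-path amplitude
    # Complex single-frequency measurement: direct return + indirect return
    m = np.exp(-4j * np.pi * F * d_true / C) + a_mpi * np.exp(-4j * np.pi * F * d_mpi / C)
    # crude Gaussian stand-in for shot noise
    m += rng.normal(scale=0.01) + 1j * rng.normal(scale=0.01)
    d_measured = np.mod(-np.angle(m), 2 * np.pi) * C / (4 * np.pi * F)
    return d_measured, d_true

# network inputs (corrupted depths) and regression targets (true depths)
X, Y = zip(*(simulate_pair() for _ in range(1000)))
```

The indirect bounce biases the measured depth toward larger values, which is exactly the systematic error the trained network learns to undo.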

reference:
- "DeepToF: Off-the-shelf real-time correction of multipath interference in time-of-flight imaging" (2017)
- "Automatic learning to remove multipath distortions in time-of-flight range images for a robotic arm setup" (2016)
- "Recent advances in transient imaging: A computer graphics and vision perspective" (2017)
- "A framework for transient rendering" (2014)


Additional notes will be added in the near future, as we make more progress.




