Calibrating sensors on an L2 autonomous vehicle

In this blog post, I will discuss how to calibrate the suite of sensors used in an L2 autonomous prototype vehicle.

Note:
- To ensure the quality of the dataset generated by an L2 autonomous prototype, calibrate all on-board sensors for each trip.

In our autonomous vehicle prototypes, we use the following sensors:

6 - Cameras
- cropped from the native 1600x900 resolution to smaller images
- output in native BGR format
- auto-exposure with a maximum exposure time of 20 ms
- Bayer8 encoding at 1 byte per pixel (see the decoding sketch after this list)
- 1/1.8" CMOS sensor with a 12 Hz capture frequency
- positions:
  - one front center camera
  - one front side mirror camera per side
  - one rear center camera
  - one rear door centered camera per side
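Since the raw frames come off the sensor Bayer8-encoded, a quick way to inspect one is to demosaic it into BGR. The sketch below uses OpenCV; the Bayer pattern (RGGB), frame size, and file names are assumptions for illustration.

```python
import cv2
import numpy as np

# Read one raw Bayer8 frame (1 byte per pixel) at the assumed native resolution.
raw = np.fromfile("frame.raw", dtype=np.uint8).reshape(900, 1600)   # hypothetical file
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)                      # assumes an RGGB pattern
cv2.imwrite("frame.png", bgr)
```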

5 - Long-Range RADARs
- 13 Hz capture frequency at 77 GHz
- measures distance and velocity independently in one cycle
- positions:
  - one front bumper center radar
  - one front side mirror radar per side
  - one rear door center radar per side


1 - LIDAR
- 20 Hz capture frequency with 32 channels
- horizontal FOV: 360 degrees
- vertical FOV: +10 degrees to -30 degrees
- range: 80 m to 100 m, but usable to about 70 m
- accuracy: +/- 2 cm
- up to 1.39 million points per second


Camera Calibration - Extrinsics
- use a cube-shaped target with known ChArUco patterns on three orthogonal planes.
- compute the camera-to-LIDAR transformation matrix by aligning the planes of the target.
- compute the camera-to-ego transformation matrix by chaining it with the LIDAR-to-ego transformation (see the sketch after the note below).

note:
- the ego frame is located at the midpoint of the rear axle.
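To make the chaining step concrete, here is a minimal sketch that composes the two transforms as 4x4 homogeneous matrices. The numeric values are placeholders, not our actual calibration results.

```python
import numpy as np

# Hypothetical placeholder transforms: in practice T_cam_to_lidar comes from
# aligning the ChArUco cube planes, and T_lidar_to_ego from the LIDAR calibration.
T_cam_to_lidar = np.eye(4)
T_lidar_to_ego = np.eye(4)
T_lidar_to_ego[:3, 3] = [1.5, 0.0, 1.8]   # example translation in meters

# Chain them: a point in the camera frame maps to the ego frame via
#   p_ego = T_lidar_to_ego @ T_cam_to_lidar @ p_cam
T_cam_to_ego = T_lidar_to_ego @ T_cam_to_lidar

p_cam = np.array([0.0, 0.0, 10.0, 1.0])   # homogeneous point 10 m along the camera axis
p_ego = T_cam_to_ego @ p_cam
print(p_ego[:3])
```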

Camera Calibration - Intrinsics
- compute the camera intrinsics and distortion parameters using a calibration board with a known pattern.
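For reference, below is a minimal intrinsic-calibration sketch using OpenCV with a checkerboard target. The board dimensions, square size, and image folder are assumptions for illustration; the actual board and pipeline may differ.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)      # inner corners per row/column (assumed board layout)
square = 0.025        # square size in meters (assumed)

# 3D corner coordinates of the board in its own frame (z = 0 plane).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):          # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix; dist holds the distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
```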

RADAR Calibration
- calibrate the yaw angle with a brute-force search that minimizes the compensated range rates of static objects.
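The idea behind the brute-force search can be sketched as follows: for a static detection, the measured range rate should equal -v_ego * cos(azimuth + yaw), so the compensated range rate goes to zero when the candidate yaw matches the true mounting yaw. The detections below are synthetic, and the cost function is only an illustration of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
true_yaw = np.radians(2.0)                    # mounting yaw we want to recover (synthetic)
v_ego = 10.0                                  # ego speed in m/s
azimuth = rng.uniform(-0.5, 0.5, 200)         # detection azimuths in the radar frame (rad)
range_rate = -v_ego * np.cos(azimuth + true_yaw) + rng.normal(0.0, 0.05, 200)

# Sweep candidate yaw angles and keep the one that minimizes the
# sum of squared compensated range rates of the static detections.
best_yaw, best_cost = None, np.inf
for yaw in np.radians(np.arange(-5.0, 5.0, 0.01)):
    compensated = range_rate + v_ego * np.cos(azimuth + yaw)
    cost = np.sum(compensated ** 2)
    if cost < best_cost:
        best_yaw, best_cost = yaw, cost

print("estimated yaw (deg):", np.degrees(best_yaw))
```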

LIDAR Calibration
- use a laser liner to measure the location of the LIDAR relative to the ego frame.
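As a small illustration, the measured offsets can be packed into the LIDAR-to-ego homogeneous transform used in the extrinsics sketch above. The offsets and yaw below are hypothetical placeholders, not our measured values.

```python
import numpy as np

x, y, z = 1.5, 0.0, 1.8        # LIDAR position relative to the rear-axle midpoint (placeholder, meters)
yaw = np.radians(0.5)          # small mounting yaw about the vertical axis (placeholder)

T_lidar_to_ego = np.eye(4)
T_lidar_to_ego[:3, :3] = np.array([
    [np.cos(yaw), -np.sin(yaw), 0.0],
    [np.sin(yaw),  np.cos(yaw), 0.0],
    [0.0,          0.0,         1.0],
])
T_lidar_to_ego[:3, 3] = [x, y, z]
print(T_lidar_to_ego)
```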


