
How to improve traditional ASR using Connectionist Temporal Classification

Traditional Automatic Speech Recognition (ASR) performs at roughly an 85% accuracy rate. At that rate, users are often frustrated with the experience of using such a system.

Traditional ASR is often fragile:

1) it requires extensive tuning of parameters just to make it work.
2) it requires a deep understanding of both a language model and an acoustic model.
3) it doesn't scale well to multiple languages.
4) it is hyper-sensitive to speaker variation.


Deep learning has been applied to the acoustic model, but with little gain in accuracy.

What if we could apply deep learning end to end?

Connectionist Temporal Classification (CTC, 2006) enables an end-to-end pipeline: apply an FFT to the voice recording (here sampled at 8 kHz) to construct a spectrogram, then assign each spectrogram interval to its own time step of a deep neural network.
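Here is a minimal sketch of that spectrogram step, assuming a mono 16-bit recording sampled at 8 kHz; the filename and the 25 ms window / 10 ms hop settings are illustrative choices, not taken from the original pipeline:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

# Load a mono voice recording ("command.wav" is a placeholder filename).
rate, audio = wavfile.read("command.wav")      # e.g. rate == 8000 (8 kHz)
audio = audio.astype(np.float32) / 32768.0     # normalize 16-bit PCM to [-1, 1]

# Short-time FFT: 25 ms windows with a 10 ms hop are common choices.
nperseg = int(0.025 * rate)                    # 200 samples at 8 kHz
noverlap = nperseg - int(0.010 * rate)         # 10 ms hop between frames
freqs, times, Z = stft(audio, fs=rate, nperseg=nperseg, noverlap=noverlap)

# Log-magnitude spectrogram: one column ("frame") per network time step.
spectrogram = np.log(np.abs(Z) + 1e-10)
print(spectrogram.shape)                       # (n_freq_bins, n_frames)
```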



The basic idea is to have the RNN's output neurons encode a distribution over "symbols" (the characters, plus a special CTC blank symbol) at each time step.
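As a toy illustration of what "a distribution over symbols" means, here is a NumPy sketch (the character vocabulary and frame count are made up): each time step gets one softmax-normalized row over the characters plus the blank.

```python
import numpy as np

# Illustrative vocabulary: CTC adds a special "blank" symbol to the characters.
symbols = ["<blank>"] + list("abcdefghijklmnopqrstuvwxyz '")
V = len(symbols)
T = 50  # number of spectrogram frames in this utterance

# Pretend these are the RNN's raw per-frame output scores (logits).
logits = np.random.randn(T, V)

# Softmax per frame: each row becomes a probability distribution over symbols.
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
assert np.allclose(probs.sum(axis=1), 1.0)
```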

Traditional ASR uses a phoneme-based or grapheme-based model, which again is susceptible to speaker variation. For example, if one speaker stretches "hello" over 10 seconds and another says it in 5, how do we map each phoneme to a particular neuron?

CTC provides the temporal mapping to each DNN/RNN time step: a dense layer with a softmax on top turns every frame into a probability distribution over symbols, and CTC sums the probabilities of all frame-level alignments that collapse to the same transcript, yielding the most probable labeling without any hand-made alignment.
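A PyTorch sketch of that stack follows; the layer sizes, batch shapes, and the GRU choice are assumptions for illustration, not the exact Deep Speech architecture. It shows a bidirectional RNN, a dense layer with a per-frame softmax, the CTC loss for training, and a greedy decode that merges repeats and drops blanks.

```python
import torch
import torch.nn as nn

# A minimal end-to-end acoustic model: spectrogram frames in,
# per-frame symbol distributions out. Sizes here are illustrative.
n_freq, n_hidden, n_symbols = 101, 256, 29   # 28 characters + CTC blank (index 0)

rnn = nn.GRU(n_freq, n_hidden, batch_first=True, bidirectional=True)
dense = nn.Linear(2 * n_hidden, n_symbols)
ctc_loss = nn.CTCLoss(blank=0)

x = torch.randn(4, 120, n_freq)              # batch of 4 utterances, 120 frames each
h, _ = rnn(x)                                # (batch, time, 2 * hidden)
log_probs = dense(h).log_softmax(dim=-1)     # per-frame distribution over symbols

# CTC sums over every frame-level alignment that collapses to the target text,
# so no hand-made phoneme-to-frame alignment is needed.
targets = torch.randint(1, n_symbols, (4, 20))         # dummy character labels
input_lengths = torch.full((4,), 120, dtype=torch.long)
target_lengths = torch.full((4,), 20, dtype=torch.long)
loss = ctc_loss(log_probs.transpose(0, 1), targets, input_lengths, target_lengths)

# Greedy decoding: argmax per frame, merge repeated symbols, drop blanks.
best = log_probs.argmax(dim=-1)[0].tolist()
decoded = [s for i, s in enumerate(best)
           if s != 0 and (i == 0 or s != best[i - 1])]
```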





Using a DNN/RNN trained over many days on a high-end compute machine, we were able to get the model to transcribe English voice recordings. Our accuracy is around 92%, and the model is not sensitive to speaker variation.

Training a Deep Speech-style end-to-end recognizer is tricky. One key idea is SortaGrad, a curriculum-learning schedule (Bengio et al., ICML 2009): present utterances from shortest to longest during the first epoch, then shuffle randomly in later epochs, as in the sketch below.
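A small sketch of that schedule (the function name and dataset layout are hypothetical):

```python
import random

def sortagrad_batches(dataset, epoch, batch_size=32):
    """Yield minibatches; `dataset` is a list of (frames, transcript) pairs.

    SortaGrad-style curriculum: in the first epoch, present utterances from
    shortest to longest (short clips give an easier, cleaner CTC signal);
    in later epochs, fall back to ordinary random shuffling.
    """
    order = list(dataset)
    if epoch == 0:
        order.sort(key=lambda ex: len(ex[0]))  # sort by utterance length
    else:
        random.shuffle(order)
    for i in range(0, len(order), batch_size):
        yield order[i:i + batch_size]
```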





References

• Gales and Young. "The Application of Hidden Markov Models in Speech Recognition." Foundations and Trends in Signal Processing, 2008.
• Jurafsky and Martin. "Speech and Language Processing." Prentice Hall, 2000.
• Bourlard and Morgan. "Connectionist Speech Recognition: A Hybrid Approach." Kluwer Publishing, 1994.
• A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks." ICML, 2006.
• Hannun, Maas, Jurafsky, and Ng. "First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs." arXiv:1408.2873.
• Hannun, et al. "Deep Speech: Scaling up end-to-end speech recognition." arXiv:1412.5567.
• H. Hermansky. "Perceptual linear predictive (PLP) analysis of speech." J. Acoust. Soc. Am., vol. 87, no. 4, pp. 1738-1752, Apr. 1990.
• H. Hermansky and N. Morgan. "RASTA processing of speech." IEEE Trans. on Speech and Audio Proc., vol. 2, no. 4, pp. 578-589, Oct. 1994.
• H. Schwenk. "Continuous space language models." 2007.
• Y. Bengio, J. Louradour, R. Collobert, and J. Weston. "Curriculum Learning." ICML, 2009.






