Under The Guidance Of: S K Biswal
Certificate
This is to certify that the seminar report entitled SPEECH RECOGNITION, submitted by Mr. PRAVIN KUMAR SAHOO, Regd. No. 0901288222, is an authentic work carried out by him at NM INSTITUTE OF ENGINEERING & TECHNOLOGY under my guidance. The matter embodied in this report has not been submitted earlier for the award of any degree, to the best of my knowledge and belief.
Mr. S K BISWAL
Seminar Guide
Prof. L K SADHANGI
H.O.D., E.C.E.
ACKNOWLEDGEMENT
We express our deep sense of gratitude to our institution, NM INSTITUTE OF ENGINEERING & TECHNOLOGY, for providing an opportunity to fulfil our most cherished desire of reaching our goal.
We have great pleasure in expressing our sincere thanks to our seminar guide, Mr. S. K. BISWAL, without whose invaluable help throughout the semester this seminar would not have been successful. He has always encouraged us to work independently and allowed our ideas to flourish unrestricted. Working under his guidance has been a great pleasure.
We thank all friends and the non-teaching staff for their valuable time and help in the completion of our seminar.
Sincerely,
PRAVIN KUMAR SAHOO
Regd. No.: 0901288222
INDEX

Sr. No.  Section
1.       Introduction
2.       Speech Recognition
3.       Variations in Speech
4.       Which Is Better?
5.       Techniques in Vogue
6.       Evaluation: Remaining Problems
7.       Benefits
8.       Drawbacks
9.       Conclusion
SPEECH RECOGNITION
INTRODUCTION
The computer revolution is now well advanced. Although we see a startling proliferation of computers in many forms of the work people do, the domain of computers is still significantly small because of the specialized training needed to use them and the lack of intelligence in computer systems. In the history of computer science, five generations have passed by, each adding a new innovative technology that brought computers nearer and nearer to the people. We are now in the sixth generation, whose prime objective is to make computers more intelligent, i.e., to make computer systems that can think as humans do. The fifth generation aimed at using conventional symbolic Artificial Intelligence techniques to achieve machine intelligence; that effort failed. Statistical modeling and neural nets really belong to the sixth generation. The goal of work in Artificial Intelligence is to build machines that perform tasks normally requiring human intelligence. Yet speech recognition, seeing, and walking don't require intelligence so much as human perceptual ability and motor control. Speech technology is now one of the most significant scientific research fields under the broad domain of AI; indeed it is a major co-domain of computer science, apart from the traditional linguistics and other disciplines that study the spoken language.
SPEECH RECOGNITION
The days when you had to keep staring at the computer screen and frantically hit keys or click the mouse for the computer to respond to your commands may soon be a thing of the past. Today you can stretch out, relax, and tell your computer to do your bidding. This has been made possible by ASR (Automatic Speech Recognition) technology.
The roots of this technology can be traced to 1968, when the term Information Technology hadn't even been coined and Americans had only begun to realize the vast potential of computers. The Hollywood blockbuster 2001: A Space Odyssey featured a talking, listening computer, HAL 9000, which to date is a cult figure in both science fiction and the world of computing. Even today, almost every speech recognition technologist dreams of designing an HAL-like computer with a clear voice and the ability to understand normal speech. Though ASR technology is still not as versatile as the imaginary HAL, it can nevertheless be used to make life easier. New application-specific standard products, interactive error-recovery techniques, and better voice-activated user interfaces allow the handicapped, the computer-illiterate, and rotary-dial phone owners to talk to computers. ASR, by offering a natural human interface to computers, finds applications in telephone call centers (such as airline flight information systems), learning devices, toys, etc.
[Figure: "The Road to HAL". A person speaks; the microphone turns the speech into an electrical signal; background noise is removed and the sound amplified; the words are broken up into phonemes; language analysis then matches candidate words such as "Road", "Load", "Moo", "Mall", "Gall", "The", and "To".]
A listener's ears and brain receive and process the analogue speech waveforms to figure out the speech. ASR-enabled computers, too, work on the same principle, picking up acoustic cues for speech analysis and synthesis. Because it helps in understanding the ASR technology better, let us dwell a little more on the acoustic process of the human articulatory system. In the vocal tract, the process begins at the lungs. Variations in air pressure cause vibrations in the folds of skin that constitute the vocal cords. The elongated orifice between the vocal cords is called the glottis. As a result of the vibrations, repeated bursts of compressed air are released into the air as sound waves.
Articulators in the vocal tract are manipulated by the speaker to produce various effects. The vocal cords can be stiffened or relaxed to modify the rate of vibration, or they can be turned off and the vibration eliminated while still allowing air to pass. The velum acts as a gate between the oral and the nasal cavities: it can be closed to isolate the two cavities or opened to couple them. The tongue, jaw, teeth, and lips can be moved to change the shape of the oral cavity.
The nature of the sound pressure wave radiating outward from the lips depends upon these time-varying articulations and upon the absorptive qualities of the vocal tract's materials. The sound pressure wave exists as a continually moving disturbance of air. Particles move closer together as the pressure increases, or move further apart as it decreases, each influencing its neighbour in turn as the wave propagates at the speed of sound. The amplitude of the wave at any position distant from the speaker is measured by the density of air molecules and grows weaker as the distance increases. When this wave falls upon the ear, it is interpreted as sound with discernible timbre, pitch, and loudness.
Air under pressure from the lungs moves through the vocal tract and comes into contact with various obstructions, including the palate, tongue, teeth, and lips. Some of its energy is absorbed by these obstructions; most is reflected. Reflections occur in all directions, so that parts of the waves bounce around inside the cavities for some time, blending with other waves, dissipating energy, and finally finding their way out through the nostrils or past the lips.
Some waves resonate inside the tract according to their frequency and the cavity's shape at that moment, combining with other reflections and reinforcing the wave energy before exiting. Energy in waves of other, non-resonant frequencies is attenuated rather than amplified in its passage through the tract.
Each phoneme is matched against the sounds and converted into the appropriate character group. This is where the problems begin. To overcome the difficulties encountered in this phase, the program uses numerous methods: first, it checks and compares words that are similar in sound to what it has heard; then it follows a system of language analysis to check whether the language allows a particular syllable to appear after another.
Then comes the grammar and language check. The program tries to find out whether or not the combination of words makes any sense. This is very similar to the grammar-check package that you find in word processors.
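As a rough illustration of such a check, the toy sketch below scores competing homophone candidates with a bigram table. The vocabulary and probabilities are invented for the example and are not taken from any real recognizer.

```python
# Toy bigram language-model check for choosing between similar-sounding
# candidates ("their" vs. "there"). Probabilities are invented for
# illustration only.
BIGRAM_PROB = {
    ("over", "there"): 0.20,
    ("over", "their"): 0.01,
    ("in", "their"): 0.15,
    ("in", "there"): 0.02,
}

def best_candidate(previous_word, candidates):
    """Pick the candidate whose bigram with the previous word is most likely."""
    return max(candidates,
               key=lambda w: BIGRAM_PROB.get((previous_word, w), 0.0))

print(best_candidate("over", ["their", "there"]))  # -> there
```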
The numerous words constituting the speech are finally noted down in the word processor. While all speech recognition programs come with their own word processors, some can work with other word processing packages like MS Word and WordPerfect. In fact, OS/2 even allows operating commands to be spoken.
1. Converting sounds into electrical signals: When we speak into a microphone, it converts the sound waves into electrical signals. In any machine that records or transmits the human voice, the sound wave is converted into an electrical signal using a microphone. When we speak into a telephone receiver, for instance, its microphone converts the acoustic wave into an electrical analogue signal that is transmitted through the telephone network. The electrical signal from the microphone varies in amplitude over time and is referred to as an analogue signal or an analogue waveform.
2. Background noise removal: The ASR program removes all noise and retains the words that you have spoken.
3. Breaking up words into phonemes: The words are broken down into individual sounds, known as phonemes, which are the smallest discernible sound units. For each small slice of time, a feature value is computed from the wave; in this way the wave is divided into small parts, called phonemes.
4. Matching and choosing character combinations: This is the most complex phase. The program has a big dictionary of the popular words that exist in the language. Each phoneme is matched against the sounds and converted into the appropriate character group. This is where the problems begin: the program checks and compares words that are similar in sound to what it has heard, and all these similar words are collected.
5. Language analysis: Here the program checks whether the language allows a particular syllable to appear after another. After that there is a grammar check, which tries to find out whether or not the combination of words makes any sense, just as a grammar-check package would.
6. Output: Finally, the numerous words constituting the speech are noted down in the word processor. While all speech recognition programs come with their own word processors, some can work with other word processing packages like MS Word and WordPerfect. (A code sketch of stages 1 to 3 follows this list.)
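To make the first three stages concrete, here is a minimal, purely illustrative sketch in Python. The sampling rate, frame size, noise floor, and the single energy feature per frame are all invented for the example; a real front end would use far richer spectral features.

```python
import numpy as np

SAMPLE_RATE = 8000   # assumed sampling rate, samples per second
FRAME_MS = 25        # assumed analysis window of 25 ms

def remove_background(signal, noise_floor=0.01):
    """Crude noise gate: zero out samples below an assumed noise floor."""
    return np.where(np.abs(signal) < noise_floor, 0.0, signal)

def frame_signal(signal, sample_rate=SAMPLE_RATE, frame_ms=FRAME_MS):
    """Split the digitized waveform into short frames for analysis."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    return signal[: n_frames * frame_len].reshape(n_frames, frame_len)

def frame_energy(frames):
    """A single toy feature per frame: average energy."""
    return (frames ** 2).mean(axis=1)

# Simulate a microphone signal: a tone buried in weak noise.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.005 * np.random.randn(len(t))

cleaned = remove_background(signal)
features = frame_energy(frame_signal(cleaned))
print(f"{len(features)} frames, first energies: {features[:3]}")
```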
VARIATIONS IN SPEECH
The speech recognition process is complicated because the production of phonemes and the transitions between them vary from person to person, and even within the same person. Different people speak differently: accents, regional dialects, sex, age, speech impediments, emotional state, and other factors cause people to pronounce the same word in different ways. Phonemes are added, omitted, or substituted; for example, the word America is pronounced differently in parts of New England. The rate of speech also varies from person to person, depending upon a person's habits and regional background.
A word or a phrase spoken by the same individual differs from moment to moment: illness, tiredness, stress, or other conditions cause subtle variations in the way a word is spoken at different times. Also, the voice quality varies in accordance with the position of the person relative to the microphone, the acoustic nature of the surroundings, and the quality of the recording devices. The resulting changes in the waveform can drastically affect the performance of the recognizer.
Restricting the active vocabulary with each prompt usually provides faster, more accurate results. Similar-sounding words in the vocabulary set cause recognition errors, while a unique sound for each word enhances the recognition engine's accuracy.
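One way to see why similar-sounding words hurt accuracy is to compare their phoneme strings directly. The sketch below flags near-identical pronunciations with a simple similarity ratio; the phoneme transcriptions are invented stand-ins for what a real pronunciation lexicon would provide.

```python
from difflib import SequenceMatcher

# Invented phoneme transcriptions; a real system would take these
# from a pronunciation lexicon.
LEXICON = {
    "their": "DH EH R",
    "there": "DH EH R",
    "this":  "DH IH S",
    "ship":  "SH IH P",
}

def confusable_pairs(lexicon, threshold=0.8):
    """Flag vocabulary pairs whose phoneme strings are nearly identical."""
    words = sorted(lexicon)
    for i, a in enumerate(words):
        for b in words[i + 1:]:
            ratio = SequenceMatcher(None, lexicon[a].split(),
                                    lexicon[b].split()).ratio()
            if ratio >= threshold:
                yield a, b, ratio

for a, b, r in confusable_pairs(LEXICON):
    print(f"{a!r} vs {b!r}: similarity {r:.2f}")  # flags their/there
```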
The result is language independence: you can say ja, si, or ya during training, as long as you are consistent. The drawback is that the speaker-dependent system must do more than simply match incoming speech to the templates; it must also include resources to create those templates.
WHICH IS BETTER?
For a given amount of processing power, a speaker-dependent system tends to provide more accurate recognition than a speaker-independent system. This does not mean the speaker-independent system is inherently worse: the difference in performance stems from the speaker-independent template having to encompass wide variations in speech.
TECHNIQUES IN VOGUE:
The most frequently used speech recognition technique involves template matching, in which vocabulary words are characterized in memory as templates: time-based sequences of spectral information taken from waveforms obtained during training.
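Because a spoken word rarely lines up exactly in time with its stored template, template matchers typically allow the time axis to stretch. The sketch below uses dynamic time warping (DTW) over one-dimensional stand-in feature sequences; the templates and the utterance are invented for illustration.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two feature sequences,
    so templates and inputs of different lengths can still be compared."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch input
                                 cost[i, j - 1],      # stretch template
                                 cost[i - 1, j - 1])  # step both
    return cost[n, m]

# Invented one-dimensional "spectral" templates recorded during training.
TEMPLATES = {
    "yes": np.array([0.1, 0.9, 0.8, 0.2]),
    "no":  np.array([0.7, 0.3, 0.1, 0.1]),
}

utterance = np.array([0.1, 0.8, 0.85, 0.75, 0.2])  # a stretched "yes"
best = min(TEMPLATES, key=lambda w: dtw_distance(utterance, TEMPLATES[w]))
print(best)  # -> yes
```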
As an alternative to template matching, feature-based designs have been used, in which a time sequence of the pertinent phonetic features is extracted from the speech waveform. Different modeling approaches are used, but models involving state diagrams have been found to give encouraging performance. In particular, HMMs (hidden Markov models) are frequently applied. With HMMs, any speech unit can be modeled, and all knowledge sources can be included in a single, integrated model. Various types of HMMs have been implemented, with differing results: some model each word in the vocabulary, while others model sub-word speech units.
These models are called hidden Markov models precisely because the state sequence that produced the observable output is not known; it is hidden.
An HMM is represented by a set of states, vectors defining transitions between certain pairs of those states, probabilities that apply to state-to-state transitions, sets of probabilities characterizing the observed output symbols, and initial conditions.
An example is shown in the three-state diagram, where states are denoted by nodes and transitions by directed arrows (vectors) between nodes. The underlying model is a Markov chain. The circles represent states of the speaker's vocal system: a specific configuration of tongue, lips, etc. that produces a given sound. The arrows represent possible transitions from one state to another. At any given time, the model is said to be in one state. At each clock tick, the model may change from its current state to any state for which a transition vector exists. A transition may occur only from the tail to the head of a vector. A state can have more than one transition leaving it and more than one leading to it.
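A minimal sketch of such a model, with three states, invented transition and output probabilities, and Viterbi decoding to recover the hidden state sequence, might look like this:

```python
import numpy as np

# A tiny three-state HMM with invented probabilities, mirroring the
# three-node state diagram described above.
states = ["S1", "S2", "S3"]
initial = np.array([1.0, 0.0, 0.0])            # always start in S1
transition = np.array([[0.6, 0.4, 0.0],        # arrows between states
                       [0.0, 0.7, 0.3],
                       [0.0, 0.0, 1.0]])
# Probability of each observable symbol (two toy acoustic labels) per state.
emission = np.array([[0.9, 0.1],
                     [0.2, 0.8],
                     [0.5, 0.5]])

def viterbi(observations):
    """Recover the most likely hidden state sequence for the observations."""
    v = initial * emission[:, observations[0]]
    back = []
    for obs in observations[1:]:
        # scores[i, j]: best path ending in state i, then moving to j.
        scores = v[:, None] * transition * emission[None, :, obs]
        back.append(scores.argmax(axis=0))
        v = scores.max(axis=0)
    path = [int(v.argmax())]
    for ptr in reversed(back):       # trace the best path backwards
        path.append(int(ptr[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1]))  # -> ['S1', 'S1', 'S2', 'S2']
```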
The first step was discrete word recognition: the program could take down only words spoken one at a time by you and then put them down. Though slow and tedious, it announced the arrival of speech technologies on the desktop.
In the second step, we got continuous speech recognition. You no longer have to speak out each word separately; instead you can just talk naturally, and the program understands what you say.
The third step is speech understanding. This technology is the one that will actually mark the biggest change in the way we use our computers. It will necessitate a complete overhaul of our operating systems, our word processors, our spreadsheets, just about everything. It will also mark the emergence of a computer like HAL. When speech understanding arrives in its true form, it will allow your computer to make sense of commands like "wake me up at six in the morning, you ghonchu". Till then we must only wait and watch.
EVALUATION
REMAINING PROBLEMS
Out-of-Vocabulary (OOV) words: systems must have some method of detecting OOV words, since forcing such words to match in-vocabulary hypotheses produces errors.
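One simple method, sketched below with invented match scores rather than any specific system's algorithm, is to reject the best-scoring word when its confidence falls below a threshold:

```python
# Toy OOV rejection: if even the best-scoring vocabulary word matches
# poorly, flag the utterance instead of forcing a wrong word.
SCORES = {"yes": 0.42, "no": 0.38, "stop": 0.15}  # invented match scores

def decode(scores, reject_threshold=0.6):
    word, score = max(scores.items(), key=lambda kv: kv[1])
    return word if score >= reject_threshold else "<OOV>"

print(decode(SCORES))  # -> <OOV>: no word is confident enough
```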
BENEFITS
Security:
With this technology, a powerful interface between man and computer is created. Since voice recognition responds only to prerecorded voices, there is no way of tampering with data or breaking the codes so created.
Productivity:
It decreases work: as all operations are done through voice recognition, paperwork is reduced to a minimum, and the user can feel relaxed irrespective of the workload.
DRAWBACKS:
If the system has to work in noisy environments, background noise may corrupt the original data and lead to misinterpretation.
With words that are pronounced similarly (for example, their and there), this technology faces difficulty in distinguishing between them.
CONCLUSION:
Voice recognition promises a rosy future and offers a wide variety of services. The next generation of voice recognition technology consists of neural networks built using artificial intelligence technology. They are formed from interconnected nodes that process the input in parallel for fast evaluation and, like human beings, learn new patterns of speech automatically.
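As a purely illustrative sketch of such a network (random weights standing in for what training on speech would learn), a forward pass through interconnected nodes looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: 10 input features -> 8 hidden nodes -> 3 classes.
# Weights are random stand-ins; a real system would learn them from speech.
W1, b1 = rng.normal(size=(10, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(features):
    """One forward pass: every node combines all its inputs in parallel."""
    hidden = np.tanh(features @ W1 + b1)
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # class probabilities

print(forward(rng.normal(size=10)))
```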