VBSS: SECURITY BEYOND PASSWORDS
Biometric security systems are defined as "automated methods of identifying
or authenticating the identity of a living person based on a physical or
behavioral characteristic." Unique physical traits, such as fingerprints,
iris scans, voice prints, faces, signatures, or the geometry of the hand,
can be used.
Voice can combine what people say with the way that they say it, which
means that it provides two-factor authentication in a single action.
Fingerprints, iris scans, retina scans, and face recognition can all
produce biometric identification (what you are), but something else is
needed to provide a second and more secure authentication factor
(something you know, for instance). Not only can voice combine two
factors, but it can also do it more efficiently.
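The two-factor claim can be sketched in code: a single spoken utterance is
checked for something you know (the passphrase text) and something you are
(the voice print). This is a minimal illustrative sketch, not a real
system; the function names, the flat feature-vector representation, and the
0.85 threshold are all hypothetical, and a real voice print would come from
a trained speaker model.

```python
import math

def cosine_similarity(a, b):
    # Simple vector similarity used as a stand-in for speaker scoring.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def authenticate(transcript, features, expected_phrase, enrolled_print,
                 threshold=0.85):
    """Two factors in one action (illustrative sketch):
    factor 1 (something you know): the spoken passphrase matches;
    factor 2 (something you are): the voice print matches."""
    knows = transcript.strip().lower() == expected_phrase.lower()
    is_user = cosine_similarity(features, enrolled_print) >= threshold
    return knows and is_user
```

A caller would pass the recognizer's transcript and feature vector for the
utterance, so both checks happen in the single act of speaking.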
Voice Based Security Systems use a person's voice print to uniquely
identify individuals using biometric speaker verification technology.
Speech is processed through a non-contact method; you do not need to see
or touch a person to be able to recognize them. "Biometric
technologies - those that use voice - will be the most important IT
innovations of the next several years." - Bill Gates, at a Gartner Group event
Today the password is the dominant security factor, and it is prone to
attack. A password alone is no longer sufficient to secure sensitive data.
A voice-based security system analyzes the pitch and the spoken words,
compares them with the data stored in the system, and grants the recognized
user access to his own data. Personal information such as banking passwords,
account details, intelligence data, criminal records, and case files of
critical defense information can be protected on the basis of voice data.
In this project we present two voice-to-phoneme conversion algorithms that
extract voice-tag abstractions for speaker-independent voice-tag
applications on embedded platforms, which are very sensitive to memory and
CPU consumption. In the first approach, a voice-to-phoneme conversion in
batch mode manages this task by preserving the commonality of the input
feature vectors across multiple voice-tag example utterances. Given
multiple example utterances, a feature-combination strategy produces an
"average utterance", which is converted to phonetic strings as a voice-tag
representation via a speaker-independent phonetic decoder. In the second
approach, a sequential voice-to-phoneme conversion algorithm uncovers the
hierarchy of phonetic consensus embedded among multiple phonetic hypotheses
generated by a phonetic decoder from multiple example utterances of a
voice-tag. The most relevant phonetic hypotheses are then chosen to
represent the voice-tag.
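The two approaches above can be sketched as simplified stand-ins: frame-wise
averaging of feature sequences for the batch approach (a real system would
time-align the utterances first, e.g. with dynamic time warping, rather
than truncate), and edit-distance consensus ranking for the sequential
approach (the hierarchy of phonetic consensus described above is richer
than this flat ranking). All function names here are illustrative.

```python
def average_utterance(utterances):
    """Batch approach, simplified: frame-wise average of several
    feature-vector sequences, truncated to the shortest sequence."""
    n_frames = min(len(u) for u in utterances)
    n_dims = len(utterances[0][0])
    return [[sum(u[t][d] for u in utterances) / len(utterances)
             for d in range(n_dims)]
            for t in range(n_frames)]

def levenshtein(a, b):
    """Edit distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        cur = [i]
        for j, pb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (pa != pb)))
        prev = cur
    return prev[-1]

def select_consensus(hypotheses, k=1):
    """Sequential approach, simplified: keep the k hypotheses with the
    smallest total edit distance to all the others, i.e. the ones that
    agree most with the rest."""
    scored = sorted(hypotheses,
                    key=lambda h: sum(levenshtein(h, o) for o in hypotheses))
    return scored[:k]
```

Given decoder hypotheses such as ["k", "ae", "t"] from several example
utterances, `select_consensus` keeps the hypothesis most other utterances
agree with as the stored voice-tag representation.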