Tuesday, July 12, 2011

speech recognition

The way we process and interpret speech is largely dependent on the neuro-anatomy of the brain. Speech signals must travel from lower to higher regions of the brain before anything resembling the perception of speech can occur. Sound waves enter the ear canal, where they are first broken down into their component frequencies, or ‘tones’. Individual tones are then converted to neural signals that are transmitted, over auditory pathways, to higher centers of the brain responsible for processing and synthesizing the complex signals of speech, such as phonemes, which are essentially complex bursts of multiple frequencies [link]. After sufficient cycles of phonemic synthesis, the phonic representation of a word is formed. Compound signals representing word-sounds are then passed to higher centers of the auditory cortex (Wernicke’s area), which retrieve word meaning from the regions of the cerebral cortex where semantic processing is performed.
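
To make the first stage concrete, here is a minimal sketch in Python of breaking a sound wave into its component frequencies, roughly analogous to what the cochlea does before individual tones are sent up the auditory pathway. It is only an illustration, not a model of the auditory system; the 440 Hz and 880 Hz test tones and the 16 kHz sample rate are invented for the example.

import numpy as np

sample_rate = 16000                        # samples per second (illustrative)
t = np.arange(0, 0.1, 1.0 / sample_rate)  # 100 ms of signal

# A toy "sound wave": two pure tones mixed together
wave = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# Break the wave into its component frequencies (the 'tones')
spectrum = np.fft.rfft(wave)
freqs = np.fft.rfftfreq(len(wave), d=1.0 / sample_rate)
magnitudes = np.abs(spectrum)

# Report the two strongest tones, standing in for the signals
# that would be passed on to higher processing centers
strongest = freqs[np.argsort(magnitudes)[-2:]]
print("Dominant tones (Hz):", sorted(strongest))

Running this prints the 440 Hz and 880 Hz components, showing how a single pressure wave can be resolved into the separate tones that later stages would recombine into phonemes.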
