Tuesday, June 26, 2012

Refreshing the mind

EEG recordings from the visual cortex show that conscious experience is ‘periodically refreshed’ rather than ‘continuously updated’. Sensory memory persists long enough to bridge the gap between refreshes [ link ].
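To make the idea concrete, here is a toy sketch (not the study’s model) of how a percept that is refreshed only at discrete moments can still track a changing stimulus, provided a sensory buffer holds the last sample in between. The ~100 ms refresh interval is an assumption for illustration only.

```python
# Toy illustration (not the study's model): a percept refreshed at
# discrete intervals, with a sensory buffer bridging the gaps.
import math

REFRESH_MS, STEP_MS, DURATION_MS = 100, 25, 500   # assumed timings

def stimulus(t_ms):
    """A continuously varying input: a slow sine wave."""
    return math.sin(2 * math.pi * t_ms / 400.0)

buffer_value = stimulus(0)              # sensory memory holds the last sample
for t in range(0, DURATION_MS + 1, STEP_MS):
    if t % REFRESH_MS == 0:             # periodic refresh of the percept
        buffer_value = stimulus(t)
    print(f"t={t:3d} ms  stimulus={stimulus(t):+.2f}  percept={buffer_value:+.2f}")
```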

Sunday, June 24, 2012

The feeling of discourse

An MRI study reveals that emotion, not fact-sharing, promotes social interaction and facilitates interpersonal understanding. What researchers discovered is that emotions ‘synchronize mental networks’ between individuals. Synchronized network activity focuses attention on shared experience and produces a common framework for understanding. Sharing another person’s emotional state during discourse enables us to perceive, experience and interpret what they say in a like manner ..without separation [ link ].

Saturday, June 16, 2012

Speech recognition

Theory has it that language development is an ‘innate biological process’. First we learn to segment a stream of sound into syllables and words, and then we begin extracting the rules of syntax needed to generate sentences. What’s amazing is that exposure to speech is all that’s required. No formal training is needed ..interaction in a verbal community is sufficient. Based on this theory, a humanoid named DeeChee was created to mimic the way infants learn to recognize syllables and words. It was also tuned to boost the prominence of words signalling encouragement. Starting from scratch, DeeChee was able to learn simple words in minutes by just having a conversation with someone [ link ]. One small conversation for a robot; a canticle of possibilities for mankind..
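The linked article doesn’t spell out DeeChee’s learning algorithm, but the kind of statistical cue infants are thought to exploit when segmenting speech can be sketched in a few lines: syllable pairs that reliably co-occur belong to the same word, while unreliable transitions mark word boundaries. The syllable stream and threshold below are invented purely for illustration.

```python
# A minimal sketch of statistical word segmentation (not DeeChee's
# actual algorithm): strong syllable-to-syllable transitions stay inside
# a word, weak transitions become word boundaries.
from collections import Counter

# A pretend speech stream, "baby pretty baby doggy pretty doggy baby",
# heard only as syllables with no pauses between words.
stream = "ba by pret ty ba by do ggy pret ty do ggy ba by".split()

pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def transition_prob(a, b):
    """P(next syllable is b | current syllable is a)."""
    return pair_counts[(a, b)] / syll_counts[a]

THRESHOLD = 0.75                       # arbitrary cut-off for this toy corpus
words, current = [], [stream[0]]
for a, b in zip(stream, stream[1:]):
    if transition_prob(a, b) < THRESHOLD:   # weak link -> word boundary
        words.append("".join(current))
        current = []
    current.append(b)
words.append("".join(current))

print(words)   # ['baby', 'pretty', 'baby', 'doggy', 'pretty', 'doggy', 'baby']
```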

Tuesday, June 05, 2012

Sound mind

It’s well known that the brain receives information from the ears in an orderly fashion. Signals for different frequencies arrive in tonotopic order along the auditory cortex. This means high tones are processed at one end of the auditory cortex while low tones are processed toward the other end. The location where these signals arrive determines our perception of pitch [ link ]. What’s interesting is that this type of organization applies to other properties of sound as well. Synapses that release neurotransmitters quickly provide information about the onset of a sound, like the beat, while synapses that release neurotransmitters more slowly provide information about qualities, like timbre, that persist over the duration of a sound. What’s new? Researchers have found that the pathways carrying these different synapse types are not grouped randomly. Instead, like orchestra musicians sitting in their own sections, they are bundled together by the property of sound that they convey. Tonotopic organization is preserved here as well. This means that beat and timbre have their own locations in the brain, contributing to the perception of sound and music [ link ].
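For a concrete (and admittedly cartoonish) picture of place coding, here is a toy mapping from frequency to position along a tonotopic axis ..where a signal lands encodes the pitch we hear. The frequency range and map length are assumptions, not figures from the study.

```python
# A toy sketch of tonotopic ("place") coding: frequency maps onto
# position along an axis, so equal musical intervals (octaves) land at
# equal spacings. Range and axis length are assumed values.
import math

LOW_HZ, HIGH_HZ = 20.0, 20_000.0   # rough range of human hearing
MAP_LENGTH_MM = 30.0               # assumed length of the tonotopic axis

def tonotopic_position(freq_hz):
    """Map a frequency to a position (mm) along the axis, low to high."""
    fraction = math.log(freq_hz / LOW_HZ) / math.log(HIGH_HZ / LOW_HZ)
    return fraction * MAP_LENGTH_MM

for note, hz in [("A2", 110), ("A4", 440), ("A6", 1760), ("A8", 7040)]:
    print(f"{note} ({hz:>5} Hz) -> {tonotopic_position(hz):5.1f} mm along the map")
```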

Sunday, June 03, 2012

A cure for Siri

Consider the phrase, “Man on first.” It doesn’t make much sense unless you know baseball. Or imagine a sign outside a store that reads, “Baby sale - One week only!” You easily infer that the store isn’t selling babies. Computers can’t do that. They haven’t mastered the pragmatic component of language yet ..information that is available only from the prevailing social context. However, Stanford psychologists have created a mathematical model that helps predict pragmatic reasoning [ link ]. This could allow computers to recognize when to apply commonly held social rules. Who knows, they may have just discovered a cure for the speech impediment suffered by Siri, the natural language interface on the Apple iPhone [ link ].
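The article describes the Stanford model only at a high level, but it is in the spirit of ‘rational speech act’ reasoning: a listener interprets an utterance by imagining a speaker who picks the most informative words. Here is a minimal sketch with made-up objects, utterances and uniform priors.

```python
# A minimal sketch of rational-speech-act-style pragmatic reasoning.
# The objects and utterances below are assumptions for illustration.

OBJECTS = ["blue square", "blue circle", "green square"]
UTTERANCES = {                      # literal meaning: which objects fit
    "blue":   {"blue square", "blue circle"},
    "green":  {"green square"},
    "square": {"blue square", "green square"},
    "circle": {"blue circle"},
}

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()} if total else d

def literal_listener(utt):
    """P(object | utterance) for a listener who only checks literal truth."""
    return normalize({o: 1.0 if o in UTTERANCES[utt] else 0.0 for o in OBJECTS})

def speaker(obj):
    """P(utterance | object) for a speaker who prefers informative words."""
    return normalize({u: literal_listener(u).get(obj, 0.0) for u in UTTERANCES})

def pragmatic_listener(utt):
    """P(object | utterance) for a listener reasoning about that speaker."""
    return normalize({o: speaker(o)[utt] for o in OBJECTS})

# Hearing just "blue", the model leans toward the blue square, because a
# speaker who meant the blue circle would probably have said "circle".
print(pragmatic_listener("blue"))
```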