Showing posts with label Linguistics.

Tuesday, December 15, 2015

Bayesian AI

Bayesian Program Learning (BPL): a probability-based program that is able to deal with variation during recognition tasks – like the ability to recognize a Segway from different angles, or embedded in scenes of differing complexity, after seeing it only once in a magazine. Traditional AI required storing many occurrences of an object or event ( E ), plus many occurrences of 'not E' or 'variations of E', in order to correctly identify E under different conditions.

                  E plus ( €, ∑, £, Æ, È, Ę, Ǝ, Ǯ, ʒ, Ξ, Σ, З, Э, Ѥ, Ѱ, ₣, ₤ ,€, 8)

  •   A 3-year-old can correctly identify an event ( E ) with 95% accuracy after seeing it only once – an example of single-trial learning.
  •   Traditional AI can identify event ( E ) with only 75% accuracy from one stored occurrence: P(H|D), where event ( E ) is the hypothesis (H) and (D) is a stored data point.
  •   Bayesian AI can identify an event ( E ) with 95% accuracy after one occurrence.
Bayes recognition is probabilistic – it compares the probabilities of getting the data from different variations and contexts of ( E ) and returns the hypothesis with the highest posterior, P(H|D) ∝ P(D|H) × P(H). Bayes probability comes closer to human recognition memory during single-trial learning. (LA Times)
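As a rough sketch of the comparison described above – the priors and likelihoods below are invented numbers for illustration, not values from the BPL work:

```python
# Toy sketch: pick the hypothesis with the highest posterior P(H|D),
# where P(H|D) ∝ P(D|H) * P(H). All numbers here are made up.

def posteriors(priors, likelihoods):
    """Return normalized posteriors P(H|D) for each hypothesis H."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

priors = {"E": 0.5, "not-E": 0.5}        # before seeing the single example
likelihoods = {"E": 0.9, "not-E": 0.2}   # P(D|H): how well each H explains the data D

post = posteriors(priors, likelihoods)
best = max(post, key=post.get)
print(best, round(post[best], 3))  # E 0.818
```

The point of the sketch is that one observation is enough to rank hypotheses, because the comparison is between probabilities rather than between stored copies of E.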

Wednesday, February 13, 2013

Bilingual advantage

“If you walk into a room, where a million things can attract your attention, how does your mind pay attention to what you need to pay attention to without getting distracted?”  [link]
Speaking multiple languages may be an advantage in more ways than one: a new study suggests that bilinguals are speedier task-switchers than monolinguals [link]. Task-switching ..the ability to mentally "switch gears" and refocus on new goals.. is a valuable skill with numerous practical uses. You use it to shift attention between the wheel and the road while driving, or to switch gears between offense and defense in a team sport. Bilingualism has already been associated with a number of cognitive advantages, and now a 2010 study from Language and Cognition has investigated how bilingualism might enhance crucial task-switching skills in young adults. It contributes to a growing body of evidence suggesting that bilinguals enjoy enhanced executive control compared to monolinguals. Executive control refers to a combination of cognitive abilities – including task-switching – that help you make decisions, control impulses, and plan thoughtfully. It's long been thought that the constant management and monitoring of two languages improves executive control, a belief that this Carnegie Mellon study supports.

Sunday, September 16, 2012

Network content

“The meaning of a sentence is derived from the original words by an active, interpretive process. The original sentence which is perceived is rapidly forgotten and the memory is then for the information (meaning) contained in the sentence.” ~ Jacqueline Sachs, 1967 
Apparently the effect of messages from our peers is greater than the content of the messages themselves. James Fowler at UCSD conducted a study to observe the influence of messages read on Facebook [ link ]. He found that messages from peers are more persuasive than purely informational messages. Recipients who got vote reminders from friends were about 0.39% more likely to go to the polls than those who received messages from 'the sponsor' – a small edge that translated into roughly 340,000 additional voters. This tells me that the power of a message doesn't reside in the information (meaning) alone. Or perhaps the social value is an even greater source of information.

Saturday, June 16, 2012

Speech recognition

Theory has it that language development is an ‘innate biological process’. First we learn to segment a stream of sound into syllables and words, and then we begin extracting the rules of syntax needed to generate sentences. What’s amazing is that exposure to speech is all that’s required. No formal training is needed ..interaction in a verbal community is sufficient. Based on this theory, a humanoid named DeeChee was created to mimic the way infants learn to recognize syllables and words. It was also tuned to boost the prominence of words signalling encouragement. Starting from scratch, DeeChee was able to learn simple words in minutes by just having a conversation with someone [ link ]. One small conversation for a robot; a canticle of possibilities for mankind..

Sunday, June 03, 2012

A cure for Siri

Consider the phrase, “Man on first.” It doesn’t make much sense unless you know baseball. Or imagine a sign outside a store that reads, “Baby sale - One week only!” You easily infer that the store isn’t selling babies. Computers can’t do that. They haven’t mastered the pragmatic component of language yet .. information that is only available by knowing what social context prevails. However, Stanford psychologists have created a mathematical model that helps predict pragmatic reasoning [ link ]. This could allow computers to recognize when to apply commonly held social rules. Who knows, they may have just discovered a cure for the speech impediment suffered by Siri – a natural language interface for the Apple iPhone [ link ].

Friday, December 09, 2011

Inference making

“Your job as a reader is to use your imagination and analytical skills where the author has left off.”
Intentional fallacy: it's not what an author means to say that's important ..it's how the reader interprets what they say. What the author intended and what the reader takes away won't necessarily be the same ..and if we're the readers, our interpretation is what matters. Communication is mostly an interpretive process. We add our perspective and ingenuity to whatever we hear or read. Attempts by the writer to narrow it down are futile ..or sterile [link]. In Harry Potter, some may see Dumbledore as gay; others might view him as quirky and without a particular sexual identity. I'm reminded of the ghost in "Hamlet" and how little we really know about him. Is he the spirit of Hamlet's murdered father asking to be avenged ..? Is he a hellish apparition sent to make Hamlet commit murder ..? Or is he just a figment of Hamlet's imagination ..? And who really gives a shit now what Shakespeare meant ..?

Friday, November 18, 2011

Parsing Gabrielle

Notes made while watching an interview with Diane Sawyer.
Her speech centers are still intact ..but some of the pathways that connect speech with concepts may have been severed. They show her a picture of a table and she comes up with words all right ..just not the right ones. She's guessing, and her therapy involves prompting her to narrow down the range of possibilities until she's in the vicinity of 'table-ness'. It is geared toward building alternate pathways to replace the ones she lost. The connection between her lexicon (the place where words are stored) and semantic memory (memory for meaning) may be all that's affected. Prognosis is good. She can read words from her lexicon OK. Her difficulty is connecting them with ideas in the mind. So it's just a process of generating alternate pathways. I wonder if she can write or type in complete sentences. I wonder if there's a way to prompt the language pathways of the brain to act with equipotentiality, the same as they did during childhood, to help facilitate the regenerative process. Apparently music can help because it activates a greater brain area ..and she can sing the words she has difficulty coming up with on her own. Spontaneously, however, she doesn't speak in full sentences yet. Her two-word utterances show a 'return to the kernel' ..meaning she can express the main idea without generating the phrase structure necessary to produce a full sentence. Hopefully, she hasn't lost the rules of grammar ..only the ability to pick out the words to express them.

Kernel: When asked if she wants to return to Congress, she replies: "No, better!"
Generative grammar: Two embedded verb phrases are required to turn the kernel "No, better!" into the sentence: "No, I want to get better first."

Saturday, October 15, 2011

Network theory of communication

I have a theory that says whenever messages are transmitted between people in different locations, the accuracy of communication drops by 60%. I call it the 'displacement theory of communication' and it's an extension of findings in the field of human information-processing [link].
This drop in communication is wide-scale and can occur anywhere from cell phones to air traffic control systems. Messages are by nature incomplete and often assume knowledge of local conditions that isn't available to the receiver. Without exacting protocols, like those developed in the air traffic control industry, incomplete messages are at best probabilistic and rely on the receiver to supply the most likely intended meaning. Since this is an innate function of human information-processing, it can happen quickly and imperceptibly. When it does, we are prone to making overconfident and faulty decisions about the most likely meaning intended. It has long been known that the most frequent decision we make during conversation is about the intention of others ..it's also the one we get wrong most often. So, Facebook users and text messagers ..beware! We are making the rules up as we go.

Tuesday, July 12, 2011

speech recognition

The way we process and interpret speech is largely dependent on the neuroanatomy of the brain. Speech signals must travel from lower to higher regions before anything resembling 'someone speaking' can be perceived. Sound waves enter the ear canal, where they are first broken down into their component frequencies, or 'tones'. Individual tones are then converted to signals that get transmitted, over auditory pathways, to higher centers of the brain responsible for processing and synthesizing the complex signals of speech, such as phonemes, which are essentially complex bursts of multiple frequencies [link]. After sufficient cycles of phonemic synthesis, the phonic representation of a word is formed. Compound signals representing word-sounds are then passed to higher centers of the auditory cortex (Wernicke's area), where word meaning is retrieved from areas of the cerebral cortex where semantic processing is performed.
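The first step – breaking a sound wave into its component frequencies – can be illustrated with a discrete Fourier transform. The signal below is a made-up two-tone mixture, not real speech:

```python
import numpy as np

# A made-up two-tone 'sound wave' (50 Hz + 120 Hz), sampled for one second,
# decomposed into its component frequencies with a real-input FFT.
rate = 1000                       # samples per second
t = np.arange(0, 1, 1 / rate)     # one second of time points
wave = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), d=1 / rate)

# The two strongest bins recover the tones we mixed in.
peaks = sorted(freqs[np.argsort(spectrum)[-2:]].tolist())
print(peaks)  # [50.0, 120.0]
```

The cochlea does something loosely analogous in hardware: different frequencies excite different hair cells, so the 'spectrum' is computed mechanically before any neural signal is sent upstream.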

Monday, July 11, 2011

agents of expression


Most children learn to speak and understand what’s said effortlessly. It’s a spontaneous process that doesn’t require classroom training. The brain is innately tuned to extract the rules of spoken language. Observations show that parents rarely correct for rules of grammar during early childhood. However, they frequently correct for the rules of semantics ..making sure their children convey the proper idea [link]. That’s why it’s interesting for me to see that, while children may discover the correct rules of grammar on their own ..by adolescence they’re playing pretty loose with the rules of semantics they’d been taught. In other words, they frequently use well-formed sentences to fabricate and misrepresent what’s going on.

Sunday, May 15, 2011

Reading behavior

“Our universities deliver education in English ..[so] we should teach reading in the language that will be most useful.” Letter to the LATimes re. dual-language immersion ~ [link ]
As reasonable as this may sound ..it is not consistent with the way nature prepares children to read. Nor is it supported by the state of the art in neuroscience and language development. The language children are going to need in college isn't as important for reading education as their native language. Learning to read in one's native language is the most effective route to fluency. That's because learning to read starts out as a process of linking the sound of words on paper to their meaning in memory [link]. This puts children from non-English backgrounds at a disadvantage when trying to read English first: they have no 'phonic memory' for it. That may help account for the high percentage of high school students in the U.S. who cannot read or write well. Furthermore, it is widely known that reading fluency in one language transfers readily to another [link]. It only makes sense to teach children to read in a way that assures early success in one language and boosts their chances of future achievement in others.

Wednesday, March 23, 2011

Remembering David Rumelhart

“Language, like most knowledge, relies mainly on memory and is represented in the brain by a network of connected meaning.” ~ David Rumelhart [link]
While at the University of California, San Diego, David Rumelhart developed the ‘adaptive structural network’ model for both encoding and retrieving information in long-term memory. According to his model, information is stored in a database and retrieved by an active interpretive process. Storage is a process of construction from a sensory-base whereas retrieval is a process of re-construction from a conceptual base [link]. David’s contributions influenced my field of study; he informed the direction I took and the decisions I made. He will be missed.

Tuesday, March 22, 2011

Aphasia

Written in response to an article in the LATimes ~>[progressive aphasia]
No wonder we don’t know how to relieve aphasia: we still talk about it as though it were a speech problem. It’s actually a memory problem. What’s lost are the pathways that enable look-up and retrieval of words stored in memory. It only presents itself as a speech problem at early onset [link]. That’s why aphasia doesn’t lend itself to speech therapy. Treatments that focus on memory skills for word-retrieval are more helpful [link].
Symptoms: First, you have difficulty finding the right pronouns and names for things. They may escape the speaker entirely. Verb usage generally remains intact. “I can’t find the right world.” comes out instead of “I can’t find the right word.” “I’m going to the office.” in place of “I’m going to the store.” Words that sound alike frequently get switched: “I’m going dental.” for “I’m going mental.” When I think of all the steps that have to be performed in a fraction of a second, and in the right sequence ..I’m surprised speech is possible at all. Even though speaking feels like a single, automatic process ..it’s by no means a single skill. When you break it down, it looks something like this:

Idea ~> Lexical Look-up & Selection ~> Context Integration ~> Syntax Generation ~> Speech Timing & Production
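The stages above can be sketched as a pipeline of functions. The function bodies and the sample sentence are illustrative stand-ins, not a model from the post; only the sequencing mirrors the diagram:

```python
# Toy stand-ins for each stage in the diagram; each body is a placeholder,
# but the order of operations mirrors the speech-production pipeline.
def lexical_lookup(idea):
    """Idea -> candidate words pulled from the lexicon."""
    return {"idea": idea, "words": ["go", "store"]}

def integrate_context(selection):
    """Fold in situational context (here, a hard-coded errand)."""
    return {**selection, "context": "errand"}

def generate_syntax(frame):
    """Arrange the frame into a well-formed phrase structure."""
    return "I am going to the store"

def produce_speech(sentence):
    """Timing and production: emit the final utterance."""
    return sentence + "."

utterance = produce_speech(generate_syntax(integrate_context(lexical_lookup("shopping trip"))))
print(utterance)  # I am going to the store.
```

A break anywhere in the chain leaves the later stages with nothing to work on, which is why a single severed pathway can stop fluent speech even when every individual skill is intact.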

Tuesday, March 15, 2011

Psychology of facebook

Presented to the Santa Barbara Institute
for Consciousness Studies
Part two
Continued from [ part one ] below: Now I want to talk about ‘discourse analysis’ and what it reveals about communication over social networks. Discourse analysis is the branch of psychology dealing with the way people process information from what they hear and read. I think it’s telling.

Face-to-face communication is a probabilistic event. Language is a relatively narrow band of communication that can only suggest what the speaker has in mind. This presents the listener with a range of possibilities. Communication is successful only when the listener infers the most likely meaning intended by the speaker. Ordinary conversation is generally successful because we have context to help guide us along. We rely on facial expressions, intonation, emphasis, location and other visual and auditory cues.

However, where ordinary communication is probabilistic, text messaging is a crapshoot. Text is cryptic. Context is lost and we rely on memory to supply the missing cues. However, memory is fallible. Research in discourse processing has shown that the biggest piece of missing information we supply is the intention of the speaker ..and it’s their intention that we most often get wrong. We perceive threat where none was intended, offense at what may have only been sarcasm.

By nature, the flow of conscious experience is displaced over social networks. This simply means it occurs outside the context of our immediate situation. That’s the beauty of the Internet. It allows us to share experiences that are ‘displaced’ in time and space with users from all over the world. It also places a heavy burden on text comprehension, which is much less developed than speech comprehension in the language centers of the brain. I believe this will provide a rich source of field observation for the study of human consciousness for years.

Sunday, February 27, 2011

Bilingual advantage

“If you walk into a room, where a million things can attract your attention, how does your mind pay attention to what you need to pay attention to without getting distracted?” Dr Ellen Bialystok [link]
How we manage to stay ‘tuned-in’ is one of the miracles of modern humanity. Although we take it for granted, not all species are so equipped. The ability to focus, and to quickly switch focus, is part of a legacy system handed down from our ancestors [link]. Neuroscience locates this system in the prefrontal cortex. That’s the part of the brain responsible for focusing attention, ignoring distractions and holding different scenarios in mind while staying on track. According to the Journal of Neurology [link], bilinguals have an advantage here that lasts a lifetime. People who can hold a mental narrative in two languages ..and choose the one that best expresses their thoughts in conversation, also excel at swiftly deciding what’s important in situations where they’re presented with both relevant and irrelevant information.

Thursday, October 21, 2010

Quality of understanding

“The meaning of a sentence is derived from the original words by an active, interpretive process. The original sentence that is perceived is rapidly forgotten and memory is for the information (meaning) contained in the sentence” ~ Jacqueline Sachs [link].
For years, neuro-linguists have studied what remains after we hear somebody speak. What they’ve come up with is something that resembles a three-dimensional network inside our head. The network is made up of propositions (coded events), scripts (sequences of coded events) and associated images and feelings. Although part of the network is constructed from the original sentence ..most of it is supplied by the past experience of the listener. What we come away with is a feeling of resonance and familiarity, based largely on our own beliefs and experience ..and not necessarily the meaning intended by the speaker. These findings are consistent with the construction-integration model of narrative comprehension proposed by psychologist Walter Kintsch [link].

Thursday, September 02, 2010

Sensory orientation

Presented to the
Santa Barbara Institute for Consciousness Studies
It was interesting for me to see a recent study in neuroscience that supports my theory of reading comprehension [link]. Bear with me while I try to explain (or you can duck out now and I won’t be offended). What they found is that working memory interacts with the senses in order to produce a stable view of our surroundings and reduce errors of perception. For one thing, it has to identify signals that are the result of actual sensory events and filter out extraneous signals produced by fluctuations inside the nervous system itself (like those caused by changes in activity levels, neurotransmitter concentrations, circadian rhythms, etc.). Neuroscientists refer to this as the ‘sensory orientation’ function [link]. The visual areas of the brain must distinguish changes in actual sensory events from changes in internal activity in order to follow the ‘genuine’ action. They claim that the brain makes this estimate based on principles of ‘Bayesian inference’, which are not much different from principles of ‘pragmatic inference’. It works something like this: incoming signals that are considered likely to occur, based on the contents of working memory, are given a boost. Signals considered less likely are held in abeyance and suppressed if subsequent events don’t do anything to rehabilitate them.
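A minimal sketch of the ‘boost likely, suppress unlikely’ idea; the gain rule and the numbers are my own illustrative assumptions, not anything from the study:

```python
# Illustrative gain rule: signals expected under working memory (prior > 0.5)
# are amplified; unexpected ones (prior < 0.5) are attenuated.
def gate(signal_strength, prior_from_working_memory, gain=2.0):
    """Scale a sensory signal by how expected it is given working memory."""
    return signal_strength * (1 + gain * (prior_from_working_memory - 0.5))

boosted = gate(1.0, prior_from_working_memory=0.9)     # expected signal
suppressed = gate(1.0, prior_from_working_memory=0.1)  # unexpected signal
print(round(boosted, 3), round(suppressed, 3))  # 1.8 0.2
```

The same input arrives in both cases; only the prior supplied by working memory differs, which is enough to decide which signal survives to guide perception.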

Saturday, December 29, 2007

Recapitulation theory

Research proposal
Presented to the Seminar in Research Methods
The purpose of this experiment is to test the hypothesis that adults can learn languages as easily as children when the method of instruction simulates conditions found in early childhood.
Transcripts of early speech show a reliable trend. Language development occurs in stages that correspond to increasing degrees of derivational complexity. This means fewer and simpler transformational rules appear in children’s speech before larger sets of more complex rules begin to emerge. In addition, children learn their first language without formal training. It occurs spontaneously. There is no evidence of selective pressure for the development of well-formed sentences. It is an innate process that requires only participation in a verbal community.
  ~>[Read more]

Thursday, October 25, 2007

Second languages

Presented to the
Seminar in Learning Theory

Tribute to Noam Chomsky
A survey of the literature suggests that the same learning principles underlie both native and foreign languages. If the focus of instruction is on communicative intent, rather than phonological repetition, then learning a foreign language recapitulates the stages that children follow when learning their first language. Contrary to popular belief, adults may have an advantage over children. Chomsky’s review of Skinner’s ‘Verbal Behavior’ has been hailed as the most influential document in the history of psychology. Nowhere is this more evident than in the recent literature on language development ~>[Read more]