Showing posts with label bayes theorem. Show all posts

Tuesday, December 15, 2015

Bayesian AI

Bayesian Program Learning (BPL): a probability-based approach that can deal with variation during recognition tasks – like the ability to recognize a Segway from different angles, or embedded in scenes of differing complexity, after seeing it only once in a magazine. Traditional AI required storing many occurrences of an object or event (E), along with many occurrences of 'not E' or 'variations of E', in order to correctly identify E under different conditions.

                  E plus ( €, ∑, £, Æ, È, Ę, Ǝ, Ǯ, ʒ, Ξ, Σ, З, Э, Ѥ, Ѱ, ₣, ₤, €, 8 )

  •   A 3-year-old can correctly identify an event (E) with 95% accuracy after seeing it only once – an example of single-trial learning.
  •   Traditional AI can identify event (E) with only 75% accuracy from a single stored occurrence: P(H|D), where event (E) is the hypothesis (H) and (D) is a stored data-point.
  •   Bayesian AI can identify an event (E) with 95% accuracy after one occurrence.
Bayes recognition is probabilistic – it compares the probabilities of getting E across different variations and contexts of (E) and returns the hypothesis with the highest posterior value P(H|D). Bayesian probability comes closer to human recognition memory during single-trial learning. (LA Times)
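The comparison described above can be sketched in a few lines of Python. This is a minimal illustration with made-up likelihood and prior numbers, not the actual BPL algorithm: H is "the object is E," D is the observed (possibly distorted) view, and the posterior is Bayes' rule normalized over all candidate hypotheses.

```python
# Minimal sketch of probabilistic recognition via Bayes' rule.
# All numbers here are hypothetical, chosen for illustration only.
def posterior(p_d_given_h, p_h, alt_likelihoods, alt_priors):
    """P(H|D) = P(D|H)P(H) / [P(D|H)P(H) + sum_i P(D|Hi)P(Hi)]."""
    numerator = p_d_given_h * p_h
    evidence = numerator + sum(l * p for l, p in zip(alt_likelihoods, alt_priors))
    return numerator / evidence

# One stored occurrence of E, now seen from a new angle: the distorted
# view is still far more likely under "E" than under the alternatives.
p = posterior(0.8, 0.5, [0.1, 0.1], [0.25, 0.25])
print(round(p, 3))  # → 0.889
```

Even with a single stored exemplar, the hypothesis E wins because the comparison is relative: what matters is how the likelihoods rank, not how many examples were stored.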

Friday, July 29, 2011

Sensory sampling

Rubén Moreno-Bote, David C. Knill, Alexandre Pouget, Bayesian sampling in visual perception, PNAS July 26, 2011, vol. 108, no. 30, 12491-12496.
Abstract: It is well-established that some aspects of perception and action can be understood as probabilistic inferences over underlying probability distributions. In some situations, it would be advantageous for the nervous system to sample interpretations from a probability distribution rather than commit to a particular interpretation. In this study, we asked whether visual percepts correspond to samples from the probability distribution over image interpretations, a form of sampling that we refer to as Bayesian sampling. To test this idea, we manipulated pairs of sensory cues in a bistable display consisting of two superimposed moving drifting gratings, and we asked subjects to report their perceived changes in depth ordering. We report that the fractions of dominance of each percept follow the multiplicative rule predicted by Bayesian sampling. Furthermore, we show that attractor neural networks can sample probability distributions if input currents add linearly and encode probability distributions with probabilistic population codes [link].
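The multiplicative rule the abstract refers to can be illustrated with a short sketch. This is a simplified, hypothetical rendering (the numbers and the two-percept setup are assumptions, not the paper's data): the fraction of time each percept dominates is taken proportional to the product of the likelihoods that each sensory cue assigns to that percept.

```python
# Hedged sketch of the multiplicative rule for Bayesian sampling:
# dominance fraction of a percept ∝ product of per-cue likelihoods.
def dominance_fractions(cue1_likelihoods, cue2_likelihoods):
    products = [a * b for a, b in zip(cue1_likelihoods, cue2_likelihoods)]
    total = sum(products)
    return [p / total for p in products]

# Two percepts ("grating A in front" vs. "grating B in front"),
# each cue weakly favoring percept A (illustrative values).
fracs = dominance_fractions([0.6, 0.4], [0.7, 0.3])
print(fracs)  # percept A dominates ~78% of the time
```

If percepts are samples from the posterior, combining two independent cues multiplies their likelihoods, which is exactly what the normalized product above computes.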

Sunday, July 17, 2011

Bayesian inference

Bayesian inference is a function of Bayesian probability. Bayesian probability is a measure of the likelihood of a desired outcome (H) – the Colts winning the playoffs, for instance – based on the conditional probabilities computed for a set of event-sequences (D) that would lead to the desired outcome, adjusted for (divided by) the conditional probabilities computed for the event-sequences leading to other possible outcomes (Hi) – Dallas, the Giants, or the Eagles winning the playoffs.
Bayesian inference may be native to the way people make judgments. At the level of sensory processing, studies show that the nervous system perpetually distinguishes the most relevant signals from incidental or peripheral signals, using likelihood estimates of a Bayesian sort. Signals that are the most likely outcome of ongoing activity, based on the contents of working memory, are given a boost. Signals considered less likely are held in abeyance and suppressed if subsequent events do nothing to rehabilitate them.
In Bayesian terms, where H is the candidate signal and D is the current state of sensory memory, the probability that H will be the winning candidate, P(H|D), is a function of the likelihood of the sensory evidence under H, P(D|H), weighted against P(D|Hi) – the probability that the contents of sensory memory might instead favor other winning candidates (Hi).
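The inference described above can be worked through with the playoff example. The numbers below are hypothetical, chosen only to show the mechanics: each team gets a prior P(H) and a likelihood P(D|H) for the observed event-sequence D, and the posterior divides each product by the sum over all candidates.

```python
# Worked instance of Bayes' rule with hypothetical playoff numbers.
priors = {"Colts": 0.30, "Cowboys": 0.25, "Giants": 0.25, "Eagles": 0.20}
# P(D|H): how likely the observed event-sequence D is under each hypothesis.
likelihoods = {"Colts": 0.6, "Cowboys": 0.2, "Giants": 0.1, "Eagles": 0.1}

# Evidence: total probability of D across all competing hypotheses.
evidence = sum(priors[h] * likelihoods[h] for h in priors)
# Posterior P(H|D) for each candidate.
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
print(round(posteriors["Colts"], 3))  # → 0.655
```

Note that the posteriors sum to 1: the "adjustment" in the prose above is just this normalization, which pits each candidate's evidence against that of every rival.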