Detecting a specific stochastic pattern embedded in an unknown background noise is a difficult pattern recognition problem. A similar problem appears when trying to detect a multi-neural spike pattern in a single electrical recording, embedded in the complex cortical activity of a behaving animal. The technical difficulty of this detection is due to the lack of a good statistical model
We describe, analyze, and evaluate experimentally a new probabilistic model for word-sequence prediction in natural language based on prediction suffix trees (PSTs). By using efficient data structures, we extend the notion of PST to unbounded vocabularies. We also show how to use a Bayesian approach based on recursive priors over all possible PSTs to efficiently maintain tree mixtures. These mixtures have provably and practically better performance than almost any single model. We evaluate the model on several corpora. The low perplexity achieved by relatively small PST mixture models suggests that they may be an advantageous alternative, both theoretically and practically, to the widely used n-gram models.
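The core data structure in this abstract, a prediction suffix tree, can be illustrated with a toy sketch: each node stores next-word counts for the context spelled out (in reverse) along its path, and prediction uses the deepest node matching the current context suffix. This is an illustrative simplification, not the paper's implementation; the class names, the fixed depth bound, and the add-one smoothing are all assumptions for the example.

```python
from collections import defaultdict

class PSTNode:
    """Node of a toy prediction suffix tree: next-word counts for the
    (reversed) context spelled along the path from the root to this node."""
    def __init__(self):
        self.counts = defaultdict(int)  # next word -> count
        self.children = {}              # preceding context word -> PSTNode

class PST:
    """Minimal sketch: contexts up to max_depth, add-one smoothed predictions."""
    def __init__(self, max_depth=2):
        self.root = PSTNode()
        self.max_depth = max_depth
        self.vocab = set()

    def train(self, words):
        self.vocab.update(words)
        for i, w in enumerate(words):
            node = self.root
            node.counts[w] += 1
            # walk backward through the context, deepening the tree
            for d in range(1, self.max_depth + 1):
                if i - d < 0:
                    break
                node = node.children.setdefault(words[i - d], PSTNode())
                node.counts[w] += 1

    def prob(self, word, context):
        """P(word | context) read off the deepest matching suffix node."""
        node = self.root
        for ctx in reversed(context[-self.max_depth:]):
            if ctx not in node.children:
                break
            node = node.children[ctx]
        total = sum(node.counts.values())
        return (node.counts[word] + 1) / (total + len(self.vocab))
```

A real system of the kind described would additionally mix predictions from all suffix nodes under a recursive prior rather than commit to the single deepest match.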
The attentional blink (AB) effect is the reduced ability of subjects to report a second target stimulus (T2) among a rapidly presented series of distractors, when it appears within a time window of about 200-500 ms after a first target (T1). We present a simple mathematical model explaining the AB as resulting from the temporal response dynamics of a stochastic, linear system with threshold, whose output quantifies the allocation of attentional resources to incoming sensory stimuli. The model postulates that attention capacity is modulated by activity of the default mode network (DMN), a correlated set of brain regions related to task-irrelevant processing which is known to exhibit reduced activation following mental training such as mindfulness meditation (MM). The model suggests a parsimonious computational account for explaining and relating several key findings from the AB, DMN and MM research literature, as well as providing a framework for generating novel predictions.
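The model class named in this abstract, a stochastic linear system with a threshold, can be sketched in a few lines: T1 depletes an attentional resource that recovers with first-order linear dynamics plus noise, and T2 is reported only if the resource has recovered past a threshold by T2 onset. All parameter values and function names below are illustrative assumptions, not taken from the paper.

```python
import random

def simulate_trial(lag_ms, tau=300.0, depletion=0.9, threshold=0.6,
                   noise_sd=0.03, dt=10.0, seed=0):
    """Toy AB sketch: after T1 the attentional resource r drops by
    `depletion` and recovers toward the baseline 1.0 with first-order
    linear dynamics (time constant tau) plus additive Gaussian noise.
    T2 is reported iff r exceeds `threshold` at its onset lag.
    Illustrative parameters only."""
    rng = random.Random(seed)
    r = 1.0 - depletion          # resource immediately after T1
    t = 0.0
    while t < lag_ms:
        r += (1.0 - r) * (dt / tau) + rng.gauss(0.0, noise_sd)
        r = min(max(r, 0.0), 1.0)
        t += dt
    return r >= threshold

def report_rate(lag_ms, trials=500):
    """Fraction of trials on which T2 is reported at a given T1-T2 lag."""
    return sum(simulate_trial(lag_ms, seed=s) for s in range(trials)) / trials
```

With these illustrative settings the report rate is low at short lags and recovers at long lags, reproducing the qualitative U-shaped blink curve.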
Papers by Naftali Tishby