Papers by Alexander Grunewald
Temporal dynamics of visual processing. Author: Alexander Grunewald; Major Professor: Stephen Grossberg. Doctoral Dissertation. An abstract is not available.
Neural Networks, Mar 1, 2002
A neural model is developed to probe how corticogeniculate feedback may contribute to the dynamics of binocular vision. Feedforward and feedback interactions among retinal, lateral geniculate, and cortical simple and complex cells are used to simulate psychophysical and neurobiological data concerning the dynamics of binocular disparity processing, including correct registration of disparity in response to dynamically changing stimuli, binocular summation of weak stimuli, and fusion of anticorrelated stimuli when they are delayed, but not when they are simultaneous. The model exploits dynamic rebounds between opponent ON and OFF cells that are due to imbalances in habituative transmitter gates. It shows how corticogeniculate feedback can carry out a top-down matching process that inhibits incorrect disparity responses and reduces persistence of previously correct responses to dynamically changing displays.
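The rebound mechanism described here can be illustrated with a minimal numerical sketch. The snippet below assumes a standard habituative transmitter gate of the form dz/dt = ε(1 − z) − zS with gated output Sz in opponent ON and OFF channels; the parameter values, the function name, and the stimulus timing are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Habituative transmitter gate: dz/dt = eps * (1 - z) - z * S
# where S is the channel's input signal; the gated output is S * z.
# Opponent ON/OFF channels share a tonic arousal input I; the ON channel
# additionally receives the phasic stimulus J while it is present.
# Parameter values below are illustrative assumptions, not from the paper.

def simulate_rebound(T=4.0, dt=0.001, eps=0.05, I=1.0, J=1.0, t_off=2.0):
    steps = int(T / dt)
    z_on = z_off = 1.0          # transmitter gates start fully accumulated
    out = np.zeros((steps, 2))  # rectified (ON - OFF) and (OFF - ON) outputs
    for k in range(steps):
        t = k * dt
        s_on = I + (J if t < t_off else 0.0)   # ON input: arousal + stimulus
        s_off = I                              # OFF input: arousal only
        # habituative gate dynamics (Euler step)
        z_on += dt * (eps * (1.0 - z_on) - z_on * s_on)
        z_off += dt * (eps * (1.0 - z_off) - z_off * s_off)
        g_on, g_off = s_on * z_on, s_off * z_off   # gated signals
        out[k] = max(g_on - g_off, 0.0), max(g_off - g_on, 0.0)
    return out

resp = simulate_rebound()
print("during stimulus (ON wins):  ", resp[200])
print("after offset (OFF rebound): ", resp[2500])
```

Under these assumptions the ON channel dominates while the stimulus is present, and because its gate is more depleted at stimulus offset, the equal arousal input then transiently favors the OFF channel. This antagonistic rebound is the kind of reset signal the abstract invokes to curtail persistence of outdated disparity responses.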
Perception, Aug 1, 1996
A recent model of motion perception suggests that the motion aftereffect (MAE) is due to an interaction across all directions, rather than just opposite directions (Grunewald, 1995 Perception 24 Supplement, 111). According to the model, the MAE is caused by the interaction of broadly tuned inhibition and narrowly tuned excitation, both in direction space. The model correctly suggests that, after adaptation to opposite directions of motion, no MAE results. Unlike other accounts of the MAE, this model predicts that, after adaptation to opposite but broadly defined directions of motion, an MAE orthogonal to the inducing motions is observed. We tested this counter-intuitive prediction by adapting subjects to two populations of dots, whose average motion vectors were opposite, but which contained motion vectors deviating slightly (up to 30°) from the average direction. During the subsequent test phase, randomly moving dots were displayed. Subjects were asked to indicate whether they perceived any global motion during this phase, and if so, they were asked to indicate the perceived motion axis by aligning a line. Subjects were tested on four pairs of directions: vertical, horizontal, and the two diagonals. In all four conditions subjects reported seeing an MAE, and the axis that they indicated was always orthogonal to the inducing motions (ANOVA: p<0.001, accounted for 95% of variance). This experiment confirms the predictions made by the model, thus further supporting the interaction across all directions of narrowly tuned excitation and broadly tuned inhibition.
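As a rough illustration of why adaptation to two opposite, broadly spread directions can leave an aftereffect on the orthogonal axis, the sketch below sets up a generic direction ring with adapted gains feeding a stage with narrow excitation and broad inhibition. The Gaussian tuning widths, the adaptation depth, and the 0.8 inhibition weight are assumptions chosen for illustration, not parameters of the model in the abstract.

```python
import numpy as np

dirs = np.arange(360.0)                      # preferred directions (deg)

def ang_dist(a, b):
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

def gauss(a, mu, sigma):
    return np.exp(-ang_dist(a, mu) ** 2 / (2.0 * sigma ** 2))

def circ_conv(signal, kernel):
    # circular convolution over the direction ring via FFT
    k = kernel / kernel.sum()
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(k)))

# Adaptation: gain loss around 0 deg and 180 deg (spread ~30 deg, as in the
# broadly defined adapting populations of the experiment)
gain = 1.0 - 0.5 * (gauss(dirs, 0.0, 30.0) + gauss(dirs, 180.0, 30.0))

# First stage: response to an unbiased random-dot test (uniform drive)
stage1 = gain * 1.0

# Second stage: narrow excitation from like-tuned units, broad inhibition
stage2 = np.maximum(circ_conv(stage1, gauss(dirs, 0.0, 15.0))
                    - 0.8 * circ_conv(stage1, gauss(dirs, 0.0, 80.0)), 0.0)

axis = dirs[np.argmax(stage2)]
print(axis, (axis + 180.0) % 360.0)   # ~90 and 270 deg
```

With these assumed values the rectified second-stage profile peaks near 90° and 270°, i.e., along the axis orthogonal to the adapted directions, which is the qualitative prediction the experiment tested.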
Journal of Neurophysiology, Jul 1, 1999
Investigative Ophthalmology & Visual Science, 1997
Purpose. Recent modeling and psychophysical research has shown that adaptation to simultaneously presented opposite directions of motion causes a motion aftereffect (MAE) along the orthogonal axis (Nature, 384: 358-360). The observers' subjective report indicated that the orthogonal MAE may be equivalent to low signal-to-noise bidirectional motion. We tested this in an experiment. Methods. The experiment had two phases: adaptation and test. The adaptation conditions were: 1) opposite motion, 2) unbiased noise. During the test phase one of four random dot displays was used: a) no bias, b) 10% bias up, c) 10% bias down, d) 10% bias up and 10% bias down. Subjects indicated whether they saw global motion up, down, or along both directions during the test phase. Results. Subjects perceived the biased test conditions (b-d) veridically. In the unbiased test condition (a) subjects reported seeing motion along both directions following opposite motion adaptation (1). In particular, condit...
How does the brain group together different parts of an object into a coherent visual object representation? Different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a process that resynchronizes cortical activities corresponding to the same retinal object. A neural network model is presented that is able to rapidly resynchronize desynchronized neural activities. The model provides a link between perceptual and brain data. Model properties quantitatively simulate perceptual framing data, including psychophysical data about temporal order judgments and the reduction of threshold contrast as a function of stimulus length. Such a model has earlier been used to explain data about illusory contour formation, texture segregation, shape-from-shading, 3-D vision, and cortical receptive fields. The model hereby shows how many data may be understood as manifestations of a cortical grouping process that can rapidly resynchronize image parts which belong together in visual object representations.
Air Force Office of Scientific Research (F49620-92-J-0225); Defense Advanced Research Projects Agency (ONR N00014-92-J-4015); National Science Foundation (IRI-90-24877); Office of Naval Research
A model of human motion perception is presented. The model contains two stages of direction-selective units. The first stage contains broadly tuned units, while the second stage contains units that are narrowly tuned. The model accounts for the motion aftereffect through adapting units at the first stage and inhibitory interactions at the second stage. The model explains how two populations of dots moving in slightly different directions are perceived as a single population moving in the direction of the vector sum, and how two populations moving in strongly different directions are perceived as transparent motion. The model also explains why the motion aftereffect in both cases appears as non-transparent motion.
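A compact way to see how narrow excitation combined with broad inhibition can yield either a single merged direction or two transparent directions is to vary the separation between two broadly tuned input bumps, as in the sketch below. The tuning widths and the 0.8 inhibition weight are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

dirs = np.arange(360.0)

def ang_dist(a, b):
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

def gauss(a, mu, sigma):
    return np.exp(-ang_dist(a, mu) ** 2 / (2.0 * sigma ** 2))

def circ_conv(x, k):
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k / k.sum())))

def second_stage_peaks(delta):
    """Directions of local maxima for two dot populations 'delta' deg apart."""
    d1, d2 = 90.0 - delta / 2.0, 90.0 + delta / 2.0
    stage1 = gauss(dirs, d1, 40.0) + gauss(dirs, d2, 40.0)   # broad tuning
    r = np.maximum(circ_conv(stage1, gauss(dirs, 0.0, 15.0))
                   - 0.8 * circ_conv(stage1, gauss(dirs, 0.0, 80.0)), 0.0)
    left, right = np.roll(r, 1), np.roll(r, -1)
    return dirs[(r > left) & (r >= right) & (r > 0)]

print(second_stage_peaks(30.0))    # one peak near 90 deg (coherent motion)
print(second_stage_peaks(140.0))   # two peaks (transparent motion)
```

For a small separation the rectified second-stage profile has a single maximum near the mean direction; for a large separation it keeps two maxima, matching the coherent-versus-transparent distinction the abstract describes.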
The Journal of Neuroscience: The Official Journal of the Society for Neuroscience
A model of transparent motion and non-transparent motion aftereffects. Grunewald A. Published in: Advances in Neural Information Processing Systems 8, 837-843, 1996. MIT Press. Cited by: 3.
Motion repulsion is the illusory enlargement of the angle between objects moving in two different directions of motion. Previous work suggests that motion repulsion occurs under dichoptic conditions, and therefore is binocular. In reference repulsion the direction of motion is misperceived even if only a single direction of motion is presented. In an experiment I show that repulsion under dichoptic conditions is correlated with reference repulsion, but not with binocular motion repulsion. This suggests that motion repulsion proper, which occurs over and beyond reference repulsion, does not occur under dichoptic conditions, implying that motion repulsion is monocular. 2004 Published by Elsevier Ltd.
Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10, Jul 9, 1998
A model of motion detection is presented. The model contains three stages. The first stage is unoriented and is selective for contrast polarities. The next two stages work in parallel. A phase insensitive stage pools across different contrast polarities through a spatiotemporal filter and thus can detect first and second order motion. A phase sensitive stage keeps contrast polarities separate, each of which is filtered through a spatiotemporal filter, and thus only first order motion can be detected. Differential phase sensitivity can therefore account for the detection of first and second order motion. Phase insensitive detectors correspond to cortical complex cells, and phase sensitive detectors to simple cells.
How does the brain group together different parts of an object into a coherent visual object representation? Different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a process that resynchronizes cortical activities corresponding to the same retinal object. A neural network model is presented that is able to rapidly resynchronize desynchronized neural activities. The model provides a link between perceptual and brain data. Model properties quantitatively simulate perceptual framing data, including psychophysical data about temporal order judgments and the reduction of threshold contrast as a function of stimulus length. Such a model has earlier been used to explain data about illusory contour formation, texture segregation, shape-from-shading, 3-D vision, and cortical receptive fields. The model hereby shows how many data may be understood as manifestations of a cortical grouping process that can rapidly resynchronize image parts which belong together in visual object representations. The model exhibits better synchronization in the presence of noise than without noise, a type of stochastic resonance, and synchronizes robustly when cells that represent different stimulus orientations compete. These properties arise when fast long-range cooperation and slow short-range competition interact via nonlinear feedback interactions with cells that obey shunting equations.
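The shunting (membrane) equation named at the end of the abstract has the generic form dx/dt = −Ax + (B − x)E − (D + x)I, where E and I are total excitatory and inhibitory inputs; in the model, E would include fast long-range cooperative feedback and I slow short-range competition. The sketch below only illustrates the basic shunting dynamics with assumed constants and arbitrary test inputs; it is not the network from the paper.

```python
import numpy as np

# Generic shunting equation: dx/dt = -A*x + (B - x)*E - (D + x)*I
# A: passive decay; B: excitatory saturation ceiling; D: inhibitory floor.
# E and I here are arbitrary constant test inputs (an illustrative assumption);
# in the framing model they would carry cooperative and competitive feedback.

def shunt(E, I, A=1.0, B=1.0, D=0.5, dt=0.001, T=2.0):
    x = 0.0
    trace = []
    for _ in range(int(T / dt)):
        x += dt * (-A * x + (B - x) * E - (D + x) * I)
        trace.append(x)
    return np.array(trace)

# Whatever the input strengths, the activity stays within [-D, B]:
for E, I in [(1.0, 0.0), (100.0, 0.0), (0.0, 100.0), (50.0, 50.0)]:
    x = shunt(E, I)
    print(f"E={E:6.1f} I={I:6.1f} -> final x={x[-1]: .3f}")
# Large inputs saturate gracefully (automatic gain control) rather than
# growing without bound, which is what lets such cells support stable
# feedback cooperation and competition.
```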
Binding of Object Representations by Synchronous Cortical Dynamics Explains Temporal Order and Spatial Pooling Data. Alexander Grunewald, Department of Cognitive and Neural Systems, Boston University, Boston, MA 02215, alex@cns.bu.edu; Stephen Grossberg, Department ...
Journal of Managed Care Pharmacy: JMCP, 2011