In Press
Harrison, W.J. & Bex, P.J. Integrating retinotopic features in spatiotopic coordinates. Journal of Neuroscience (In Press).
The receptive fields of early visual neurons are anchored in retinotopic coordinates (Hubel and Wiesel, 1962). Eye movements shift these receptive fields and therefore require that different populations of neurons encode an object's constituent features across saccades. Whether feature groupings are preserved across successive fixations or processing starts anew with each fixation has been hotly debated (Melcher and Morrone, 2003; Melcher, 2005; Knapen et al., 2009; Cavanagh et al., 2010a; 2010b; Melcher, 2010; Morris et al., 2010). Here we show that feature integration initially occurs within retinotopic coordinates, but is then conserved within a spatiotopic coordinate frame independent of where the features fall on the retinas. With human observers, we first found that the relative timing of visual features plays a critical role in determining the spatial area over which features are grouped. We exploited this temporal dependence of feature integration to show that features co-occurring within 45 ms remain grouped across eye movements. Our results thus challenge purely feed-forward models of feature integration (Pelli, 2008; Freeman and Simoncelli, 2011) that begin de novo after every eye movement, and implicate the involvement of brain areas beyond early visual cortex. The strong temporal dependence we quantify, and its link with trans-saccadic object perception, instead suggest that feature integration depends, at least in part, on feedback from higher brain areas (Mumford, 1992; Rao and Ballard, 1999; Di Lollo et al., 2000; Moore and Armstrong, 2003; Stanford et al., 2010).
Harrison, W.J., Remington, R.W. & Mattingley, J.B. Visual crowding is anisotropic along the horizontal meridian during smooth pursuit. Journal of Vision 14(1):21, 1–16 (2014).

Humans make smooth pursuit eye movements to foveate moving objects of interest. It is known that smooth pursuit alters visual processing, but there is currently no consensus on whether changes in vision are contingent on the direction the eyes are moving. We recently showed that visual crowding can be used as a sensitive measure of changes in visual processing, resulting from involvement of the saccadic eye movement system. The present paper extends these results by examining the effect of smooth pursuit eye movements on the spatial extent of visual crowding—the area over which visual stimuli are integrated. We found systematic changes in crowding that depended on the direction of pursuit and the distance of stimuli from the pursuit target. Relative to when no eye movement was made, the spatial extent of crowding increased for objects located contraversive to the direction of pursuit at an eccentricity of approximately 3°. By contrast, crowding for objects located ipsiversive to the direction of pursuit remained unchanged. There was no change in crowding during smooth pursuit for objects located approximately 7° from the fovea. The increased size of the crowding zone for the contraversive direction may be related to the distance that the fovea lags behind the pursuit target during smooth eye movements. Overall, our results reveal that visual perception is altered dynamically according to the intended destination of oculomotor commands.

Harrison, W.J. Influences of Voluntary Eye Movements on Object Perception in Peripheral Vision. The University of Queensland (2013).
A key question in visual neuroscience is how our subjective experience of the visual world remains largely uninterrupted by the many eye movements we make, each of which abruptly displaces the retinal image. It has long been hypothesized that extra-retinal signals, generated by voluntary movements of the eye, alter our vision in such a way that facilitates visual perception across eye movements. However, it is unclear how these extra-retinal signals influence the integration of form information, giving rise to our ability to identify visual objects in spite of the fact that eye movements shift those objects on the retina. Thus, I used eye tracking and psychophysics to test the influence of eye movements on object perception in peripheral vision. In this thesis I present studies in which I show for the first time substantial changes in object identification that result from eye movement preparation and execution. The results from these studies show that our visual experience is not just constructed from the different images that hit the retina, but is influenced by eye movement signals that facilitate the perception of objects from one glance to the next.
Harrison, W.J., Mattingley, J.B. & Remington, R.W. Releasing crowding prior to a saccade requires more than “attention”: response to van Koningsbruggen and Buonocore. Journal of Neuroscience 33(28) (2013).
We thank van Koningsbruggen and Buonocore (2013) for their interest in our recent study, in which we investigated the relationship between eye movements and visual crowding, the phenomenon whereby a target object in peripheral vision is made difficult to recognize when closely flanked by distractor objects (Pelli and Tillman, 2008). We found that, just prior to a saccadic eye movement, the deleterious effects of crowding are reliably diminished at the saccade goal (Harrison et al., 2013a). As plausible explanations for such a pre-saccadic release from crowding, we discussed changes in the gain of visual neurons (e.g. Moore and Armstrong, 2003), a reduction in probabilistic positional uncertainty (Greenwood et al., 2009; van den Berg et al., 2012), and receptive-field shifts (Tolias et al., 2001) brought about by the known neurophysiological links between oculomotor and visual areas (for reviews, see Schall, 2002; Moore et al., 2003). Van Koningsbruggen and Buonocore have proposed two interesting alternative explanations for our results, and we respond to these here.
Harrison, W.J., Mattingley, J.B. & Remington, R.W. Eye movement targets are released from visual crowding. Journal of Neuroscience 33, 2927–2933 (2013).
Our ability to recognize objects in peripheral vision is impaired when other objects are nearby (Bouma, 1970). This phenomenon, known as crowding, is often linked to interactions in early visual processing that depend primarily on the retinal position of visual stimuli (Pelli, 2008; Pelli and Tillman, 2008). Here we tested a new account that suggests crowding is influenced by spatial information derived from an extraretinal signal involved in eye movement preparation. We had human observers execute eye movements to crowded targets and measured their ability to identify those targets just before the eyes began to move. Beginning ∼50 ms before a saccade toward a crowded object, we found that not only was there a dramatic reduction in the magnitude of crowding, but the spatial area within which crowding occurred was almost halved. These changes in crowding occurred despite no change in the retinal position of target or flanking stimuli. Contrary to the notion that crowding depends on retinal signals alone, our findings reveal an important role for eye movement signals. Eye movement preparation effectively enhances object discrimination in peripheral vision at the goal of the intended saccade. These presaccadic changes may enable enhanced recognition of visual objects in the periphery during active search of visually cluttered environments.
Harrison, W.J., Retell, J.D., Remington, R.W. & Mattingley, J.B. Visual crowding at a distance during predictive remapping. Current Biology 23, 1–6 (2013).
When we move our eyes, images of objects are displaced on the retina, yet the visual world appears stable. Oculomotor activity just prior to an eye movement contributes to perceptual stability by providing information about the predicted location of a relevant object on the retina following a saccade [1, 2]. It remains unclear, however, whether an object's features are represented at the remapped location. Here, we exploited the phenomenon of visual crowding [3] to show that presaccadic remapping preserves the elementary features of objects at their predicted postsaccadic locations. Observers executed an eye movement and identified a letter probe flashed just before the saccade. Flanking stimuli were flashed around the location that would be occupied by the probe immediately following the saccade. Despite being positioned in the opposite visual field to the probe, these flankers disrupted observers' ability to identify the probe. Crucially, this "remapped crowding" interference was stronger when the flankers were visually similar to the probe than when the flanker and probe stimuli were distinct. Our findings suggest that visual processing at remapped locations is featurally dependent, providing a mechanism for achieving perceptual continuity of objects across saccades.
Harrison, W.J., Mattingley, J.B. & Remington, R.W. Pre-saccadic shifts of visual attention. PLoS ONE 7, e45670 (2012).
The locations of visual objects to which we attend are initially mapped in a retinotopic frame of reference. Because each saccade results in a shift of images on the retina, however, the retinotopic mapping of spatial attention must be updated around the time of each eye movement. Mathôt and Theeuwes [1] recently demonstrated that a visual cue draws attention not only to the cue's current retinotopic location, but also to a location shifted in the direction of the saccade, the "future-field". Here we asked whether retinotopic and future-field locations have special status, or whether cue-related attention benefits exist between these locations. We measured responses to targets that appeared either at the retinotopic or future-field location of a brief, non-predictive visual cue, or at various intermediate locations between them. Attentional cues facilitated performance at both the retinotopic and future-field locations for cued relative to uncued targets, as expected. Critically, this cueing effect also occurred at intermediate locations. Our results, and those reported previously [1], imply a systematic bias of attention in the direction of the saccade, independent of any predictive remapping of attention that compensates for retinal displacements of objects across saccades [2].
Harrison, W.J., Thompson, M.B. & Sanderson, P.M. Multisensory integration with a head-mounted display: background visual motion and sound motion. Human Factors 52, 78–91 (2010).
OBJECTIVE: The aim of this study was to assess how background visual motion and the relative movement of sound affect a head-mounted display (HMD) wearer's performance at a task requiring integration of auditory and visual information. BACKGROUND: HMD users are often mobile. A commercially available speaker in a fixed location delivers auditory information affordably to the HMD user. However, previous research has shown that mobile HMD users perform poorly at tasks that require integration of visual and auditory information when sound comes from a free-field speaker. The specific cause of the poor task performance is unknown. METHOD: Participants counted audiovisual events that required integration of sounds delivered via a free-field speaker and vision on an HMD. Participants completed the task while either walking around a room, sitting in the room, or sitting inside a mobile room that allowed separate manipulation of background visual motion and speaker motion. RESULTS: Participants' accuracy at counting target audiovisual events was worse when participants were walking than when sitting at a desk