2022 FoVea Travel and Networking Awardees

FoVea is pleased to announce the winners of our 2022 FoVea Travel and Networking Award.

The FoVea Travel and Networking Award was open to female members of the Vision Science Society (VSS) in pre-doctoral, post-doctoral, and pre-tenure faculty or research scientist positions, and was intended to cover the costs of attending the VSS meeting, including membership fees, conference registration fees, and travel expenses.

FoVea created this award as part of its mission to advance the visibility, impact, and success of women in vision science. A report from Cooper and Radonjić (2016) indicated that in 2015, the ratio of women to men in VSS was near equal at the pre-doctoral level (1:1.13), but decreased as career stage increased. This decline is symptomatic of forces that impede the professional development of female vision scientists. A key aspect of professional development is building a professional network to support scientific pursuits and to provide mentorship at critical junctures in one’s academic career. The FoVea Travel and Networking Award helps female vision scientists build their professional networks by encouraging them to meet with at least one Networking Target at the VSS meeting to discuss their research and consider potential collaborations.

Six awards were funded by an NSF grant, and one award was funded by the journal Visual Cognition.


Avi Aizenman

Avi is an Alexander von Humboldt Research Fellow in Germany, currently working in the lab of Professor Karl Gegenfurtner at the University of Giessen. She is investigating how eye and head movements are coordinated in virtual reality, as well as color categorization in virtual environments. Avi is broadly interested in understanding how gaze behavior explains our perception of the visual world. She received her PhD in Vision Science from UC Berkeley, where she worked with Professors Dennis Levi and Marty Banks on projects related to understanding how oculomotor deficits impact vision in amblyopia and quantifying differences in gaze behavior between natural and virtual environments. Prior to this, Avi worked as a research assistant in Dr. Jeremy Wolfe’s Visual Attention Lab at Harvard Medical School on projects investigating radiological search. She completed her Bachelor’s degree at Brandeis University, working with Professor Robert Sekuler on understanding audiovisual integration and the role of musicianship.


Jasmine Awad

Jasmine Awad is a fourth-year doctoral student in the Cognition & Perception Psychology program at the University of Washington. Her interests lie in leveraging her experience as a CODA (child of deaf adults) and her expertise in visual perception to improve and promote accessibility and equity in learning and in the development of technologies for differently abled populations. Under the mentorship of Dr. Ione Fine, her current research is designed to understand the performance gap in mathematical achievement between Deaf/Hard-of-hearing students and their hearing peers by exploring differences in cognitive processes (such as working memory and the visuospatial sketchpad) and potential visual attentional bottlenecks in sign language users during learning.


Angelica Godinez

Angie is a postdoctoral researcher working in Martin Rolfs’ Active Perception and Cognition lab at Humboldt-Universität zu Berlin and in the German excellence cluster Science of Intelligence. Prior to her postdoc, Angie received a BS in Psychology and an MS in Human Factors and Ergonomics from San Jose State University. During this time, she worked at NASA Ames Research Center with Leland Stone and Dorion Liston. For her PhD at the University of California, Berkeley, she worked with Dennis M. Levi on the impact of, recovery from, and possible adaptations to poor binocular vision. Broadly speaking, she is interested in how we process 3D visual information and use it to plan and execute manual motor movements.


Shui’Er Han

Shui’Er Han is a Postdoctoral Scholar in Prof. Duje Tadin’s lab at the University of Rochester, New York. A mentee of Professor David Alais, she completed her PhD at the University of Sydney in 2019. During her candidature, she studied the underlying mechanisms of continuous flash suppression (CFS), a potent form of dichoptic stimulation, using image processing and Fast Fourier Transform analyses as her mainstay approaches. In her first postdoc position, with Prof. Frans Verstraten, she applied her experience with CFS to short-term monocular deprivation, investigating its spatial specificity in Virtual Reality (VR). Her current position is funded by an international fellowship from Singapore’s A*STAR, and her work uses VR to study different perceptual phenomena (e.g., binocular rivalry) in naturalistic environments. She also employs VR to conduct perceptual training and to evaluate optical solutions (e.g., for peripheral displays and the vergence-accommodation conflict).


Simran Purokayastha

Simran Purokayastha is pursuing her PhD in Cognition and Perception at the Department of Psychology, New York University (NYU), under the supervision of Prof. Marisa Carrasco. She holds a BA (Honors) in Psychology from Christ University (Bangalore, India) and an MSc in Human Cognitive Neuropsychology from the University of Edinburgh (Edinburgh, UK). She has previously studied conscious processing using binocular rivalry; the role of saccadic inhibition in voluntary saccade reorienting; and human gaze strategies in a custom-designed change blindness task. Her research immediately before NYU focused on identifying potential biomarkers of Alzheimer’s disease using multimodal techniques – including PET, (f)MRI, EEG, and standard psychometric evaluations – as part of a large, ongoing longitudinal study. In her PhD research, she uses psychophysics to study robust asymmetries around the visual field and whether inherent and trained mechanisms (fixational eye movements and attention) can ameliorate the effects of these asymmetries. Simran is also passionate about helping create a more diverse STEM workforce that is inclusive of underrepresented voices, and she works actively in her department to facilitate this vision.


Sophia Robert

Sophia Robert received her B.A. (2018) in Biology and Philosophy from Williams College, focusing on the philosophical foundations of cognitive science and evolutionary biology. She came to love vision while at the NIH as a postbac, thanks to Dr. Leslie G. Ungerleider, under whom she studied curvature and animacy processing in macaques and humans. Sophia is currently a second-year graduate student in the Psychology Department at Carnegie Mellon University, pursuing her PhD in Cognitive Neuroscience under the guidance of Drs. Marlene Behrmann and Michael J. Tarr. She is generally interested in the interaction between lower-level visual features of objects and their mental and topographical representations in the visual cortices of neurotypical and atypical populations. Her current work uses a naturalistic viewing paradigm with fMRI to investigate differences in the functional organization of the brains of patients who have undergone extensive cortical resections in childhood, in order to understand the principles guiding cortical (re)organization.


Stephanie Shields

Stephanie Shields is a fourth-year neuroscience PhD candidate at The University of Texas at Austin, co-advised by Drs. Lawrence Cormack and Alex Huk. She received a BS in Psychology from Roanoke College in 2017, where she worked with Dr. David Nichols, and she received a 2017-18 Fulbright Study/Research Award that allowed her to work with Dr. Lutz Wiegrebe at Ludwig-Maximilians-Universität München. Stephanie is interested in using psychophysics and computational modeling to study how sensory information is processed in neural circuits to support perception, particularly stereoscopic perception. Her dissertation research focuses on the impact of environment-to-retinae geometry on the encoding and perception of 3D orientation. She is an NSF Graduate Research Fellow and an incoming member of VSS’s Student-Postdoc Advisory Committee.


You can view past recipients here.