D-042
Mapping neural activity on natural tasks: hybrid search and driving
Joaquin Gonzalez1,2, Juan Kamienkowski1,3,4, Matias Ison5
  1. Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación (Universidad de Buenos Aires – Consejo Nacional de Investigaciones Científicas y Técnicas), (C1428EGA) Buenos Aires, Argentina
  2. Departamento de Física (Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires), (C1428EGA) Buenos Aires, Argentina
  3. Departamento de Computación (Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires), (C1428EGA) Buenos Aires, Argentina
  4. Maestría de Explotación de Datos y Descubrimiento del Conocimiento (Universidad de Buenos Aires), (C1428EGA) Buenos Aires, Argentina
  5. School of Psychology, University of Nottingham, Nottingham NG7 2RD, United Kingdom
Presenting Author:
Joaquin Gonzalez
joaquin.gonzalez6693@gmail.com
In everyday life, the brain often performs complex tasks without our noticing. Searching for several items at once (e.g., wallet and keys), known as hybrid search, requires retrieving multiple targets from memory throughout the visual search. This ability is also critical for using landmarks during navigation, a process that likewise involves dividing attention across multiple objects, for instance while driving. Here, we combined magnetoencephalography (MEG) and eye tracking to investigate the oscillatory and evoked dynamics of brain activity that support hybrid search and driving. Twenty-one participants performed a free-viewing, memory-guided search task in complex scenes. Time-frequency analyses revealed distinct signatures during encoding, retention, and search. Posterior alpha power decreased with memory load during encoding and retention, reflecting increased perceptual and mnemonic demands. During search, frontoparietal beta activity scaled with load, suggesting increased cognitive control. Source reconstruction revealed an early visual evoked lambda response in V1, followed by a distributed P3m component, mainly in the right inferior parietal lobe, that distinguished target from distractor fixations. A similar lambda response was observed in a second dataset, in which nine participants performed a divided-attention task in a driving simulator. Together, these results begin to reveal how memory, attention, and visual processing dynamically interact during active vision.