S-034
What do we talk with when we talk with ChatGPT? Approaches from neuroscience to generative Artificial Intelligence models
Bruno Bianchi1,2,3, Diego Fernández Slezak1,2, Juan E. Kamienkowski1,2,3
  1. Laboratorio de Inteligencia Artificial Aplicada, ICC, CONICET-UBA
  2. Departamento de Computación, FCEyN-UBA
  3. Maestría de Explotación de Datos y Descubrimiento del Conocimiento, FCEyN-UBA
Presenting Author:
Bruno Bianchi
bbianchi@dc.uba.ar
The rapid advance of Large Language Models (LLMs) in recent years has transformed human interaction with Artificial Intelligence, with these models now integrated into our daily lives across a multitude of tasks. Despite their widespread use, understanding the principles that govern their internal functioning and the emergence of complex behaviors remains a fundamental challenge. In this poster, we present a set of research lines carried out at the Applied Artificial Intelligence Laboratory, in which we explore the internal mechanisms of LLMs across different tasks (semantic disambiguation, personality changes, and responses to political and stereotypical biases, among others) from the perspective of neuroscience and experimental psychology. The main objective of these lines of research is to improve our understanding of how these models process, represent, and generate language, seeking parallels with biological cognitive systems. These investigations not only contribute to unraveling the "brain" of AI but also offer fertile ground for generating hypotheses about neural computation in biological systems, opening new avenues for the study of cognition and language.