Embodied, Situated, and Grounded Intelligence: Implications for AI
Workshop Dates
April 12th - April 15th, 2022
Organizers
Melanie Mitchell (SFI)
Melanie Moses (UNM; SFI)
Tyler Millhouse (SFI)
Description
This workshop is part of a series of meetings at SFI on the foundations of intelligence, a project sponsored by the National Science Foundation. More information about the project can be found at http://intelligence.santafe.edu.
Most AI research tacitly assumes that general intelligence can be achieved without taking into account any particulars of an intelligent agent’s body or its physical, social, or cultural environment. However, many cognitive science researchers have argued that intelligence in humans and other animals is embodied, situated, and/or grounded in the physical, social, and cultural worlds that an agent inhabits, and that creating a “pure intelligence” separate from these worlds is not possible. This workshop will examine the arguments for and critiques of the view that cognition is fundamentally embodied, situated, and grounded. Through talks and discussions, we will explore questions including (but not limited to) the following:
• What do terms such as “embodiment,” “situated intelligence,” and “grounded cognition” mean from the perspectives of neuroscience, psychology, philosophy, and AI, and how do these meanings differ from one another?
• To what degree are our concepts and language embodied, situated, and grounded? Could machine intelligence be developed without such embodiment and grounding?
• Can perspectives from embodied and situated cognition help us make sense of and probe the behavior of today’s large language models (such as GPT-3) and other modern AI systems?
• How can insights from cognitive science inform important questions in AI, such as how to assess a system’s “understanding” and generalization abilities?
• How can these ideas help us make progress on socially beneficial AI, and on the often-discussed (but ill-defined) notion of “AI alignment with human values”?