AI and the Barrier of Meaning 2
From Santa Fe Institute Events Wiki
Workshop Dates
April 24-26, 2023
Organizers
Melanie Mitchell, SFI
Melanie Moses, UNM; SFI
Tyler Millhouse, University of Arizona, SFI
Description
This workshop will build on a 2018 SFI workshop entitled “Artificial Intelligence and the Barrier of Meaning”, which focused on several questions related to the ability of AI systems to “understand” or “extract meaning” in a humanlike way. In the years since the original workshop, AI has been transformed by the rise of so-called large language models (LLMs). Many in the AI community are now convinced that humanlike language understanding by machines (as well as understanding of the physical and social situations described by language) has either already been achieved or will be achieved in the near future due to the scaling properties of LLMs. Others argue that LLMs cannot possess understanding, even in principle, because they have no experience or mental models of the world; their training in predicting words in vast collections of text has taught them the form of language but not its meaning.
The key questions of the debate about understanding in LLMs are the following: (1) Is talk of understanding in such systems simply a category error, namely, that these models are not, and will never be, the kind of things that can understand? Or, conversely, (2) do these systems actually create something like the compressed “theories” that are central to human understanding, and, if so, does scaling these models create ever better theories? Or (3) if these systems do not create such compressed theories, can their unimaginably large systems of statistical correlations produce abilities that are functionally equivalent to human understanding, that is, “competence without comprehension”?
This workshop is funded by the National Science Foundation under Grant No. 2020103 ("Foundations of Intelligence in Natural and Artificial Systems") and by the Santa Fe Institute.