AI and the Barrier of Meaning 2
From Santa Fe Institute Events Wiki
'''Workshop Dates'''<br>
'''April 24-26, 2023'''<br>
<big>'''Organizers'''</big>
{| style="float:left; margin-left:0.2em; border:1px solid #BBB"
|- style="font-size:87%"
| valign="top" |[[Image:MM2021.jpg|156px]]<br> [https://melaniemitchell.me/ Melanie Mitchell<br>(SFI)]
|}
{| style="float:left; margin-left:0.2em; border:1px solid #BBB"
|- style="font-size:87%"
| valign="top" |[[Image:MelanieMoses.jpg|156px]]<br> [https://moseslab.cs.unm.edu/ Melanie Moses<br>(UNM; SFI)]
|}
{| style="float:left; margin-left:0.2em; border:1px solid #BBB"
|- style="font-size:87%"
| valign="top" |[[Image:TylerMillhouse.jpg|156px]]<br> [http://tylermillhouse.com Tyler Millhouse<br>(SFI)]
|}
----
<big>'''Description'''</big>
This workshop will build on a 2018 SFI workshop entitled “Artificial Intelligence and the Barrier of Meaning”, which focused on several questions related to the ability of AI systems to “understand” or “extract meaning” in a humanlike way. In the four years since the original workshop, AI has been transformed by the rise of so-called large language models (LLMs). Many in the AI community are now convinced that humanlike language understanding by machines (as well as understanding of the physical and social situations described by language) has either already been achieved or will be achieved in the near future due to the scaling properties of LLMs. Others argue that LLMs cannot possess understanding, even in principle, because they have no experience or mental models of the world; their training in predicting words in vast collections of text has taught them the form of language but not its meaning.<br>
The key questions in the debate about understanding in LLMs are the following: (1) Is talk of understanding in such systems simply a category error, namely, that these models are not, and never will be, the kind of things that can understand? Or, conversely, (2) do these systems actually create something like the compressed “theories” that are central to human understanding, and, if so, does scaling these models create ever better theories? Or (3), if these systems do not create such compressed theories, can their unimaginably large systems of statistical correlations produce abilities that are functionally equivalent to human understanding, that is, “competence without comprehension”?