AI and the Barrier of Meaning 2
From Santa Fe Institute Events Wiki
{| align="right" border="0" style="margin: 0px 0px 0px 10px; background: #f9f9f9; border: solid #aaa 1px;" | |||
|'''Navigation''' | |||
*[[AI_and_the_Barrier_of_Meaning_2_ |Home]] | |||
*[[AI_and_the_Barrier_of_Meaning_2_-_Agenda|Agenda]] | |||
|} | |||
<table> | |||
<tr> | |||
<td> | |||
[[File:the_fence_1985.64.31.jpg|400px]]<br> | |||
</td> | |||
<td> | |||
<big>'''Workshop Dates'''</big> | |||
'''April 24-26, 2023'''<br> | |||
<big>'''Organizers'''</big> | |||
<p> [https://melaniemitchell.me/ Melanie Mitchell], SFI </p> | |||
<p> [https://moseslab.cs.unm.edu/ Melanie Moses], UNM; SFI </p> | |||
<p> [http://tylermillhouse.com Tyler Millhouse], University of Arizona, SFI</p> | |||
</td> | |||
</tr> | |||
</table> | |||
<big>'''Description'''</big> | |||
This workshop will build on a 2018 SFI workshop entitled “Artificial Intelligence and the Barrier of Meaning”, which focused on several questions related to the ability of AI systems to “understand” or “extract meaning” in a humanlike way. In the four years since the original workshop, AI has been transformed by the rise of so-called large language models (LLMs). Many in the AI community are now convinced that humanlike language understanding by machines (as well as understanding of the physical and social situations described by language) has either already been achieved or will be achieved in the near future due to the scaling properties of LLMs. Others argue that LLMs cannot possess understanding, even in principle, because they have no experience or mental models of the world; their training in predicting words in vast collections of text has taught them the form of language but not its meaning.<br>
The key questions in the debate about understanding in LLMs are the following: (1) Is talk of understanding in such systems simply a category error, namely, that these models are not, and will never be, the kind of things that can understand? Or, conversely, (2) do these systems actually create something like the compressed “theories” that are central to human understanding, and, if so, does scaling these models create ever better theories? Or (3) if these systems do not create such compressed theories, can their unimaginably large systems of statistical correlations produce abilities that are functionally equivalent to human understanding, that is, “competence without comprehension”?
This workshop is funded by the National Science Foundation under Grant No. 2020103 ("Foundations of Intelligence in Natural and Artificial Systems") and by the Santa Fe Institute.