Symbol Grounding

Last edit: Dec 30, 2024, 9:52 AM by Dakara

Symbol grounding is a fundamental challenge in AI research: how can a machine connect its symbolic representations to real-world referents, so that the symbols it manipulates carry meaning acquired through its interactions with the environment? In other words, it asks how machines can represent and understand the meaning of objects, concepts, and events in the world. Without the ability to ground its symbols in the real world, a machine cannot acquire the rich, structured meanings needed for intelligent behavior such as language processing, image recognition, and decision-making.
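
As a purely illustrative sketch (all names and data below are hypothetical, and real proposals are far richer), one way to picture grounding is as a learned mapping from symbols to perceptual prototypes, rather than to yet more symbols:

```python
# Toy illustration of symbol grounding: symbols are associated with
# prototypes learned from "sensory" observations (here, RGB triples).
# All data and names are hypothetical, chosen only for illustration.

from statistics import mean

# Simulated sensory experience: observations paired with symbols,
# as a learner might receive from a labeled environment.
experience = {
    "red":   [(250, 10, 20), (240, 30, 25), (255, 5, 15)],
    "green": [(20, 240, 30), (35, 250, 25), (10, 235, 40)],
    "blue":  [(15, 25, 245), (30, 20, 250), (5, 35, 240)],
}

# "Ground" each symbol as the mean of its observed referents.
prototypes = {
    symbol: tuple(mean(channel) for channel in zip(*observations))
    for symbol, observations in experience.items()
}

def ground(observation):
    """Map a new observation to the symbol whose grounding it best matches."""
    def distance(proto):
        return sum((o - p) ** 2 for o, p in zip(observation, proto))
    return min(prototypes, key=lambda s: distance(prototypes[s]))

print(ground((245, 18, 22)))  # -> "red": the symbol bottoms out in perception,
                              #    not in a dictionary of other symbols
```

The point of the toy example is only that the symbol's "meaning" is anchored in observations rather than defined in terms of other symbols; the posts below debate whether and how anything like this can scale to genuine understanding.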

Related Pages: Truth, Semantics, & Meaning; Philosophy of Language

Has the Symbol Grounding Problem just gone away?

RussellThor · May 4, 2023, 7:46 AM
12 points
3 comments · 1 min read · LW link

What does GPT-3 understand? Symbol grounding and Chinese rooms

Stuart_Armstrong · Aug 3, 2021, 1:14 PM
40 points
15 comments · 12 min read · LW link

Human instincts, symbol grounding, and the blank-slate neocortex

Steven Byrnes · Oct 2, 2019, 12:06 PM
62 points
23 comments · 11 min read · LW link

Fundamental Uncertainty: Chapter 7 - Why is truth useful?

Gordon Seidoh Worley · Apr 30, 2023, 4:48 PM
10 points
3 comments · 10 min read · LW link

Classical symbol grounding and causal graphs

Stuart_Armstrong · Oct 14, 2021, 6:04 PM
22 points
2 comments · 5 min read · LW link

Syntax, semantics, and symbol grounding, simplified

Stuart_Armstrong · Nov 23, 2020, 4:12 PM
30 points
4 comments · 9 min read · LW link

A test for symbol grounding methods: true zero-sum games

Stuart_Armstrong · Nov 26, 2019, 2:15 PM
22 points
2 comments · 2 min read · LW link

DALL-E does symbol grounding

p.b. · Jan 17, 2021, 9:20 PM
6 points
0 comments · 1 min read · LW link

Thoughts on the frame problem and moral symbol grounding

Stuart_Armstrong · Mar 11, 2013, 4:18 PM
3 points
9 comments · 2 min read · LW link

Connecting the good regulator theorem with semantics and symbol grounding

Stuart_Armstrong · Mar 4, 2021, 2:35 PM
13 points
0 comments · 2 min read · LW link

Early Thoughts on Ontology/Grounding Problems

johnswentworth · Nov 14, 2020, 11:19 PM
32 points
5 comments · 5 min read · LW link

[Intro to brain-like-AGI safety] 13. Symbol grounding & human social instincts

Steven Byrnes · Apr 27, 2022, 1:30 PM
73 points
15 comments · 15 min read · LW link

Teleosemantics!

abramdemski · Feb 23, 2023, 11:26 PM
82 points
27 comments · 6 min read · LW link · 1 review

Conceptual coherence for concrete categories in humans and LLMs

Bill Benzon · Dec 9, 2023, 11:49 PM
13 points
1 comment · 2 min read · LW link

Causal representation learning as a technique to prevent goal misgeneralization

PabloAMC · Jan 4, 2023, 12:07 AM
21 points
0 comments · 8 min read · LW link

Miriam Yevick on why both symbols and networks are necessary for artificial minds

Bill Benzon · Jun 6, 2022, 8:34 AM
1 point
0 comments · 4 min read · LW link

“What the hell is a representation, anyway?” | Clarifying AI interpretability with tools from philosophy of cognitive science | Part 1: Vehicles vs. contents

IwanWilliams · Jun 9, 2024, 2:19 PM
9 points
1 comment · 4 min read · LW link

Boundary Conditions: A Solution to the Symbol Grounding Problem, and a Warning

ISC · Apr 8, 2025, 6:42 AM
1 point
0 comments · 5 min read · LW link

Towards building blocks of ontologies

Feb 8, 2025, 4:03 PM
27 points
0 comments · 26 min read · LW link

Representational Tethers: Tying AI Latents To Human Ones

Paul Bricman · Sep 16, 2022, 2:45 PM
30 points
0 comments · 16 min read · LW link

Aligning an H-JEPA agent via training on the outputs of an LLM-based “exemplary actor”

Roman Leventov · May 29, 2023, 11:08 AM
12 points
10 comments · 30 min read · LW link

An LLM-based “exemplary actor”

Roman Leventov · May 29, 2023, 11:12 AM
16 points
0 comments · 12 min read · LW link

[Linkpost] Large language models converge toward human-like concept organization

Bogdan Ionut Cirstea · Sep 2, 2023, 6:00 AM
22 points
1 comment · 1 min read · LW link

Steven Harnad: Symbol grounding and the structure of dictionaries

Bill Benzon · Sep 2, 2023, 12:28 PM
5 points
3 comments · 2 min read · LW link

The probability that Artificial General Intelligence will be developed by 2043 is extremely low.

cveres · Oct 6, 2022, 6:05 PM
−13 points
8 comments · 1 min read · LW link