The little pockets of cognitive science that I’ve geeked out about (usually in the predictive processing camp) have featured researchers who are either quite surprised by, or go to great lengths to double underline, the importance of language and culture in our embodied / extended / enacted cognition.
A simple version of the story I have in my head is this: We have physical brains thanks to evolution, and then, by being embodied predictive perception/action loops out in the world, we started transforming our world into affordances for new perceptions and actions. Things took off when language became a thing: we could transmit categories, affordances, and all kinds of other highly abstract things in ways that are surprisingly efficient for brains and offer really high leverage for agents out in the world.
So I tend towards viewing our intelligence as resting on both our biological hardware and on the cultural memeplexes we’ve created, curated, and make use of pretty naturally, rather than just on our physical hardware. My gut sense (which I’m up for updates on) is that for the more abstract cognitive stuff we do, a decently high percentage of the fuel comes from the language+culture artifact we’ve collectively made and nurtured.
One of my thoughts here (leaning heavily on metaphor to point at an idea, rather than making a solid concrete claim): maybe that makes arguments about the efficiency of the human brain less relevant?
If you can run the abstract cultural code on different hardware, then looking at the tradeoffs involved could be really interesting, but I’m not sure what it tells you about scaling floors or ceilings. I’d be particularly interested in whether running that cultural code on a different substrate opens the door to glitches that are hard to find or patch, or to other surprises.
The shoggoth meme that has been going around also feels like it applies. If an AI can run our cultural code, that gets it a good chunk of the way to effectively putting on a human face for a time. Maybe it actually has a human face, maybe it’s just wearing a mask. So far I haven’t seen arguments that tilt me away from thinking of it as a mask.
For me, it doesn’t seem to imply that LLMs are or will remain a kind of “child of human minds”. As far as I can tell, almost all we know is how well they can wear the mask. I don’t see how it follows that such a system would necessarily keep thinking and behaving in human-like ways if it were scaled up, or if it were given enough agency to reach for more resources.
I guess this is my current interpretation of “alien mind space”. Maybe lots of really surprising things can run our cultural code, in the same way that people have ported the game Doom to all kinds of surprising substrates, ones with weird overlaps and non-overlaps with the hardware the game originally ran on.
Actually, I think the shoggoth-mask framing is somewhat correct, but it also applies to humans. We don’t have a single fixed personality; we are also mask-wearers.
The argument that pushes me the most away from the shoggoth-mask analogy is its implication that there is a single coherent actor behind the mask. But if you can avoid that mental failure mode, I think the shoggoth-mask analogy is basically correct.