I remember a character in Asimov’s books saying something to the effect of
It took me 10 years to realize I had those powers of telepathy, and 10 more years to realize that other people don’t have them.
and that quote has really stuck with me, and keeps striking me as true about many mindthings (object-level beliefs, ontologies, ways-to-use-one’s-brain, etc.).
For so many complicated problems (including technical problems), “what is the correct answer?” is not as difficult to figure out as “okay, now that I have the correct answer: how the hell do other people’s wrong answers mismatch mine? what is the inferential gap even made of? what is even their model of the problem? what the heck is going on inside other people’s minds???”
Answers to technical questions, once you have them, tend to be simple and compress easily with the rest of your ontology. But not models of other people’s minds. People’s minds are actually extremely large things that you fundamentally can’t fully model and so you’re often doomed to confusion about them. You’re forced to fill in the details with projection, and that’s often wrong because there’s so much more diversity in human minds than we imagine.
The most complex software engineering projects in the world are absurdly tiny in complexity compared to a random human mind.
People’s minds are actually extremely large things that you fundamentally can’t fully model
Is this “fundamentally” as in “because you, the reader, are also a bounded human, like them”? Or “fundamentally” as in (something more fundamental than that)?
The first one. Alice fundamentally can’t fully model Bob because Bob’s brain is as large as Alice’s, so she can’t fit it all inside her own brain without simply becoming Bob.
If timelines weren’t so short, brain-computer-based telepathy would unironically be a big help for alignment.
(If a group had the money/talent to “hedge” on longer timelines by allocating some resources to that… well, instead of a hivemind, they’d first need to run through the relatively lower-hanging fruit. Actually, maybe they should work on delaying capabilities research, or funding more hardcore alignment themselves, or...)
Somewhat related: What Universal Human Experiences Are You Missing Without Realizing It? (and its spinoff: Status-Regulating Emotions)
I relate to this quite a bit ;-;
I should note that it’s not entirely known whether quining is applicable for minds.
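(For readers who haven’t run into the term: a quine is a program that contains and reproduces a complete description of itself while being no bigger than that description, which is why self-reference doesn’t obviously rule out a mind modeling something its own size. A minimal Python quine, included purely to illustrate the term rather than anyone’s argument here:)

```python
# The two lines below form a quine: running them prints exactly those two
# lines. The string s is a complete description of the program, yet the
# whole program is only two lines long.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Whether that kind of self-referential compression carries over from programs to minds is exactly the open question being flagged above.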