Ah, praise be to ya. This is a damn good start. Ideally the post would give scenarios (imagined or abstracted from cases on Less Wrong or wherever) showing how people do this kind of just-so story introspection and the various places at which you can get a map/territory confusion, or flinch and avoid your beliefs’ real weak points, or just not think about a tricky problem for 5 minutes. But we can always do that in another more specialized post at some later point, like Alicorn did with luminosity.
(I tentatively think people run an intuitive process for evaluating the expected benefits of questioning, or of thinking up alternative plausible causal chains connecting subgoals to the goals they consider more justified. But because values sort of feel like they’re determined by the map, it’s easy to treat the feeling that this is hard as an unresolvable property rather than something that can be fixed by spending more time looking at the territory. I wildly speculate that this intuitive calculation involves a lot of looking for concrete expected benefits of exploring valuespace, which is part of what makes Less Wrong cool: having a tribe that can support you and help you think things through convinces your brain that it’s an okay use of resources to ponder things like cryonics, existential risks, et cetera.)