But that doesn’t make the mind design (where peer pressure is allowed to actually disconnect the map from the sensors) any more sane :-)
On reflection, I think the claim I most want to make is something along the lines of “if you identify rationality!sane with ‘hook the world-model-generator up to reality’, then people will eventually realize rationality is insufficient for them and invent postrationality.” If you identify rationality!sane with winning, then it seems much less likely that people will eventually realize rationality is insufficient for them.
I think your objection is perhaps a bit status-quo-blinded: there are many better ways to design a brain that can throw "highly variable amounts of resources" at problems depending on how important they are, without nearly as much leakage between goals and beliefs (motivated cognition, confirmation bias, and so on).
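To gesture at what I mean, here is a minimal toy sketch (entirely made up for illustration; the function names and numbers are mine, not anything from the essays) of an agent whose goals control how much effort a question gets, while the belief update itself only ever sees the evidence:

```python
import random

def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Update a belief on one observation; desirability never enters here."""
    numerator = prior * p_obs_given_h
    return numerator / (numerator + (1 - prior) * p_obs_given_not_h)

def gather_evidence(n_samples, true_rate=0.7):
    """Simulate n noisy observations of the world (more samples = more resources spent)."""
    return [random.random() < true_rate for _ in range(n_samples)]

def form_belief(prior, stakes):
    """Goals set the evidence-gathering budget; evidence alone sets the belief."""
    n_samples = max(1, int(stakes * 10))          # high-stakes questions get more resources
    belief = prior
    for obs in gather_evidence(n_samples):
        belief = bayes_update(belief, 0.7 if obs else 0.3, 0.3 if obs else 0.7)
    return belief

print(form_belief(prior=0.5, stakes=0.1))  # low stakes: a rough, cheap belief
print(form_belief(prior=0.5, stakes=5.0))  # high stakes: many samples, a sharper belief
```

The point of the toy is only the separation of concerns: importance decides how hard to look, but what you want never nudges the posterior.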
I don’t want to defend the claim that humans are optimal. I do want to register some discomfort: I think the claims you put forward trivialize the other view and overstate your position.
They’re both part of the same territory, of course
Yes, but. We’re really discussing maps rather than territory, because we’re talking about things like “skies” and “compliments,” which, while they could be learned from the territory, are not atomic elements of it. I’d say they exist at higher ‘levels of abstraction,’ but I’m not sure that clarifies anything.
Thanks! I agree that this isn’t the best set-up for getting people interested in instrumental rationality, but remember that these essays were generated as a set of things to say before reading Rationality: From AI to Zombies; they’re my attempt to lay out the unstated background assumptions that motivate R:A-Z a little better before people pick it up. For that reason, the essays have a strong “why epistemics?” bent :-)
On reflection, I think the claim I most want to make is something along the lines of “if you identify rationality!sane with ‘hook the world-model-generator up to reality’, then people will eventually realize rationality is insufficient for them and invent postrationality.” If you identify rationality!sane with winning, then it seems much less likely that people will eventually realize rationality is insufficient for them.
I tend to be skeptical of claims that a rational agent cannot in principle do better with greater epistemic accuracy, in much the same way that I distrust claims that a rational agent cannot in principle do better with more information or greater computational power. To me, then, “hooking the world-model-generator up to reality” would equate with “winning”, or at least “making winning a hell of a lot easier”. So from my perspective, a rational agent should always seek greater epistemic accuracy.
Of course, if you’re human, this does not always apply, since humans are far from rational agents; but by much the same reasoning that leads to ethical injunctions, it seems like an excellent heuristic to follow.
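For what it’s worth, the “more information can’t hurt” intuition has a standard formalization; this is my gloss, assuming an agent that maximizes expected utility, a costless observation X, and an observation whose distribution doesn’t depend on the action chosen:

\[
\max_a \mathbb{E}[U \mid a] \;=\; \max_a \mathbb{E}_X\big[\mathbb{E}[U \mid a, X]\big] \;\le\; \mathbb{E}_X\big[\max_a \mathbb{E}[U \mid a, X]\big].
\]

The equality is the law of total expectation, and the inequality holds because the maximum of an average never exceeds the average of the pointwise maxima. In other words, the expected value of free information is non-negative for such an agent, which is the sense in which better epistemics can only make winning easier.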