I agree that there’s some simplification going on. Also, though, I think your objection is perhaps a bit status quo blinded—there are many better ways to come up with a brain design that can throw “highly variable amounts of resources” at problems depending on how important they are, which doesn’t have nearly as much leakage between goals and beliefs (such as motivated cognition, confirmation bias, etc.). For example, you could imagine a mind where the Master Planner gets to ask for more resolution on various parts of the map, but doesn’t get to choose how the get-more-resolution-process works, or you can imagine various other minds in the space between human and full consequentialist gene-propagator.
In other words, there are wide chasms between “some process in the brain regulates how much processing power goes towards building various different parts of the map” and “the planner that is trying to achieve certain goals gets to decide how certain parts of the map are filled in.”
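To make that chasm concrete, here is a minimal sketch (my own illustration, not a design anyone in this thread is proposing) of the first kind of mind: a planner that can direct where resolution-building effort goes, but has no channel for writing beliefs. The `Planner` and `WorldModel` names and the noisy-sensor setup are all hypothetical.

```python
# A minimal sketch, assuming a toy "map" of named regions and a noisy sensor:
# the planner chooses *where* samples are spent, but estimates come only from
# the sensor -- goals have no write access to the map.
import random

class WorldModel:
    """Builds the map. Resolution requests change where effort goes,
    never what the sensors are allowed to report."""

    def __init__(self, sensor):
        self._sensor = sensor      # read-only access to the territory
        self.map = {}              # region -> (estimate, samples used so far)

    def refine(self, region, extra_samples):
        # More samples just means a less noisy estimate for that region.
        readings = [self._sensor(region) for _ in range(extra_samples)]
        prev_est, prev_n = self.map.get(region, (0.0, 0))
        n = prev_n + extra_samples
        est = (prev_est * prev_n + sum(readings)) / n
        self.map[region] = (est, n)
        return est

class Planner:
    """Decides which regions matter and requests resolution there.
    It reads estimates; it has no way to set them."""

    def __init__(self, world_model, budget):
        self.wm = world_model
        self.budget = budget

    def allocate(self, priorities):
        # priorities: region -> importance weight (this is where goals live)
        total = sum(priorities.values())
        for region, weight in priorities.items():
            self.wm.refine(region, max(1, int(self.budget * weight / total)))
        return dict(self.wm.map)

# Hypothetical usage: noisy sensors over two regions of the kind discussed below.
truth = {"sky_color": 0.9, "coworker_intent": 0.3}
sensor = lambda region: truth[region] + random.gauss(0, 0.2)
print(Planner(WorldModel(sensor), budget=100).allocate(
    {"sky_color": 0.2, "coworker_intent": 0.8}))
```

The point is just that `allocate` can shift samples toward goal-relevant regions without the goals ever touching the estimates themselves, which is the sense in which the two designs are far apart.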
It seems to me that physical reality (the color of the sky) and social reality (whether or not the coworker is complimenting or insulting you) are different classes of reality, which must be perceived and manipulated in different ways.
They’re both part of the same territory, of course :-) I agree that social monkeys have to treat these things differently, especially in contexts where it’s easy to be killed for having unpopular beliefs in cache (even if they’re accurate), and again, I can see reasons why evolution took the low road instead of successfully building a full consequentialist. But that doesn’t make the mind design (where peer pressure is allowed to actually disconnect the map from the sensors) any more sane :-)
But that doesn’t make the mind design (where peer pressure is allowed to actually disconnect the map from the sensors) any more sane :-)
On reflection, I think the claim I most want to make is something along the lines of “if you identify rationality!sane with ‘hook the world-model-generator up to reality’, then people will eventually realize rationality is insufficient for them and invent postrationality.” If you identify rationality!sane with winning, then it seems much less likely that people will eventually realize rationality is insufficient for them.
I think your objection is perhaps a bit status quo blinded—there are many better ways to come up with a brain design that can throw “highly variable amounts of resources” at problems depending on how important they are, which doesn’t have nearly as much leakage between goals and beliefs (such as motivated cognition, confirmation bias, etc.).
I don’t want to defend the claim that humans are optimal. I want to register discomfort: I think the claims you put forward trivialize the other view and overstate your position.
They’re both part of the same territory, of course
Yes, but. We’re really discussing maps, rather than territory, because we’re talking about things like “skies” and “compliments,” which, while they could be learned from the territory, are not atomic elements of the territory. I’d say they exist at higher ‘levels of abstraction,’ but I’m not sure that clarifies anything.
Thanks! I agree that this isn’t the best set-up for getting people interested in instrumental rationality, but remember that these essays were generated as a set of things to say before reading Rationality: AI to Zombies—they’re my attempt to convey the unstated background assumptions that motivate R:A-Z a little better before people pick it up. For that reason, the essays have a strong “why epistemics?” bent :-)
On reflection, I think the claim I most want to make is something along the lines of “if you identify rationality!sane with ‘hook the world-model-generator up to reality’, then people will eventually realize rationality is insufficient for them and invent postrationality.” If you identify rationality!sane with winning, then it seems much less likely that people will eventually realize rationality is insufficient for them.
I tend to be skeptical of claims that a rational agent cannot in principle do better with greater epistemic accuracy, in much the same way that I distrust claims that a rational agent cannot in principle do better with more information or greater computational power. To me, then, “hooking the world-model-generator up to reality” would equate with “winning”, or at least “making winning a hell of a lot easier”. So from my perspective, a rational agent should always seek greater epistemic accuracy.
Of course, if you’re human, this does not always apply since humans are far from rational agents, but by much the same reasoning as leads to ethical injunctions, it seems like an excellent heuristic to follow.
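As a toy illustration of that point (mine, with made-up numbers; nothing in the thread specifies this setup): two expected-utility maximizers face the same stream of bets, one acting on calibrated probabilities and one on a systematically distorted map. The distorted agent accepts negative-expected-value bets that the calibrated agent declines.

```python
# A toy sketch, assuming even-money bets on coins of varying bias: compare the
# average payoff of an agent whose believed probability matches the true one
# against an agent whose map is systematically overconfident.
import random

random.seed(0)

def bet_payoff(believed_p, true_p, stake=1.0):
    """Take the bet iff expected value under your *beliefs* is positive
    (win +stake, lose -stake); settle it against *reality*."""
    if believed_p * stake - (1 - believed_p) * stake <= 0:
        return 0.0                          # decline the bet
    return stake if random.random() < true_p else -stake

def average_payoff(belief_fn, trials=100_000):
    total = 0.0
    for _ in range(trials):
        true_p = random.uniform(0.2, 0.8)   # each round, a new coin
        total += bet_payoff(belief_fn(true_p), true_p)
    return total / trials

accurate  = average_payoff(lambda p: p)                    # map matches territory
distorted = average_payoff(lambda p: min(1.0, p + 0.25))   # overconfident map
print(f"accurate beliefs:  {accurate:+.3f} per bet")
print(f"distorted beliefs: {distorted:+.3f} per bet")
```

The distorted agent’s average payoff comes out visibly lower, because its map leads it to take bets the calibrated agent correctly passes on; that is the sense in which I’d expect greater epistemic accuracy to make winning easier rather than harder.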