But from the perspective of a mind designer, it’s bonkers. The world-model-generator isn’t hooked up directly to reality!
I can’t help but feel that this is a “lies to children” approach; it makes great sense to me why the module that determines other people’s intentions takes input from conscious control. If nothing else, it allows the brain to throw highly variable amounts of resources at the problem—if Master Control thinks it’s trivial, the problem can be dropped, but if it thinks it’s worthwhile, the brain can spend hours worrying about the problem and calling up memories and forming plans for how to acquire additional information. (Who to ask for advice is itself a political play, and should be handed over to the political modules for contemplation.)
That is, it seems to me that physical reality (the color of the sky) and social reality (whether the coworker is complimenting or insulting you) are different classes of reality that must be perceived and manipulated in different ways.
The valuable rationality lesson seems to be acknowledging that social reality, while it may seem the same sort of real as physical reality, is a different kind. I’m reading you, though, as claiming that they’re treated differently, when they should be treated the same way.
I agree that there’s some simplification going on. I also think, though, that your objection is perhaps a bit status-quo-blinded—there are many better ways to design a brain that can throw “highly variable amounts of resources” at problems depending on how important they are, while having far less leakage between goals and beliefs (such as motivated cognition, confirmation bias, etc.). For example, you could imagine a mind where the Master Planner gets to ask for more resolution on various parts of the map, but doesn’t get to choose how the get-more-resolution process works, or you could imagine various other minds in the space between human and full consequentialist gene-propagator.
In other words, there is a wide chasm between “some process in the brain regulates how much processing power goes toward building various parts of the map” and “the planner that is trying to achieve certain goals gets to decide how certain parts of the map are filled in.”
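To make that gap concrete, here is a minimal toy sketch in Python (my own illustration; the names `Planner`, `WorldModel`, and the averaging rule are all hypothetical, not anything from the original post). The planner chooses where to spend map-building effort, but the rule that turns sensor data into beliefs is fixed and never sees the planner’s goals:

```python
import random

# Toy sketch (hypothetical names, not from the original post): the Planner
# decides WHERE to spend map-building effort, but the belief-forming rule
# never takes the planner's goals as an input.

def form_belief(readings):
    """Fixed belief-forming rule: average the sensor readings.
    The planner has no way to pass in a preferred answer."""
    return sum(readings) / len(readings)

class WorldModel:
    def __init__(self):
        self.map = {}  # region -> current belief about that region

    def refine(self, region, readings):
        # Beliefs are built only from data; goals never enter here.
        self.map[region] = form_belief(readings)

class Planner:
    def __init__(self, world_model, sensor):
        self.world_model = world_model
        self.sensor = sensor  # callable: (region, effort) -> list of readings

    def request_resolution(self, region, effort):
        # The planner decides which region matters and how much effort to spend...
        readings = self.sensor(region, effort)
        # ...but not how those readings turn into beliefs.
        self.world_model.refine(region, readings)

# Usage: a noisy sensor; more effort means more samples and a sharper estimate.
sensor = lambda region, effort: [10.0 + random.gauss(0, 1) for _ in range(effort)]
world = WorldModel()
Planner(world, sensor).request_resolution("sky color", effort=100)
print(world.map)
```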
it seems to me that physical reality (the color of the sky) and social reality (whether the coworker is complimenting or insulting you) are different classes of reality that must be perceived and manipulated in different ways.
They’re both part of the same territory, of course :-) I agree that social monkeys have to treat these things differently, especially in contexts where it’s easy to be killed for having unpopular beliefs in cache (even if they’re accurate), and again, I can see reasons why evolution took the low road instead of successfully building a full consequentialist. But that doesn’t make the mind design (where peer pressure is allowed to actually disconnect the map from the sensors) any more sane :-)
But that doesn’t make the mind design (where peer pressure is allowed to actually disconnect the map from the sensors) any more sane :-)
On reflection, I think the claim I most want to make is something along the lines of “if you identify rationality!sane with ‘hook the world-model-generator up to reality’, then people will eventually realize rationality is insufficient for them and invent postrationality.” If you identify rationality!sane with winning, then it seems much less likely that people will eventually realize rationality is insufficient for them.
I think your objection is perhaps a bit status-quo-blinded—there are many better ways to design a brain that can throw “highly variable amounts of resources” at problems depending on how important they are, while having far less leakage between goals and beliefs (such as motivated cognition, confirmation bias, etc.).
I don’t want to defend the claim that humans are optimal. I want to register discomfort that I think the claims you put forward trivialize the other view and overstate your position.
They’re both part of the same territory, of course
Yes, but. We’re really discussing maps, rather than territory, because we’re talking about things like “skies” and “compliments,” which, while they could be learned from the territory, are not atomic elements of the territory. I’d say they exist at higher ‘levels of abstraction,’ but I’m not sure that clarifies anything.
Thanks! I agree that this isn’t the best set-up for getting people interested in instrumental rationality, but remember that these essays were generated as a set of things to say before reading Rationality: AI to Zombies—they’re my attempt to convey the unstated background assumptions that motivate R:A-Z a little better before people pick it up. For that reason, the essays have a strong “why epistemics?” bent :-)
On reflection, I think the claim I most want to make is something along the lines of “if you identify rationality!sane with ‘hook the world-model-generator up to reality’, then people will eventually realize rationality is insufficient for them and invent postrationality.” If you identify rationality!sane with winning, then it seems much less likely that people will eventually realize rationality is insufficient for them.
I tend to be skeptical of claims that a rational agent cannot in principle do better with greater epistemic accuracy, much in the same way that I distrust claims that a rational agent cannot in principle do better with more information or greater computational power. To me, then, “hooking the world-model-generator up to reality” would equate with “winning”, or at least “making winning a hell of a lot easier”. So from my perspective, a rational agent should always seek greater epistemic accuracy.
Of course, if you’re human, this does not always apply since humans are far from rational agents, but by much the same reasoning as leads to ethical injunctions, it seems like an excellent heuristic to follow.
The thing is, some people are really adept at perceiving (and manipulating) “social reality”, as you put it. (Think politicians and salesmen, to name but a few.) Furthermore, this perception of “social reality” appears to occur in large part through “intuition”; things like body language, tone of voice, etc. all play a role, and these things are more or less evaluated unconsciously. It’s not just the really adept people that do this, either; all neurotypical people perform this sort of unconscious evaluation to some extent. In that respect, at least, the way we perceive “social reality” is remarkably similar to the way we perceive “physical reality”. That makes sense, too; the important tasks (from an evolutionary perspective) need to be automated in your brain, but the less important ones (like doing math, for example) require conscious control. So in my opinion, reading social cues would be an example of (in So8res’ terminology) “leaving the world-model-generator hooked up to (social) reality”.
However, we do in fact have a control group (or would that be experimental group?) for what happens when you attach the “world-model generator” to conscious thought: people with Asperger’s Syndrome, for instance, are far less capable of picking up social cues and reading the general flow of the situation. (Writing this as someone who has Asperger’s Syndrome, I should note that I’m speaking largely from personal experience here.) For them, the art of reading social situations needs to be learned pretty much from scratch, all at the level of conscious introspection. They don’t have the benefit of automated, unconscious social evaluation software that just activates; instead, every decision has to be “calculated”, so to speak. You’ll note that the results are quite telling: people with Asperger’s do significantly worse in day-to-day social interactions than neurotypical people, even after they’ve been “learning” how to navigate social interactions for quite some time.
In short, manual control is hard to wield, and we should be wary of letting our models be influenced by it. (There’s also all the biases that humans suffer from that make it even more difficult to build accurate world-models.) Unfortunately, there’s no real way to switch everything to “unconscious mode”, so instead, we should strive to be rational so we can build the best models we can with our available information. That, I think, is So8res’ point in this post. (If I’m mistaken, he should feel free to correct me.)
In that respect, at least, the way we perceive “social reality” is remarkably similar to the way we perceive “physical reality”.
I agree that a neurotypical sees social cues on the perceptual level in much the same way as they recognize some photons as coming from “the sky” on the perceptual level. I think my complaint is that the question of “is my coworker complimenting or insulting me?” is operating on a higher level of abstraction, and has a strategic and tactical component. Even if your coworker has cued their statement as a compliment, that may in fact be evidence for it being an insult—and in order to determine that, you need a detailed model of your coworker and possibly conscious deliberation. Even if your coworker has genuinely intended a compliment, you may be better served by perceiving it as an insult.
To give a somewhat benign example, if the coworker cued something ambiguously positive and you infer that they wanted to compliment you, you might want to communicate to them that they would be better off cuing something unambiguously positive if they want to be perceived as complimenting others. (Less benign examples of deliberate misinterpretation probably suggest themselves.)