No, wait, there’s still something I just don’t understand. In a lot of your comments it seems you do a good job of analyzing the responses of ‘normal people’ to existential risks: they’re really more interested in lipstick, food, and sex, et cetera. And I’m with you there, evolution hasn’t hardwired us with a ‘care about low probabilities of catastrophe’ desire; the problem wasn’t really relevant in the EEA, relatively speaking.
But then it seems like you turn around and do this weird ‘ought-from-is’ operation from evolution and ‘normal people’ to how you should engage in epistemic rationality, and that’s where I completely lose you. It’s like you’re using two separate, but to me equally crazy, ought-from-is heuristics. The first goes something like ‘Evolution didn’t hard-code me with a desire to save the world, so I guess I don’t actually want to save the world then.’ And the second one is weirder and goes more like ‘Oh, well, evolution didn’t directly code good epistemology into my brain, it just gave me this comparatively horrible analogical reasoning module; I guess I don’t really want good epistemic rationality then’.
It ends up looking like you’re using some bizarre sister of the outside view that no one can relate to.
It’s like you’re perfectly describing the errors in most people’s thinking, but then at the end, right when you should say “Haha, those fools”, you instead completely swerve and endorse the errors, then righteously champion them for (evolutionary psychological?) reasons no one can understand.
Can you help me understand?
“‘Oh, well, evolution didn’t directly code good epistemology into my brain, it just gave me this comparatively horrible analogical reasoning module; I guess I don’t really want good epistemic rationality then’.”
...looks like it bears very little resemblance to anything I have ever said. I don’t know where you are getting it from.
Perhaps it is to do with the idea that not caring about THE END OF THE WORLD is normally a rational action for a typical gene-propagating agent.
Such agents should normally be concerned with having more babies than their neighbours do—and should not indulge in much paranoia about THE END OF THE WORLD. That is not sticking with poor quality cognition, it is often the correct thing to do for an agent with those aims.
If p(DOOM) gets really large, the correct strategy might change. If it turns into a collective action problem with punishment for free riders, the correct strategy might change. However, often THE END OF THE WORLD can be rationally perceived to be someone else’s problem. Expending resources fighting DOOM usually just means you get gradually squeezed out of the gene pool.
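To put rough numbers on that trade-off, here is a minimal toy sketch; the payoff function, the effectiveness parameter, and all the numbers are invented purely for illustration. One agent diverts a fraction of its effort from offspring to doom-prevention, while its neighbours spend everything on offspring.

```python
# Toy sketch (all functional forms and numbers invented for illustration):
# one agent diverts a fraction x of its effort from offspring to
# doom-prevention, while its neighbours spend everything on offspring.

def expected_descendants(x, p_doom, k=0.5):
    """Absolute expected descendants for the doom-fighting agent.

    x      -- fraction of effort spent fighting doom (0..1)
    p_doom -- chance doom happens with no intervention
    k      -- assumed effectiveness of one agent's effort at reducing p_doom
    """
    p_survive = 1.0 - p_doom * (1.0 - k * x)  # everyone survives, or nobody does
    return p_survive * (1.0 - x)              # leftover effort goes into offspring

def gene_pool_share(x):
    """Relative share of the surviving gene pool (doom kills everyone equally)."""
    return (1.0 - x) / ((1.0 - x) + 1.0)

if __name__ == "__main__":
    for p_doom in (0.01, 0.5, 0.99):
        best_x = max((i / 100 for i in range(101)),
                     key=lambda x: expected_descendants(x, p_doom))
        print(f"p_doom={p_doom:4.2f}: best doom effort = {best_x:.2f}, "
              f"gene-pool share at that effort = {gene_pool_share(best_x):.2f}")
```

In this toy setup the absolute-fitness optimum shifts away from zero doom effort only once p(DOOM) gets large, while any doom effort at all reduces the agent’s relative gene-pool share, which is where the free-rider problem comes in.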
The DOOM enthusiasts typically base their arguments on utilitarianism. A biologist’s perspective on that is that it is sometimes an attempt to signal unselfishness—albeit usually a rather unbelievable one—and sometimes an attempt to manipulate others into parting with their cash.
...looks like it bears very little resemblance to anything I have ever said. I don’t know where you are getting it from.
Looking back I think I read more into your comments than was really there; I apologize.
Such agents should normally be concerned with having more babies than their neighbours do—and should not indulge in much paranoia about THE END OF THE WORLD. That is not sticking with poor quality cognition, it is often the correct thing to do for an agent with those aims.
I agree here. The debate is over whether or not the current situation is normal.
However, often THE END OF THE WORLD can be rationally perceived to be someone else’s problem.
Tentatively agreed. Normally, even if nanotech’s gonna kill everyone, you’re not able to do much about it anyway. But I’m not sure why you bring up “Expending resources fighting DOOM usually just means you get gradually squeezed out of the gene pool” when most people aren’t trying at all to optimize the number of copies of their genes in the gene pool.
The DOOM enthusiasts typically base their arguments on utilitarianism. A biologist’s perspective on that is that it is sometimes an attempt to signal unselfishness—albeit usually a rather unbelievable one—and sometimes an attempt to manipulate others into parting with their cash.
Generally this is true, especially before science was around to make such meme-pushing low status. But it’s also very true of global warming paranoia, which is high status even among intellectuals for some reason. (I should probably try to figure out why.) I readily admit that certain values of the outside view will jump from that to ‘and so all possible DOOM-pushing groups are just trying to signal altruism or swindle people’—but rationality should help you win, and a sufficiently good rationalist should trust themselves to try to beat the outside view here.
So maybe instead of saying ‘poor epistemology’ I should say ‘an odd emphasis on the outside view, given that past a certain point of perceived rationality in themselves, people generally trust their own epistemology more than that’.
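As a toy illustration of what ‘beating the outside view’ would take here, with made-up numbers and nobody’s actual estimates: start from an outside-view base rate for doom claims being genuine, then see how strong the inside-view evidence has to be before the posterior stops being negligible.

```python
# Toy Bayes sketch (all numbers made up): how strong must inside-view
# evidence be before it overcomes an outside-view base rate saying
# "doom claims are almost always signalling or swindling"?

def posterior(base_rate, likelihood_ratio):
    """Posterior probability the claim is genuine.

    likelihood_ratio -- P(evidence | genuine) / P(evidence | bogus)
    """
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

if __name__ == "__main__":
    base_rate = 0.01  # outside view: ~1 in 100 past doom claims genuine (invented)
    for lr in (1, 10, 100, 1000):
        print(f"likelihood ratio {lr:5d} -> posterior {posterior(base_rate, lr):.3f}")
```

The point being only that the outside view sets the prior; it doesn’t forbid the update.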