If we believe that global warming of exactly +2 C is going to happen within 100 years with 99.9% probability, the most reasonable response is to do geoengineering to counteract those +2 C.
One of the primary reasons for choosing a different strategy is that there’s a lot of uncertainty involved.
If you grant a 20% chance that global warming isn’t happening, that geoengineering project would have the potential to mess a lot of things up.
If the people in charge follow the epistemology that you propose, I think there’s a good chance that humanity won’t survive this century, because someone will think that taking a 1% chance of destroying humanity isn’t a problem.
Any single job application I send out has a more than 80% chance of being rejected. If I followed your practical advice I wouldn’t send out any applications. That’s pretty stupid “practical advice”.
Elon Musk says that when he started SpaceX he thought there was a 10% chance that it would become a successful company. If he had followed your advice he wouldn’t have started that valuable company, and the same is likely true for many other founders who have a realistic view of their chances of success.
One of the core reasons why Eliezer wrote the Sequences is to promote the idea that low-probability, high-impact events matter a great deal and that, as a result, we should invest money in X-risk prevention.
So you cannot divide people up into people who care about what reality actually is and people who care about other things like society’s approval—everyone cares a bit about both.
While that’s true, there’s a lot of value in having norms of discussion on LW that uphold the ideal of truth. I don’t think it’s good to try to act against truthseeking norms like you do here.
If the people in charge follow the epistemology that you propose, I think there’s a good chance that humanity won’t survive this century, because someone will think that taking a 1% chance of destroying humanity isn’t a problem.
You did not understand the proposal. Let’s analyze a situation like that. Suppose there is a physics experiment which has a 99% chance to be safe, and a 1% chance to destroy humanity. The people in charge ask, “Should we accept it as a fact that the experiment will be safe?”
According to our previous stipulations, the utility of believing that it is safe will be 0.99, minus 0.01 for the disutility of believing that it is safe when it is not, so a total of 0.98, considering only the elements of truth and falsehood.
But presumably people care about not destroying humanity as well. Let’s say the utility of destroying humanity is negative 1,000,000. Then the total utility of treating it as a fact that the experiment will be safe will be 0.98 - (0.01 * 1,000,000), or in other words, −9,999.02. Very bad. So they will choose not to believe it. Nor does that mean that they will believe the opposite: this would have a utility of −0.98, which is still negative. So they will choose to believe neither, and simply say, “It would probably be safe, but there would be too much risk of destroying humanity, so we will not do it.” This is presumably the result that you want, and it is also the result of following my proposal.
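For concreteness, here is a minimal Python sketch of that calculation. It assumes the earlier stipulation amounts to +1 for holding a true belief and −1 for holding a false one, which is what reproduces the 0.98 figure, together with the −1,000,000 for destroying humanity:

```python
# Expected utility of each option for the experiment above, assuming a true
# belief is worth +1, a false belief -1, and destroying humanity -1,000,000.

p_safe = 0.99
p_unsafe = 1 - p_safe
u_destroy = -1_000_000

# Treat "the experiment is safe" as a fact.
believe_safe_truth_only = p_safe * 1 + p_unsafe * (-1)                # ~ 0.98
believe_safe_total = believe_safe_truth_only + p_unsafe * u_destroy   # ~ -9,999.02

# Treat "the experiment is unsafe" as a fact (truth and falsehood only).
believe_unsafe = p_unsafe * 1 + p_safe * (-1)                         # ~ -0.98

print(believe_safe_truth_only, believe_safe_total, believe_unsafe)
```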
Any single job application I send out has a more than 80% chance of being rejected. If I followed your practical advice I wouldn’t send out any applications. That’s pretty stupid “practical advice”.
This should be analyzed in the same way. You do not choose to say, “This will definitely not be accepted,” because you will not maximize your utility that way. Instead, you say, “This will probably not be accepted, but it might be.”
In other words, you seem to think that I was proposing a threshold where, if something has a certain probability, you suddenly decide to accept it as a fact. There is no such threshold, and I did not propose one. Depending on the case, you will choose to treat something as a fact when it will maximize your utility to do so. Thus, for example, when you look out the window and see rain, you say, “It is raining,” rather than “It is probably raining,” because the cost of adding the qualification in every instance is greater than the benefit it provides, given the small risk of being wrong about the rain and the harmlessness of being wrong in most such cases.
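To make that concrete, here is a toy sketch of the same comparison in general form. The numbers are purely illustrative assumptions; the point is only that the answer depends on the payoffs of the particular case, not on any fixed probability threshold:

```python
# A toy model of the point above: whether a flat assertion ("It is raining")
# or a hedged one ("It is probably raining") maximizes utility depends on the
# payoffs of the particular case. All numbers here are illustrative only.

def flat_vs_hedged(p_right, hedging_cost, cost_if_wrong):
    """Expected utility of asserting something flatly vs. adding a qualification."""
    flat = p_right * 1.0 - (1.0 - p_right) * cost_if_wrong
    hedged = 1.0 - hedging_cost  # roughly right either way, but weaker and wordier
    return "treat it as a fact" if flat > hedged else "keep the qualification"

# Rain seen out the window: being wrong is nearly harmless, and qualifying
# every such statement has a real cost.
print(flat_vs_hedged(p_right=0.95, hedging_cost=0.2, cost_if_wrong=0.1))        # treat it as a fact

# The risky experiment above: being wrong is catastrophic.
print(flat_vs_hedged(p_right=0.99, hedging_cost=0.2, cost_if_wrong=1_000_000))  # keep the qualification
```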
While that’s true, there’s a lot of value in having norms of discussion on LW that uphold the ideal of truth. I don’t think it’s good to try to act against truthseeking norms like you do here.
I am in favor of truth and truthseeking norms, and I most definitely did not “try to act against truthseeking norms” as you suggest that I did. I am against the falsehood of asserting that you care only about truth. No truthseeker would claim such a thing; but a status seeker might claim it.
I am in favor of truth and truthseeking norms, and I most definitely did not “try to act against truthseeking norms” as you suggest that I did.
If you value those norms, there’s no reason to say “But even if there hadn’t been, anyone saying that those people were not human would deservedly get called a racist” and defend that notion.
I am against the falsehood of asserting that you care only about truth.
That feels to me like a strawman. Who made such an assertion?
If you value those norms, there’s no reason to say “But even if there hadn’t been, anyone saying that those people were not human would deservedly get called a racist” and defend that notion.
If someone says that Jews are not human, he would deservedly be called a racist. That has nothing to do with attacking truthseeking norms, because the claim about Jews is utterly false. The same thing applies to the situation discussed.
That feels to me like a strawman. Who made such an assertion?
I did not know you were talking about the discussion of racism. I thought you were talking about the fact that I said that other terms in your utility function besides truth should affect what you do (including what you treat as a fact, since that is something that you do). That seems to me a reasonable interpretation of what you said, given that your main criticism seemed to be about this.
If someone says that Jews are not human, he would deservedly be called a racist.
Communication always focuses on a subset of the available facts. You can choose to influence what other people believe by appealing to rational argument.
Here you chose to influence them by appealing to the social desirability of holding certain beliefs.
Making that choice damages truthseeking norms.
Whether someone “deserves” something is also a moral judgement, not just a statement of objective fact.
I think it might be more obvious to someone that saying that Jews are not human deserves moral opprobrium than that Jews are human. If you are not a moral realist, you might think this is impossible, but I am a moral realist, and I don’t see any reason why the moral statement might not be more obvious. In particular, I think it would likely be true for many people in the case discussed. In that case, there is no reason not to bring it up in a discussion of this kind, since it is normal to lead people from what is more obvious to what is less obvious. And there is nothing against truthseeking norms in doing that.
I suspect that you will disagree, but your disagreement would be like a conservative economist saying “minimum wages are harmful, so if you propose minimum wages you are hurting people.” The person proposing minimum wages might in fact be hurting people, but that is definitely not what they are trying to do. And as I said originally, I was not attacking truthseeking or truthseeking norms in any way. (And I am not saying that I am in fact wrong in this way either; I am just saying that you should not be attacking my motives in that way.)
My charge isn’t about motives but about effects.
Fine. I disagree with your assessment.