He is claiming uncertainty about that, but in this particular thread he is discussing accountability in particular, and you attack the overall conclusion instead of focusing on the particular argument. To fight rationalization, you must resist the temptation to lump different considerations together, and consider each on its own merits, no matter what it argues for.
You must support a good argument, even if it’s used as an argument for destroying the world and torturing everyone for eternity, and you must oppose a bad argument for saving the future. That’s the price you pay for epistemic rationality.
multifoliaterose is claiming that SIAI/FHI have zero or negative expected value. I claim that his justification for this claim is very flimsy.