Why not treat this in a Bayesian way? There is a 50 percent a priori credence in a short timeline, and 50 percent in a long one. In that case, we still need to get working AI safety solutions ASAP, even if there is a 50 percent chance that the money spent on AI safety will simply be lost. (Disclaimer: I am not paid for any AI safety research, except a Good AI prize of 1500 USD, which is not related to timelines.)
Well, it is not a “Bayesian way” to take a random controversial statement and say “the priors are 50% it’s true, and 50% it’s false”.
(That would be true only if you had zero knowledge about… anything related to the statement. Or if your knowledge were so precisely balanced that the sum of the evidence was exactly zero.)
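The parenthetical point can be made concrete with Bayes' rule in odds form (a sketch of my own, not from the thread): a 50/50 prior is only the end of the story if the combined likelihood ratio of all your evidence is exactly 1; any relevant evidence moves the posterior off 50/50.

```python
# Illustration only: posterior odds = prior odds * likelihood ratio.
# A 50/50 prior survives updating only when the evidence's likelihood
# ratio is exactly 1, i.e. the evidence is perfectly uninformative.

def posterior(prior, likelihood_ratio):
    """Posterior probability of a hypothesis (e.g. 'short timelines'),
    given its prior probability and the likelihood ratio of the evidence."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

print(posterior(0.5, 1.0))  # uninformative evidence: stays at 0.5
print(posterior(0.5, 4.0))  # evidence favoring the hypothesis 4:1 -> 0.8
```

So "the priors are 50/50" is a substantive claim that all available evidence nets out to a likelihood ratio of 1, not a neutral default.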
But the factual wrongness is only a partial answer. The other part is more difficult to articulate, but it’s something like this: if someone uses “your keywords” to argue complete nonsense, that kind of implies you are expected to be so stupid that you would accept the nonsense as long as it is accompanied by the proper keywords… which is quite offensive.
Why not treat this in a Bayesian way? There is a 50 percent a priori credence in a short timeline, and 50 percent in a long one. In that case, we still need to get working AI safety solutions ASAP, even if there is a 50 percent chance that the money spent on AI safety will simply be lost. (Disclaimer: I am not paid for any AI safety research, except a Good AI prize of 1500 USD, which is not related to timelines.)
Why is the above comment so heavily downvoted?
Doesn’t engage with the post’s arguments.
I think that it’s wrong to assume that the prior on ‘short’ vs ‘long’ timelines should be 50/50.
I think that it’s wrong to just rely on a prior, when it seems like one could obtain relevant evidence.