If I don’t have strong evidence either way on a question, I should move my estimates close to 50%...
That would be more than enough to justify devoting a big chunk of the world’s resources to friendly AI research, given the associated utility. But you can’t just make up completely unfounded conjectures, claim that we have no evidence either way, and then argue that because the utility associated with a negative outcome is huge we should take the conjecture seriously. That reasoning will ultimately lead you to privilege arbitrary high-utility outcomes over theories grounded in empirical evidence.