But Statement 1 doesn’t imply caution is always best. A plausibly friendly AI built on several dubious philosophical assumptions (say, a 50% chance of a good outcome) is better than taking our time to get it right (99%) if someone else will make a paperclip maximizer in the meantime. We want to maximize the likelihood of the first AI being good, which may mean releasing a sloppily made, potentially friendly AI under race conditions (assuming the other side can’t be stopped).
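To make the trade-off concrete, here's a minimal expected-value sketch. All the numbers are illustrative assumptions (only the 50% and 99% figures come from the comment above), and the crossover point depends entirely on them:

```python
# Hypothetical expected-value sketch of the race-condition argument.
# The 0.50 and 0.99 figures are from the comment; everything else is assumed.

P_GOOD_RUSHED = 0.50    # plausibly friendly AI built on dubious philosophy
P_GOOD_CAREFUL = 0.99   # AI built after taking our time to get it right
P_GOOD_PAPERCLIP = 0.0  # paperclip maximizer: no chance of a good outcome

def expected_goodness(p_rival_first: float) -> tuple[float, float]:
    """P(good outcome) for 'rush' vs 'wait', given the chance that a
    rival deploys a paperclip maximizer before our careful AI is ready."""
    rush = P_GOOD_RUSHED  # we deploy first, so our AI decides the outcome
    wait = (p_rival_first * P_GOOD_PAPERCLIP
            + (1 - p_rival_first) * P_GOOD_CAREFUL)
    return rush, wait

for p in (0.0, 0.5, 0.9, 1.0):
    rush, wait = expected_goodness(p)
    print(f"P(rival first)={p:.1f}: rush={rush:.2f}, wait={wait:.2f}")
# Under these numbers, rushing beats waiting once P(rival first)
# exceeds roughly 0.5, i.e. 1 - (0.50 / 0.99).
```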
That’s why I say in 2 that this holds all else equal. You’re right that there are competing concerns that may make philosophical conservatism untenable, and I view it as one of the goals of AI policy to make sure it remains tenable, in part by identifying the race conditions that would make us unable to practice it.