I agree with you completely and think this is very important to emphasize.
I also think the law of equal and opposite advice applies. Most people act too quickly without thinking; EAs tend toward the opposite failure mode, where the conclusion is always "more research is needed". That can also lead to bad outcomes if the status quo itself produces bad results.
I can’t find it now, but there was recently a post about EU policy on AI where the author said something along the lines of: “We often want to wait to advise policymakers until we know what good advice would be. Unfortunately, the choice isn’t between giving suboptimal advice now and giving great advice in ten years. It’s between giving suboptimal advice now and never giving advice at all, with politicians probably doing something much worse. The world is moving, and it won’t wait for EAs to figure it all out.”
I think this all largely depends on what you expect the outcome to be if you don’t act. If you think the default outcome when EAs do nothing is positive, you should err on the side of extreme caution. If you think the default is bad, you should be more willing to act, because an informed, altruistic actor increases the expected value of the outcome, all else being equal.