Assuming this were the case, wouldn’t it actually imply slightly more optimistic long-term odds for humanity? A world where AI development actually resembles something like natural evolution and (maybe) throws up red flags that generate interest in solving alignment would be good, no?
Yes, I do expect that if we don’t get wiped out, we may get somewhat bigger “warning shots” that humanity is more likely to pay attention to. I don’t know how much that actually moves the needle, though. I also worry that the strategies we might scrounge up in response to those warning shots will be of a sort that is very unlikely to generalise once the superintelligence-level risks do eventually rear their heads.

Ok, sure, but extra resources and attention are still better than none.
This isn’t obvious to me; it might make things harder. Consider how Elon Musk read Superintelligence and started developing concerns about AI risk, but the result was that he founded OpenAI and gave it a billion dollars to play with. You could make an argument that doing so accelerated timelines and reduced our chances of avoiding negative outcomes.