Curated. As previously noted, I’m quite glad to have this list of reasons written up. I like Robby’s comment here, which notes:
The point is not ‘humanity needs to write a convincing-sounding essay for the thesis Safe AI Is Hard, so we can convince people’. The point is ‘humanity needs to actually have a full and detailed understanding of the problem so we can do the engineering work of solving it’.
I look forward to other alignment thinkers writing up either their explicit disagreements with this list, or things that the list misses, or their own frame on the situation if they think something is off about the framing of this list.