Really enjoyed reading this, it’s a refreshing approach to tackle the issue, giving practical examples of what risk scenarios would look like.
I initially saved this post to read thinking it would provide counterarguments to AI being an x-risk, which to some degree it did.
Pointing out that some of the mistakes that could lead to AI being an x-risk are “rather embarrassing” is really compelling. I wonder how likely (as a percentage of confidence) you think those mistakes are to be made. Even though they might be really embarrassing, they are more or less likely depending on the setting and on who can make them, as you mention in the post.