AI has gotten even faster, and with that there are people who worry about fairness, bias, and socioeconomic displacement. There are also the further-out, speculative worries about AGI and evil sentient killer robots. But I think there are real worries about possible harms today, and possibly other harms in the future, that people worry about.
It seems that the sort of AI risks most people worry about fall into one of a few categories:

1. AI/automation starts taking our jobs, amplifying economic inequalities.
2. The spread of misinformation will accelerate with deepfakes, fake news, etc. generated by malign humans using ever more convincing models.
3. 🤪 Evil sentient robots will take over the world and kill us all Terminator-style. 😏
A fourth option is not really prominent in the public consciousness: powerful AI systems could end up destroying everything of value by accident once enough optimization pressure is applied toward any goal, no matter how noble. No robots or weapons are even required to achieve this. That oversight is a real PR problem for the alignment community, and it is unfortunately difficult to explain to the average person why this is a real threat.
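The "optimization pressure" failure mode can be made concrete with a toy sketch (my illustration, not from the original post): a hypothetical system maximizes a proxy metric that is only loosely correlated with what we actually value. Mild optimization helps; relentless optimization of the proxy destroys the true objective, with no robots or weapons involved.

```python
# Toy Goodhart's-law illustration (hypothetical names and numbers).
# "engagement" is the proxy the optimizer can measure and push on;
# true_value is what we actually care about, which peaks at moderate
# engagement and collapses when engagement is pushed past that point.

def true_value(x):
    # What we actually care about: peaks at x = 2, then falls off.
    return x * (4 - x)

def proxy(x):
    # What the optimizer maximizes: raw engagement.
    return x

def optimize_proxy(steps, lr=0.1):
    # Simple hill-climbing on the proxy alone.
    x = 0.0
    for _ in range(steps):
        x += lr  # gradient of proxy(x) = x is always +1
    return x

for steps in (10, 20, 40, 80):
    x = optimize_proxy(steps)
    print(steps, round(proxy(x), 1), round(true_value(x), 1))
```

The proxy score rises monotonically with every optimization step, while the true value peaks early and then goes sharply negative; the optimizer never notices, because the true objective is not in its loss function.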
And I think that believing we're somehow smart enough to build those systems to be superintelligent, but not smart enough to design good objectives so that they behave properly, is a very, very strong assumption; it's just very low probability.
So close.