For example, I don’t think that “a terrorist infiltrates the team of labellers whose work trains the AGI and poisons the data” is a very likely AI doom scenario. But I think there are probably 100 scenarios as plausible as that one, each of which sounds kind of bad.
There are even more likely scenarios with the same basic mechanism and effect: “a disgruntled employee poisons the data”, “a nation-state operation”, “a criminal group”, “a software bug”, “one intern making an error”, or even “internet trolls doing it for the lulz”. Versions of all of these have actually happened, corrupting data for important software projects in subtle and destructive ways.
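To make the mechanism concrete, here is a toy sketch of why this kind of poisoning is so insidious. It is pure numpy on invented data and bears no resemblance to any real labelling pipeline; the point is just that flipping the labels in one small region of the training set barely moves the headline accuracy while quietly breaking the model’s behaviour in exactly that region.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=1000):
    """Two easy classes: the true label is 1 when x0 + x1 > 0."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y

def knn_predict(X_tr, y_tr, X_q, k=5):
    """Plain k-nearest-neighbours, majority vote over the k closest points."""
    d = np.linalg.norm(X_q[:, None, :] - X_tr[None, :, :], axis=2)
    nearest = np.argpartition(d, k, axis=1)[:, :k]
    return (y_tr[nearest].mean(axis=1) > 0.5).astype(float)

X_tr, y_tr = make_data()
X_te, y_te = make_data()

# Poison only the training labels in one small region (x0 > 1 and
# x1 > 1, roughly 2-3% of the data). Any of the actors above could
# be behind the flip; the mechanism is identical either way.
region_tr = (X_tr[:, 0] > 1) & (X_tr[:, 1] > 1)
y_bad = y_tr.copy()
y_bad[region_tr] = 1.0 - y_bad[region_tr]

region_te = (X_te[:, 0] > 1) & (X_te[:, 1] > 1)
for name, labels in [("clean", y_tr), ("poisoned", y_bad)]:
    pred = knn_predict(X_tr, labels, X_te)
    overall = np.mean(pred == y_te)
    in_region = np.mean(pred[region_te] == y_te[region_te])
    print(f"{name:9s} overall={overall:.3f}  in poisoned region={in_region:.3f}")
```

In runs of this toy, the overall test accuracy typically stays within a couple of points of the clean baseline while the in-region accuracy falls dramatically. That is the “subtle” part: the one dashboard number a reviewer would glance at stays green, and the damage only shows up if someone thinks to evaluate the affected region separately.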