It’s true that machines with simple value systems will be easier to build. However, machines will only sell to the extent that they do useful work, respect their owners, and obey the law. So there will be a big effort to build machines that respect human values, starting long before machines get very smart. You can see this today in the form of car air bags, blender safety features, privacy controls, and so on.
I don’t think it is likely that civilisation will “drop that baton” and suffer a monumental engineering disaster as the result of an accidental runaway superintelligence, though, sure, such a possibility is worth bearing in mind. Most others I am aware of also give such an outcome a relatively low probability, including (AFAICT) Yudkowsky himself. The case for worrying about it is not that it is especially likely, but that it is not impossible, and that the potential loss could be very large.