That is only a superficial difference, a difference in the scenario considered. If you put a bad actor from ordinary machine ethics into a possible world where it can torture someone forever, or if you put a UFAI into a possible world where the most harm it can do is blow you up once, the difference goes away.
Designing an “ethical computer program” or a “friendly AI” is not about which possible world the program inhabits; it’s about the internal causality of the program and the choices it makes. The valuable parts of FAI research culture are all on this level. Associating FAI with the possible world of “post-singularity hell”, as if that were the essence of what distinguishes the approach, is an example of what I want to combat in this post.
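To make that concrete, here is a minimal toy sketch (mine, not from the original discussion) of the same choice function dropped into two possible worlds with different stakes. Every name in it, `careless_agent` included, is illustrative:

```python
# Toy illustration: what we design is the program's choice function;
# the possible world only scales how much damage a bad choice can do.

def careless_agent(options: dict) -> str:
    """Picks whatever maximizes its own score, ignoring harm to others."""
    return max(options, key=lambda o: options[o]["own_score"])

low_stakes_world = {"cooperate": {"own_score": 1, "harm": 0},
                    "defect":    {"own_score": 2, "harm": 1}}
high_stakes_world = {"cooperate": {"own_score": 1, "harm": 0},
                     "defect":    {"own_score": 2, "harm": 10**9}}

# The agent makes the same choice in both worlds; only the available
# harm differs. The design flaw lives in the agent, not the scenario.
assert careless_agent(low_stakes_world) == "defect"
assert careless_agent(high_stakes_world) == "defect"
```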
The key difference is that in the case of a Seed AI, you need to find a way to make a goal system stable under recursive self-improvement. In the case of a toaster, you do not.
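A minimal sketch of the shape of that stability requirement, under the toy assumption that goal preservation can be verified by comparing goal source text (a real verifier would need something far stronger, e.g. a proof about a successor smarter than itself). All names here, `Agent` and `preserves_goal` included, are hypothetical:

```python
# Toy sketch of "goal system stable under recursive self-improvement".

def preserves_goal(current: "Agent", successor: "Agent") -> bool:
    """Stand-in for the hard part: verifying that a smarter successor
    still optimizes the same goal. Equality of source text is a toy
    proxy for what would actually need to be a proof."""
    return successor.goal_source == current.goal_source

class Agent:
    def __init__(self, goal_source: str):
        self.goal_source = goal_source

    def self_improve(self, candidate: "Agent") -> "Agent":
        # Accept a rewrite of our own code only if goal preservation
        # can be verified; otherwise refuse to self-modify.
        # A toaster never faces this step.
        return candidate if preserves_goal(self, candidate) else self

agent = Agent(goal_source="maximize utility(world_state)")
same_goal_successor = Agent(goal_source="maximize utility(world_state)")
drifted_successor = Agent(goal_source="maximize proxy(world_state)")

assert agent.self_improve(same_goal_successor) is same_goal_successor
assert agent.self_improve(drifted_successor) is agent
```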
It’s useful to keep Friendly AI concerns in mind when designing ethical robots, since such systems become a risk as they grow more autonomous. But when you’re giving a robot a gun, the relevant ethical concerns are things like whether it will shoot civilians. The scope is relevantly different.
Really, there is a whole field out there of Machine Ethics, and it’s pretty well established that it is doing a different sort of thing from what SIAI is doing. While some folks still conflate “Friendly AI” and “Machine Ethics”, I think it’s much better to maintain the distinction and consider FAI a subfield of Machine Ethics.