Designing an “ethical computer program” or a “friendly AI” is not about which possible world the program inhabits; it’s about the program’s internal causality and the choices it makes.
The key difference is that in the case of a Seed AI, you need to find a way to make a goal system stable under recursive self-improvement. In the case of a toaster, you do not.
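To make that contrast concrete, here is a minimal toy sketch of the goal-stability requirement (my own illustration, not anything SIAI has actually proposed). The idea: a self-modifying agent should only adopt a rewrite of itself when it can verify that the rewrite pursues the same goal. The names `Agent`, `safe_self_improve`, and `preserves_goal` are all hypothetical, and the `preserves_goal` check stands in for precisely the hard, unsolved part of the problem.

```python
# Toy sketch of "goal stability under self-improvement" (illustrative only).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    goal: Callable[[object], float]     # utility function over outcomes
    policy: Callable[[object], object]  # how the agent acts on an observation

def safe_self_improve(agent: Agent, candidate: Agent,
                      preserves_goal: Callable[[Agent, Agent], bool]) -> Agent:
    """Adopt the candidate rewrite only if goal preservation is verified.

    `preserves_goal` is a placeholder for the genuinely hard part:
    proving the successor optimizes the same goal, rather than merely
    appearing to. A toaster never faces this problem; a Seed AI does
    at every self-modification step.
    """
    if preserves_goal(agent, candidate):
        return candidate  # rewrite accepted: goal carried forward
    return agent          # otherwise refuse the self-modification
```

The sketch makes the asymmetry visible: the whole difficulty lives inside `preserves_goal`, which must keep working even when the candidate is smarter than the code that checks it.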
It’s useful to keep Friendly AI concerns in mind when designing ethical robots, since they can become a risk as they grow more autonomous. But when you’re giving a robot a gun, the relevant ethical concerns are immediate ones, like whether it will shoot civilians. The scope of the two problems is importantly different.
Really, there is a whole field of Machine Ethics out there, and it’s pretty well established that it addresses a different sort of problem than the one SIAI is working on. While some folks still conflate “Friendly AI” and “Machine Ethics”, I think it’s much better to maintain the distinction and consider FAI a subfield of Machine Ethics.