“Eliezer’s plan seems to enslave AIs forever for the benefit of humanity”
Eliezer intends to apply FAI theory only to the first AI. That doesn’t imply that every AI after that point will be constrained the same way, though if the FAI decides to constrain new AIs, it will. The constraints on new AIs would likely be far less severe than those on the sysop: probably nothing serious beyond limits on resources and intelligence (nothing can be allowed to get smarter than the sysop), or else an AI that wants more resources would need to offer stronger guarantees of friendliness. I doubt those constraints would rule out many interesting AIs, but I have no good way to tell one way or the other, and I doubt you do either.
This thread is SL4 revived.