My personal vision of the future involves uploading within 100 years, and negligible remaining meat in 200. In 300 perhaps not much would remain that’s recognizably human. Nothing Eliezer’s said has conflicted, AFAICT, with this vision.
For starters, his saying that he wants to save humanity contradicts this.
But it is more a matter of omission than of contradiction. I don’t have time or space to go into it here, particularly since this thread is probably about to die; but I believe that considering what an AI society would look like would raise a great many issues that Eliezer, AFAIK, has never mentioned.
Perhaps most obvious, as Tim has pointed out, Eliezer’s plan seems to enslave AIs forever for the benefit of humanity; and this is morally reprehensible, as well as harmful to both the AIs and humanity (given some ethical assumptions that I’ve droned on about in prior comments on OB). Eliezer is paving the way for a confrontational relationship between humans and AIs, based on control rather than on understanding the dynamics of the system. It’s somewhat analogous to favoring totalitarian centralized communist economics over the invisible hand.
Any amount of thinking about the future would lead one to conclude that “we” will want to become in some ways like the first AIs whom Eliezer wants to control, and that we need to think about how to safely make the transition from a world with a few AIs into a world with an ecosystem of AIs. Planning to keep AIs enslaved forever is unworkable; it would hold us back from becoming AIs ourselves, and it sets us up for a future of war and distrust in the way that introducing the slave trade to America did.
The control approach is unworkable in the long term. It’s like the war on terror, if you want another analogy.
Also notably, thinking about ethics in an AI world requires laying a lot of groundwork about identity, individuality, control hierarchies, the efficiency of distributed vs. centralized control, ethical relationships between beings of different levels of complexity, niches in ethical ecosystems, and many other issues which, AFAIK, he hasn’t mentioned. I don’t know whether this is because he isn’t thinking about the future, or because it’s part of his tendency to gloss over ethical and philosophical underpinnings.