Eliezer, I understand the logic of what you are saying. If AI is an existential threat, then only FriendlyAI can save us. Since any self-improving AI can quickly become unstoppable, FriendlyAI must be developed first and deployed as soon as it is ready. The team that developed it would in fact have a moral imperative to deploy it without risking the delay of consulting anyone else.
I assume you also understand where I’m coming from. Out here in the “normal” world, you sound like a zealot who would destroy the human race in order to save it. Anyone who has implemented a large software project would laugh at the idea of coming up with a provably correct meta-goal, stable under all possible evolutions of an AI, itself implemented provably correctly.
The idea of a goal (or even a meta-goal) that we can all agree on strikes me as absurd. The idea of hitting the start button on something that could destroy the human race, based on nothing more than pages of logic, would be considered ridiculous by practically every member of the human race.
I understand if you think you are right about all of this, and don’t need to listen to or even react to criticism. In that case, why do you blog? Why do you waste your time answering comments? Why aren’t you out working on FriendlyAI for as many hours as you can manage?
And if this is an existential threat, are the Luddites right? Wouldn’t the best tactic for extending the life of the human race be to kill all AI and nanotech researchers?
Tim, there are neural simulation projects underway already. I think there are a large number of nerds who would consider becoming uploads. I don’t see why you think this makes no sense. And when you say “once we have AI”, what do you mean? AI covers a lot of territory. Do you just mean some little problem-solving box, or what?