“Eliezer, I’d advise no sudden moves; think very carefully before doing anything.”
But about 100 people die every minute!
100 people a minute is practically nothing compared to the gazillions of future people whose lives are at stake. I agree with Robin Hanson: think carefully, for a very long time. Sacrifice the 100 people per minute for some years if you need to. But you wouldn’t need to. With unlimited power, it should be possible to freeze the world (except yourself, your computer, the power supply and food you need, et cetera) at absolute zero for an indefinite time, which would give you enough time to think about what to do with the world.
Or rather: with unlimited power, you would know immediately what to do, if unlimited power implies unlimited intelligence and unlimited knowledge by definition. If it doesn’t, I find the concept of “unlimited power” poorly defined: how can you have unlimited power without unlimited intelligence and unlimited knowledge?
So, just as Robin Hanson says, we shouldn’t spend time on this problem. We will solve it in the best possible way with our unlimited power as soon as we have that unlimited power. We can be sure the solution will be wonderful and perfect.
The entire point of this was an analogy for creating Friendly AI. The AI would have absurd amounts of power, but we have to decide what we want it to do using our limited human intelligence.
I suppose you could just ask the AI for more intelligence first, but even that isn’t a trivial problem. Would it be okay to alter your mind in a way that changes your personality or your values? Is it possible to increase your intelligence without doing that? And there are tons of other issues in trying to specify even that one goal precisely.