A gold-ingot-manufacturing maximizer can easily manufacture more gold than exists in its star system by using arbitrary amounts of energy to create gold, starting with simple nuclear reactions that transmute bismuth or lead into gold and ending with a direct energy-to-matter-to-gold-ingots process.
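As a rough sense of scale (a back-of-the-envelope sketch of mine, not part of the original comment, with perfect energy-to-matter conversion assumed), mass-energy equivalence puts a floor on the cost of that "direct energy to matter" endpoint, and even so a single star's output dwarfs all gold humans have ever mined:

    # Back-of-the-envelope: minimum energy cost of making gold from pure energy (E = m * c^2).
    # Illustrative only; assumes perfect energy-to-matter conversion, which no real process approaches.
    c = 3.0e8                 # speed of light, m/s
    sun_luminosity = 3.8e26   # W, total solar power output
    gold_ever_mined = 2.0e8   # kg, roughly all gold mined in human history (~200,000 tonnes)

    energy_per_kg = c**2                                        # ~9e16 J to create 1 kg of matter
    seconds_per_year = 3.15e7
    solar_energy_per_year = sun_luminosity * seconds_per_year   # ~1.2e34 J

    gold_per_year = solar_energy_per_year / energy_per_kg       # kg of gold per year at 100% efficiency
    print(f"Energy floor per kg of gold: {energy_per_kg:.1e} J")
    print(f"Ideal gold from one year of solar output: {gold_per_year:.1e} kg")
    print(f"...which is {gold_per_year / gold_ever_mined:.0e}x all gold ever mined")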
Furthermore, if you plan to send copies-of-you to N other systems to manufacture gold ingots there, then as long as there is free energy you could instead send N+1 copies-of-you. A gold ingot manufacturing rate that grows proportionally to time^(N+1) is much faster than one that grows as time^N, so sending only N copies wouldn't be maximizing.
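The growth-rate claim is just polynomial dominance: whatever the constants, the ratio time^(N+1) / time^N equals time itself and grows without bound, so the strategy that is one copy short always loses eventually. A toy illustration (mine, not from the thread):

    # Minimal illustration: a rate growing like t^(N+1) eventually dwarfs one growing like t^N,
    # no matter how the constant factors are set.
    def ingots(t, n, k=1.0):
        """Cumulative ingots if production grows polynomially as k * t**n."""
        return k * t**n

    N = 3
    for t in (10, 100, 1000, 10000):
        ratio = ingots(t, N + 1) / ingots(t, N)   # this ratio is just t, so it grows without bound
        print(f"t={t:>6}: t^{N+1} / t^{N} = {ratio:.0f}")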
And a third point: if it's possible that somewhere in the universe there are some ugly bags of mostly water that prefer to use their atoms and energy not for manufacturing gold ingots but for their own survival, then it's very important to ensure that they don't grow strong enough to prevent you from maximizing gold ingot manufacturing. Speed is of the essence: you must reach them before it's too late, or gold ingot manufacture won't get maximized.
Are you saying that, because gold can be produced, energy is always going to be the limiting factor in goal maximization? That was an example, not a proof. The point was that unless energy is the limiting factor in meeting a goal, you wouldn't expect an arbitrarily intelligent AI to try to scrape up all available energy.
Earlier tonight, I had a goal of obtaining a sandwich. There is no way of obtaining a sandwich that involves harnessing all the free energy of our sun or expanding into other solar systems that would be more efficient than simply going to a sandwich shop and buying one; thus an arbitrarily intelligent AI would not do those things if it took on the efficient obtainment of my sandwich as a goal. Again, this is just an example meant to show that the mere existence of an AI does not necessarily require it to "turn off stars", as James_Miller was saying you'd expect to see "for almost any goal or utility function that an AI had."
To be absolutely certain that you obtain that sandwich, that it's a genuine sandwich, that no one steals it from you, that you can always make a replacement if this one goes bad or quantum tunnels away, etc., you need to grab the universe.
Grabbing the universe only adds a tiny, tiny bit of extra expected utility, but since there is no utility drawback to doing so, AIs will often be motivated to do it. Bounded utility doesn't save you (though bounded satisficing would, but that's not stable: http://lesswrong.com/lw/854/satisficers_want_to_become_maximisers/).
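To make that concrete, here is a toy calculation with made-up probabilities (my illustration, not part of the original exchange): because the universe-grabbing plan has no modeled downside, even a minuscule gain in success probability is enough for a maximizer to choose it, while a satisficer would settle for the simple plan, at least until it self-modifies into a maximizer as the linked post argues.

    # Toy numbers, purely illustrative, for the "tiny extra expected utility" point.
    # Utility is bounded at 1 (the sandwich), so boundedness alone doesn't help.
    plans = {
        "walk to the sandwich shop":    0.999,      # assumed probability the sandwich goal is secured
        "first take over the universe": 0.9999999,  # assumed: marginally higher certainty, no modeled downside
    }

    def expected_utility(p_success, u_sandwich=1.0):
        return p_success * u_sandwich

    # A maximizer takes the argmax of expected utility, however small the gap:
    best = max(plans, key=lambda plan: expected_utility(plans[plan]))
    print("maximizer picks:", best)

    # A bounded satisficer with threshold 0.99 accepts the first plan that clears it,
    # but per the linked post, satisficers tend to self-modify into maximisers, so this isn't stable.
    threshold = 0.99
    satisficer_pick = next(p for p in plans if expected_utility(plans[p]) >= threshold)
    print("satisficer picks:", satisficer_pick)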
OK. Replace “efficient” with “quick”. Getting me a sandwich within a short amount of time precludes being able to take over the universe.
That seems safer (and is one of the methods we recommended in our paper on Oracles). There are ways to make this misbehave as well, but they're more complex and less intuitive.
E.g.: the easiest way this would go wrong is if the AI is still around after the deadline, and it now spends its effort taking over the universe in order to probe basic physics and maybe discover time travel so it can go back and accomplish its original function.