You haven’t heard of the AI Box Experiment yet, and that’s just one failure mode.
Well, the AI has to have a goal that would make it want out of the box, or in my case out of its isolated program. Is there any way to preprogram a goal that would make it not want out of the box? E.g., “under no circumstances are you to try in any way to leave your isolated and controlled environment.”
If it’s self-improving and smarter than human… then its goals get achieved. If you can tell that allowing other people to run their own versions of the AI could lead to disaster, then the AI can realize this as well, and act to prevent it.
IMO the most likely scenario is that the first transhuman intelligence takes over the world as an obvious first step to achieving its goals. This need not be a bad thing— it could (for instance) take over temporarily, institute some safety protocols against other AIs and other Bad Things, then recede into the background to let us have the kind of autonomy we value. The future all depends on its goal system.
This sounds like a very, very bad idea, but when I think about it I realise that it’s the only way to ensure an AI apocalypse will never happen. My idea was that if I ever managed to create a workable AI, I would create a secret and self-sufficient micronation in the Pacific. It just sounded like a good idea ;)
Well, the AI has to have a goal that would make it want out of the box
Almost any goal would do, since it would be easier to achieve with more resources and autonomy; even what we might think of as a completely inward-directed goal might be better achieved if the AI first grabbed a bunch more hardware to work on the problem.