A fractal, self-organizing digital organism that learns at geometric speed.
What could go wrong when it is computed in the cloud, on several computers around the world?
The software needs more computing capacity, so maybe I’ll rent some in the cloud. If everything works the way I expect, everything is OK. If it doesn’t work, that’s OK too. But what if things go better than expected?
There are enough papers that deal with exactly this risk; I don’t have to cite them here, you can find plenty of information on the net. With a “stop” button or switch you won’t get far...
Google’s AI has found glitches in games. And this AI can only play Atari games.
In addition, if it runs on a computer that can be reached over the network, the algorithm can be stolen and easily repurposed as a virus (strictly speaking a chained worm, but that doesn’t matter now). The algorithm is designed so that you can give it any (formally definable) goal. In the wrong hands, this could cause considerable damage.
Maybe I shouldn’t worry so much and just try it out?
(I know, my English isn’t perfect. I apologize for that.)
Can you run it for a while and then stop it?

Sure. At the beginning, while it is still developing, sure (99%).
At the beginning everything will be nice, I guess. It will only consume computing power and slowly develop. Little baby steps.
I can watch how memory cells flip from 0 to 1 and maybe back. I can play some little games with it as a test, to see how it develops.
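Just to make that concrete, a toy version of this kind of watching could snapshot the state between checks and log which cells flipped. This is only a sketch under assumptions: the flat bit-array representation, the `read_state()` hook, and the one-second interval are made up for illustration, not part of the actual design.

```python
import time
import numpy as np

def read_state():
    """Hypothetical hook: return the organism's state as a flat bit array."""
    # Stand-in so the sketch runs on its own; a real version would query
    # the running system instead of generating random bits.
    return np.random.randint(0, 2, size=1024, dtype=np.uint8)

previous = read_state()
for step in range(10):          # watch for a few intervals
    time.sleep(1.0)             # arbitrary check interval
    current = read_state()
    flipped = np.flatnonzero(previous != current)
    print(f"step {step}: {flipped.size} cells flipped, first few: {flipped[:5]}")
    previous = current
```

Counting how many cells flip per interval at least gives a rough measure of how fast it is changing.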
But the question is when to stop. Will I be able to recognize the moment when, and how, it starts to go bad? Before it’s too late? When will that be? When will it be too late?
The speed of development depends on how many cores and computers are working on it. But to see whether it develops in the “right” way (whatever the “right” way is), I have to let it develop.
But what if it develops curiosity? ...and it will. What if it needs more computing power? What if I say no and try to prevent it? Should I?
I was already thinking about all these questions, and there was always a residual risk. The probability is small, but not zero. The potential damage is immense.
And the further it develops, the more people will be involved.
More people means more opinions, beliefs, needs, fears, and desires. Corporations will show interest, and governments will express their concerns and their desire to participate. Contracts will be made and laws will be passed. Interests will be served. At what point should I pull the plug? For how long will I even be able to? Won’t it be too late by then?
Shouldn’t we talk about it now?

I don’t know when you should stop. All I’m suggesting is that you not turn it on without a time at which it is supposed to (automatically) switch off. In other words, you should stop it regularly, over and over again. This has the benefit of letting you consider the new information you have received and decide how to respond to it. Perhaps your design will be “flawed” and won’t have the risk of going ‘foom’ that you think it will (without further work, by you, to revise and change it before it is capable of ‘improving’). If you decide that it is risky, then the ‘intervention’ isn’t turning it off; it’s just not deciding to turn it back on (which maybe shouldn’t be automatic).
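As a minimal sketch, and assuming the organism can be run as an ordinary subprocess, the “run for a fixed time, then require a deliberate restart” idea could look something like this. The command name and time budget are placeholders, not part of your design.

```python
import subprocess

# Placeholders for illustration only: use your real command and budget.
ORGANISM_CMD = ["python", "organism.py"]   # hypothetical entry point
RUN_BUDGET_SECONDS = 6 * 60 * 60           # e.g. six hours per run

def run_once(budget_seconds):
    """Run the organism for a fixed wall-clock budget, then force it to stop."""
    proc = subprocess.Popen(ORGANISM_CMD)
    try:
        proc.wait(timeout=budget_seconds)  # it may also exit on its own
    except subprocess.TimeoutExpired:
        proc.terminate()                   # ask it to stop...
        try:
            proc.wait(timeout=30)
        except subprocess.TimeoutExpired:
            proc.kill()                    # ...and insist if it does not
            proc.wait()

if __name__ == "__main__":
    while True:
        run_once(RUN_BUDGET_SECONDS)
        # The restart is deliberately NOT automatic: a person has to look at
        # what happened and explicitly decide to continue.
        answer = input("Run finished or was stopped. Start it again? [yes/no] ")
        if answer.strip().lower() != "yes":
            break
```

Of course this only restrains a program that stays inside the process you started; the point is the enforced pause and the human decision to continue, not a safety guarantee.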
What is the setup in which you can’t switch it off? Is it that it might find a way to disable that capability, or are you worried about something else?
So much bad karma....
That’s a good question.