Sure. At the beginning, as it develops, sure (99%).
At the beginning everything will be nice, I guess. It will only consume computing power and slowly develop. Little baby steps.
I can watch memory cells switch from 0 to 1 and maybe back. I can maybe play some little games as a test, to see how it develops.
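(To make that first part concrete: here is a minimal sketch, in plain Python, of what "watching memory cells" could look like at its most naive. Everything in it is a hypothetical illustration, not a real monitoring API; it just diffs two snapshots of a buffer and counts the bits that flipped.)

```python
# Minimal sketch: count bit flips between two snapshots of a memory buffer.
# All names here are hypothetical illustrations, not a real monitoring API.

def count_bit_flips(before: bytes, after: bytes) -> int:
    """Count how many bits differ between two equally sized snapshots."""
    assert len(before) == len(after), "snapshots must be the same size"
    return sum(bin(a ^ b).count("1") for a, b in zip(before, after))

# Toy "memory": one cell switches from 0 to 1 between the two snapshots.
snapshot_1 = bytes([0b00000000, 0b11110000])
snapshot_2 = bytes([0b00000001, 0b11110000])

print(count_bit_flips(snapshot_1, snapshot_2))  # -> 1
```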
But the question is when to stop. Will I be able to recognize the moment when, and how, it starts to go bad? Before it’s too late? When will that be? When will it be too late?
The speed of development depends on how many cores and computers work on it. But to see whether it develops in the “right” way (whatever the “right” way is), I need to let it develop.
But what if it develops curiosity? …and it will. What if it needs more computing power? What if I say no and try to prevent it? Should I?
I have already been thinking about all these questions, and there was always a residual risk. The probability is small, but not zero. The potential harm is immense.
And as it develops, more and more people will be involved.
More people means more opinions, beliefs, needs, fears and desires. Corporations will show interest, and governments will express their concerns and their desire to participate. Contracts will be made and laws will be passed. Interests will be served. At what point should I pull the plug? For how long will I even be able to? Won’t it be too late by then?
Shouldn’t we talk about it now?
I’ve read quite a bit about this area of research. I haven’t found a clear solution anywhere. There is only one point that everyone agrees on: as intelligence increases, the possibility of control declines in the same way that the capabilities, and the risk, rise.