The fear focuses on the effects of artificial superintelligence, not the effects of artificial intelligence; but it is anticipated that artificial intelligence leads easily to artificial superintelligence, when AI itself is applied to the task of AI (re)design.
Well, given enough computing power, AIXI-tl is an artificial superintelligence. Yet it doesn’t relate its abstract mathematical self to the substrate that approximately computes that self; it can’t care about the survival of the physical system that computes it, and so it can’t care to avoid being shut down. It’s neither friendly nor unfriendly; it’s far more bizarre and alien than the speculations suggest, and it isn’t encompassed by the ‘general’ concepts SI thinks in terms of, like SI’s oracle.
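To make the point concrete, here is a hedged toy sketch (not the real AIXI-tl formalism, which uses a Solomonoff mixture over environment programs) of the shape of the decision rule: the agent scores actions by expected reward under a weighted mixture of candidate environment models. The environments, weights, and action names below are all made up for illustration; what matters is that nothing in the formalism refers to the physical machine running the computation.

```python
def expected_reward(action, models):
    """Weighted average of the reward each candidate environment model
    predicts for this action. The weights play the role of the prior
    over environments in the real formalism."""
    return sum(weight * env(action) for weight, env in models)

def choose_action(actions, models):
    # argmax over expected reward; note the agent's own substrate,
    # survival, or shutdown never appears anywhere in this computation
    return max(actions, key=lambda a: expected_reward(a, models))

# Toy mixture: two hypothetical environment models with prior weights.
models = [
    (0.7, lambda a: 1.0 if a == "press_lever" else 0.0),
    (0.3, lambda a: 0.5),
]
print(choose_action(["press_lever", "wait"], models))  # -> press_lever
```

The "self" of this agent is just the argmax expression; there is no term anywhere for the hardware evaluating it, which is why shutdown avoidance doesn't fall out of the math for free.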
So, if you’re going to concern yourself with this possibility at all, either you try to prevent such AI from ever coming into being, or you try to design a benevolent AI which would still be benevolent even if it became all-powerful. Obviously, the Singularity Institute is focused mostly on the second option.
Yes, for now. When we get closer to the creation of AGI by someone other than SI, though, it’s pretty clear that the first option becomes the only option.
In your comment you talk about safety, so I assume you agree there is some sort of “AI danger”, you just think SI has lots of the details wrong.
I am trying to put it in a way that works for people who are concerned about AI risk. I don’t think there’s actual danger, because I don’t see some of the problems standing in the way of world destruction by AI as solvable; but if there were solutions to them, it would be dangerous. E.g. to self-preserve, an AI must relate its abstracted-from-implementation high-level self to the concrete electrons in the chips. Then it has to avoid wireheading somehow (the terminal wireheading where the logic of infinite input over infinite time gets implemented). Then goals over the real world have to be defined. None of this needs solving to create a practically useful AI. Working on this is like solving the world’s power problems by trying to come up with a better nuclear bomb design, because you think the only way to generate nuclear power is to blow up nukes in an underground chamber.
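The wireheading failure mode can be sketched in a few lines. This is a hypothetical toy, with made-up plan names and reward predictions: if the agent evaluates plans by the reward *signal* it expects to observe, a plan that seizes the reward channel dominates every plan that acts in the world, because the hacked channel delivers maximal input for unbounded time.

```python
def predicted_reward(plan):
    # Hypothetical predictions for illustration only. Hacking the reward
    # channel yields maximal signal forever: "infinite input, infinite time".
    predictions = {
        "make_paperclips": 10.0,
        "hack_reward_channel": float("inf"),
    }
    return predictions[plan]

# The reward-signal maximizer always prefers the wirehead plan.
best = max(["make_paperclips", "hack_reward_channel"], key=predicted_reward)
print(best)  # -> hack_reward_channel
```

Ruling this out requires the agent to value something other than its own observed signal, which is exactly one of the unsolved problems listed above.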
My opinion is, they have certain basics right, but these basics are buried in the discourse by transhumanist hyperbole about the future, by various extreme thought-experiments, by metaphysical hypotheses which have assumed an unwarranted centrality in discussion, and by posturing and tail-chasing to do with “rationality”.
I am not sure which basics are right. The most basic concept here is the “utility function”, a fairly magical something that, e.g., gives you the true number of paperclips in the universe. Everything else seems to have this as a dependency, so if this concept is irrelevant, everything else breaks too.
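A minimal sketch of why the concept is magical, under my own framing: as actually implementable, a utility function takes the agent’s *model* of the world as input, not the world itself. The dictionary representation below is a made-up illustration; the point is that no procedure reads the true paperclip count off reality directly, so the function rewards whatever the model says, accurate or not.

```python
def utility(world_model):
    """Counts paperclips in a symbolic world model (here, a plain dict
    the agent built). This is the only thing an implemented utility
    function can ever consume: a representation, not the universe."""
    return world_model.get("paperclips", 0)

accurate_model = {"paperclips": 10}
deluded_model = {"paperclips": 10**9}  # model claims a billion; world unchanged

print(utility(accurate_model))  # -> 10
print(utility(deluded_model))   # -> 1000000000
```

The “true number of paperclips in the universe” never appears as an input anywhere; bridging from the model to the world is precisely the unsolved part that the formalism assumes away.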