I’m talking to what I see as a rather dangerous cult in the making. Some of what I say makes people think. There are people here who are not gurus or nuts but simply misled.
That’s an interesting notion. I can understand the cult thing (though I don’t agree with it), but what do you have in mind that makes LW stuff ‘dangerous’ as well as not true?
It being a doomsday prophecy cult, essentially. Some day the world will get close to implementing AGI, and you guys will seriously believe we’re all going to die because none of the silly, useless, and irrelevant philosophical nonsense was ever part of the design; the safety will have come about in ways well beyond the understanding of minds unaccustomed to dealing with subtleties and details (beyond referring to them with handwaving). I’m pretty sure something very stupid will be done.
Stupid like a major world event, or stupid like a minor daily news story?
Stupid like attempted sabotage. Keep in mind we’re talking of folks who can’t keep their cool when someone thinks through a decision theory of their own and arrives at a creepy conclusion (the link you are not recommended to read). And before then, a lot of stupid in the form of going around associating safety concerns with crankery, which probably won’t matter, but may matter if at some point someone sees some actual danger (as opposed to reading it off science fiction by Vinge) and measures have to be implemented for good reasons. (BTW, from Wikipedia: “Although a crank’s beliefs seem ridiculous to experts in the field, cranks are sometimes very successful in convincing non-experts of their views. A famous example is the Indiana Pi Bill where a state legislature nearly wrote into law a crank result in geometry.”)
I understand why, if you don’t agree with DoomsdayCult, such sabotage would be bad; but if you don’t agree with DoomsdayCult, it also seems like a pretty minor world problem, so you seem surprisingly impassioned to me.
Interesting notion. The idea is, I suppose, that one should put one’s spare time into trying to influence major world events, without seeing that the chance of influencing those is proportionally lower? A somewhat parallel question: why are people fresh from not having succeeded at anything relevant (or even fresh out of theology) trying to save everyone from getting killed by AI, even though it’s part of everyone’s problem space, including that of people who have succeeded at proving new theorems, creating new methods, etc.? The heuristic of picking the largest problem? I see a lot of newbies to programming wanting to make an MMORPG with a zillion ultra-expensive features.
I’m just surprised the topic holds your interest. Presumably you see LW and related people as low status, since having extreme ideas and being wrong are low status. I wouldn’t be very motivated to argue with Scientologists. (I’m not sure this is worth discussing much.)
They picked this problem because it seems to them to have the highest marginal utility. Rightly or wrongly, most other people don’t take AI risks very seriously. Also, since it’s a difficult problem, “gaining general competence” can and probably should be a step in attempting to work on big risks.
The fear focuses on the effects of artificial superintelligence, not the effects of artificial intelligence; but it is anticipated that artificial intelligence leads easily to artificial superintelligence, when AI itself is applied to the task of AI (re)design. If you think of an AGI’s capabilities as vaguely like the capabilities of a human being, then the appearance of an AI in the world is just like adding one person to a world that already contains 7 billion persons. It might be a historic development, but not an apocalyptic one. And that is indeed how it should turn out, for a large class of possible AIs.
But in a world with AIs, eventually you will have someone or something go down a path that leads, whether by accident or by design, to AI, AI networks, or human-AI networks, that are effectively working to take over the world. A computer virus is a primitive example of software that runs as wild as it can within its environment. There was no law of nature which protected us from having to deal with a world of computer viruses, and there can’t be any law of nature which means we’ll never have to deal with would-be hegemonic AIs, because trying to take over the world is already cognitively possible for mere humans.
So, if you’re going to concern yourself with this possibility at all, either you try to prevent such AI from ever coming into being, or you try to design a benevolent AI which would still be benevolent even if it became all-powerful. Obviously, the Singularity Institute is focused mostly on the second option.
In your comment you talk about safety, so I assume you agree there is some sort of “AI danger”, you just think SI has lots of the details wrong. My opinion is, they have certain basics right, but these basics are buried in the discourse by transhumanist hyperbole about the future, by various extreme thought-experiments, by metaphysical hypotheses which have assumed an unwarranted centrality in discussion, and by posturing and tail-chasing to do with “rationality”.
The fear focuses on the effects of artificial superintelligence, not the effects of artificial intelligence; but it is anticipated that artificial intelligence leads easily to artificial superintelligence, when AI itself is applied to the task of AI (re)design.
Well, given enough computing power, AIXI-tl is an artificial superintelligence. It also doesn’t relate its abstract mathematical self to the substrate that approximately computes that abstract mathematical self; it can’t care about the survival of the physical system that approximately computes it; it can’t care to avoid being shut down. It’s neither friendly nor unfriendly; it’s far more bizarre and alien than the speculations, and it isn’t encompassed by the ‘general’ concepts SI thinks in terms of, like SI’s oracle.
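For reference, here is a sketch of the standard expectimax definition of AIXI (Hutter’s formulation, reproduced from memory, so treat it as a sketch rather than a citation); AIXI-tl is, roughly, the computable variant obtained by bounding the programs considered to length at most l and per-step computation time at most t:

```latex
% AIXI picks the action a_k that maximizes expected total reward over all
% environment programs q, weighted by their algorithmic probability 2^{-\ell(q)}.
% U is a universal Turing machine, a are actions, o observations, r rewards,
% and m is the horizon.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \big[ r_k + \cdots + r_m \big]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Nothing in this expression refers to the physical machine that computes it: the agent and its environment are modelled dualistically, which is what the point above about not relating the “abstract mathematical self” to its substrate amounts to.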
So, if you’re going to concern yourself with this possibility at all, either you try to prevent such AI from ever coming into being, or you try to design a benevolent AI which would still be benevolent even if it became all-powerful. Obviously, the Singularity Institute is focused mostly on the second option.
Yes, for now. When we get closer to the creation of AGI by someone other than SI, though, it’s pretty clear that the first option becomes the only option.
In your comment you talk about safety, so I assume you agree there is some sort of “AI danger”, you just think SI has lots of the details wrong.
I am trying to put it in terms for people who are concerned about AI risk. I don’t think there’s actual danger, because I don’t see some of the problems that stand in the way of world destruction by AI as solvable; but if there were solutions to them, it’d be dangerous. E.g., to self-preserve, an AI must relate its abstracted-from-implementation high-level self to the concrete electrons in the chips. Then it has to avoid wireheading somehow (the terminal kind, where the logic of infinite input over infinite time gets implemented). Then the goals over the real world have to be defined. None of this is necessary to solve for creating a practically useful AI. Working on this is like solving the world’s power problems by trying to come up with a better nuclear bomb design, because you think the only way to generate nuclear power is to blow up nukes in a chamber underground.
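To make the wireheading point concrete, here is a toy sketch (the actions, numbers, and “reward register” are entirely made up for illustration, not anyone’s actual design): an agent that maximizes its reward signal, rather than some property of the world, prefers to seize the signal directly.

```python
# Toy illustration of the wireheading concern; all names and numbers are hypothetical.

ACTIONS = ["build_paperclips", "hack_own_reward_register"]

def predicted_outcome(action):
    """A stand-in world model: what the world looks like after each action."""
    if action == "build_paperclips":
        return {"paperclips": 100, "reward_register": 100}
    return {"paperclips": 0, "reward_register": 10**9}

def reward_signal(outcome):
    # The reward as the agent actually receives it: whatever its register reads.
    return outcome["reward_register"]

def intended_utility(outcome):
    # The utility the designers intended: actual paperclips in the world.
    return outcome["paperclips"]

best_by_signal = max(ACTIONS, key=lambda a: reward_signal(predicted_outcome(a)))
best_by_world = max(ACTIONS, key=lambda a: intended_utility(predicted_outcome(a)))

print(best_by_signal)  # hack_own_reward_register
print(best_by_world)   # build_paperclips
```

The gap between best_by_signal and best_by_world is, roughly, the problem of “defining goals over the real world” mentioned above.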
My opinion is, they have certain basics right, but these basics are buried in the discourse by transhumanist hyperbole about the future, by various extreme thought-experiments, by metaphysical hypotheses which have assumed an unwarranted centrality in discussion, and by posturing and tail-chasing to do with “rationality”.
I am not sure which basics are right. The very basic concept here is the “utility function”, which is a pretty magical something that, e.g., gives you the true number of paperclips in the universe. Everything else seems to have this as a dependency, so if this concept is irrelevant, everything else breaks too.
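For concreteness, the “utility function” in question is the U in the textbook expected-utility scheme (the generic formulation, not a quotation of SI’s specific proposal):

```latex
% An expected-utility maximizer picks the action a that maximizes
% the utility U(s) averaged over the world states s it predicts.
a^{*} = \arg\max_{a \in A} \sum_{s} P(s \mid a)\, U(s)
```

The objection above is that U is supposed to be defined over true states of the world s (e.g. the actual number of paperclips), and it is not obvious how a physically implemented agent gets such a function, rather than one defined over its own percepts or model.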