I don’t know what the situation is with SIAI’s position on grey goo—I’ve heard people say the SIAI staff believe in nanotechnology having capabilities out of line with the beliefs of the scientific community, but they may have been misinformed.
The point is that grey goo is not relevant to SIAI’s mission (apart from being yet another background existential risk that FAI can dissolve). “Scientific community” doesn’t normally professionally study (far) future technological capabilities.
My whole point about grey goo has been, as stated, that a possible superhuman AI could use it to do really bad things. That is, I do not see how an encapsulated AI, even a superhuman AI, could pose the stated risks without the use of advanced nanotechnology. Is it going to use nukes, like Skynet? Another question related to the SIAI, regarding advanced nanotechnology, is whether superhuman AI is possible at all without advanced nanotechnology.
I’m shocked at how you people misinterpreted my intentions there.
Grey goo is only a potential danger in its own right because it’s a way dumb machinery can grow in destructive power (you don’t need to assume AI controlling it for it to be dangerous, at least so goes the story). AGI is not dumb, so it can use something more fitting to precise control than grey goo (and correspondingly more destructive and feasible).
The grey goo example was named to exemplify the speed and sophistication of nanotechnology that would have to be around either to allow an AI to be built in the first place or for the AI to be of considerable danger.
I consider your comment an expression of personal disgust. There is no way you could possibly misinterpret my original point and subsequent explanation to this extent.
As katydee pointed out, if for some strange reason grey goo is what an AI would want, the AI will invent grey goo. If you used “grey goo” to refer to the rough level of technological development necessary to produce grey goo, then my comments missed that point.
Illusion of transparency. Since the general point about nanotech seems equally wrong to me, I couldn’t distinguish between the error of making it and the error of making a similarly wrong point about the relevance of grey goo in particular. In general, I don’t plot, so take my words literally. If I don’t like something, I just say so, or keep silent.
If it seems equally wrong, why haven’t you pointed me to some further reasoning on the feasibility of AGI without advanced (grey-goo-level) nanotechnology? Why haven’t you argued about the dangers of an AGI that is unable to make use of advanced nanotechnology? I was inquiring about these issues in my original post, not trying to argue against the scenarios in question.
Yes, I’ve seen the comment regarding the possible invention of advanced nanotechnology by an AGI. If the AGI needs something that isn’t there, it will just pull it out of its hat. Well, I have my doubts that even a superhuman AGI could steer the development of advanced nanotechnology in such a way that it gains control of it. Sure, it might solve the problems involved and send the solutions to some researcher. Then it could buy stock in whatever company subsequently develops the new technology and somehow gain control... well, at this point we are already deep into speculative reasoning about something shaky that is, at the same time, used as evidence for the very reasoning that involves it.
To the point: if AGI can’t pose a danger because its hands are tied, that’s wonderful! Then we have more time to work on FAI. FAI is not about superpowerful robots; it’s about technically understanding what we want, and using that understanding to automate the manufacturing of goodness. The power is expected to come from unbounded automatic goal-directed behavior, something that happens without humans in the system to ever stop the process if it goes wrong.
If a superhuman AI is possible without advanced nanotechnology, a superhuman AI could just invent advanced nanotechnology and implement it.