Okay, I can see how XiXiDu’s post might come across that way. I think I can clarify what I think XiXiDu is trying to get at by asking some better questions of my own:
1. What evidence has SIAI presented that the Singularity is near?
2. If the Singularity is near then why has the scientific community missed this fact?
3. What evidence has SIAI presented for the existence of grey goo technology?
4. If grey goo technology is feasible then why has the scientific community missed this fact?
5. Assuming that the Singularity is near, what evidence is there that SIAI has a chance to lower global catastrophic risk in a nontrivial way?
6. What evidence is there that SIAI has room for more funding?
“Near”? Where’d we say that? What’s “near”? XiXiDu thinks we’re Kurzweil?
What kind of evidence would you want aside from a demonstrated Singularity?
Grey goo? Huh? What’s that got to do with us? Read Nanosystems by Eric Drexler or Freitas on “global ecophagy”. XiXiDu thinks we’re Foresight?
If this business about “evidence” isn’t a demand for particular proof, then what are you looking for besides not-further-confirmed straight-line extrapolations from inductive generalizations supported by evidence?
You’ve claimed in your Bloggingheads diavlog with Scott Aaronson that you think it’s pretty obvious that there will be an AGI within the next century. As far as I know, you have not offered a detailed description of the reasoning that led you to this conclusion, one that can be checked by others.
I see this as significant for the reasons given in my comment here.
I don’t know what the situation is with SIAI’s position on grey goo. I’ve heard people say the SIAI staff believe nanotechnology has capabilities out of line with the beliefs of the scientific community, but they may have been misinformed. So let’s forget about questions 3 and 4.
Questions 1, 2, 5 and 6 remain.
You’ve shifted the question from “is SIAI on balance worth donating to” to “should I believe everything Eliezer has ever said”.
The point is that grey goo is not relevant to SIAI’s mission (apart from being yet another background existential risk that FAI can dissolve). The “scientific community” doesn’t normally study (far) future technological capabilities in a professional capacity.
My whole point about grey goo has been, as stated, that a possible superhuman AI could use it to do really bad things. That is, I do not see how an encapsulated AI, even a superhuman one, could pose the stated risks without the use of advanced nanotechnology. Is it going to use nukes, like Skynet? Another question related to SIAI and advanced nanotechnology is whether superhuman AI is at all possible without advanced nanotechnology.
I’m shocked at how you people misinterpreted my intentions there.
If a superhuman AI is possible without advanced nanotechnology, it could just invent advanced nanotechnology and implement it.
Grey goo is only a potential danger in its own right because it’s a way for dumb machinery to grow in destructive power (you don’t need to assume an AI is controlling it for it to be dangerous, or at least so the story goes). AGI is not dumb, so it can use something better suited to precise control than grey goo (and correspondingly more destructive and more feasible).
The grey goo example was given to illustrate the speed and sophistication of nanotechnology that would have to be around either to allow an AI to be built in the first place or to pose a considerable danger.
I consider your comment an expression of personal disgust. There is no way you could possibly misinterpret my original point and subsequent explanation to this extent.
As katydee pointed out, if for some strange reason grey goo is what the AI wants, the AI will invent grey goo. If you used “grey goo” to refer to the rough level of technological development necessary to produce grey goo, then my comments missed that point.
Illusion of transparency. Since the general point about nanotech seems equally wrong to me, I couldn’t distinguish between the error of making it and the error of making a similarly wrong point about the relevance of grey goo in particular. In general, I don’t plot, so take my words literally. If I don’t like something, I just say so, or keep silent.
If it seems equally wrong, why haven’t you pointed me to some further reasoning on the feasibility of AGI without advanced (grey goo level) nanotechnology? Why haven’t you argued about the dangers of an AGI that is unable to make use of advanced nanotechnology? I was inquiring about these issues in my original post, not trying to argue against the scenarios in question.
Yes, I’ve seen the comment regarding the possible invention of advanced nanotechnology by an AGI. If the AGI needs something that isn’t there, it will just pull it out of its hat. Well, I have my doubts that even a superhuman AGI could steer the development of advanced nanotechnology in such a way that it gains control of it. Sure, it might solve the problems associated with it and send the solutions to some researcher. Then it could buy stock in the company that ends up commercializing the new technology and somehow gain control... well, at this point we are already deep into a chain of speculation about something shaky, which is at the same time being used as evidence for the very argument that rests on it.
To the point: if AGI can’t pose a danger because its hands are tied, that’s wonderful! Then we have more time to work on FAI. FAI is not about superpowerful robots; it’s about technically understanding what we want, and using that understanding to automate the manufacturing of goodness. The power is expected to come from unbounded automatic goal-directed behavior, something that happens without humans in the system to ever stop the process if it goes wrong.
Overall I’d feel a lot more comfortable if you just said “there’s a huge amount of uncertainty as to when existential risks will strike and which ones will strike, I don’t know whether or not I’m on the right track in focusing on Friendly AI or whether I’m right about when the Singularity will occur, I’m just doing the best that I can.”
This is largely because of the issue that I raise here.
I should emphasize that I don’t think that you’d ever knowingly do something that raised existential risk; I think that you’re a kind and noble spirit. But I do think I’m raising a serious issue which you’ve missed.
Edit: See also these comments
I am looking for the evidence in “supported by evidence”. I am further trying to figure out how you expect your beliefs to pay rent, what you anticipate seeing if explosive recursive self-improvement is possible, and how that belief could be surprised by data.
If you just say, “I predict we will likely be wiped out by badly done AI,” how do you expect to update on evidence? What would constitute such evidence?
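To make that question concrete, here is one minimal Bayesian sketch of what “updating on evidence” could mean in this context; the hypothesis, the observation, and all of the numbers are made up purely for illustration and are not anyone’s actual estimates.

```latex
% Bayes' rule for a hypothesis H ("badly done AI wipes us out") given an observation E:
\[
  P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}
\]
% Illustrative numbers: prior P(H) = 0.5, and let E be "a decade passes with no visible
% progress toward recursive self-improvement". If H barely constrains E, say
% P(E | H) = 0.5 and P(E | not-H) = 0.6, then
\[
  P(H \mid E) \;=\; \frac{0.5 \times 0.5}{0.5 \times 0.5 + 0.6 \times 0.5} \approx 0.45,
\]
% which is barely different from the prior. A belief whose likelihoods are nearly flat
% across possible observations cannot be "surprised by data"; that is what the
% question is probing.
```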