These are reasonable questions to ask. Here are my thoughts:
Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).
Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).
Virtually certain that these things are possible in our physics. It’s possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we’re sure chimps couldn’t program trans-simian AI. But this possibility seems slimmer when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading) and it’s hard to imagine that recursive improvement would cap out any time soon. At some point we’ll have a descendant who can figure out self-improving AI; it’s just a question of when.
The likelihood of exponential growth versus a slow development over many centuries.
That it is worth it to spend most on a future whose likelihood I cannot judge.
These are more about decision theory than logical uncertainty, IMO. If a self-improving AI isn’t actually possible for a long time, then funding SIAI (and similar projects, when they arise) is a waste of cash. If it is possible soon, then it’s a vital factor in existential risk. You’d have to have strong evidence against the possibility of rapid self-improvement for Friendly AI research to be a bad investment within the existential risk category.
For the other, this falls under the fuzzies and utilons calculation. Insofar as you want to feel confident that you’re helping the world (and yes, any human altruist does want this), pick a charity certain to do good in the present. Insofar as you actually want to maximize your expected impact, you should weight charities by their uncertainty and their impact, multiply it out, and put all your eggs in the best basket (unless you’ve just doubled a charity’s funds and made them less marginally efficient than the next one on your list, but that’s rare).
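To make the utilons half of that concrete, here is a minimal sketch of the expected-impact comparison; the charity names and numbers are hypothetical, chosen only to illustrate the "weight, multiply, pick the best basket" step.

```python
# Hypothetical figures: the probability that a charity's approach works at all,
# and its estimated impact (in arbitrary utility units) if it does.
charities = {
    "certain_present_good": {"p_success": 0.95, "impact": 10},
    "speculative_x_risk":   {"p_success": 0.01, "impact": 10_000},
}

# Expected impact = probability of success * impact if successful.
expected = {name: c["p_success"] * c["impact"] for name, c in charities.items()}

# "Put all your eggs in the best basket": give everything to the top option,
# ignoring (as noted above) diminishing marginal returns once a charity is saturated.
best = max(expected, key=expected.get)
print(expected)            # {'certain_present_good': 9.5, 'speculative_x_risk': 100.0}
print("Donate to:", best)  # speculative_x_risk
```

The point of the sketch is only that the ranking is driven by the product of the two terms, not by the certainty term alone.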
That Eliezer Yudkowsky is the right and only person who should be leading, and the SIAI the only institution that should be working, to mitigate the above risks.
Aside from any considerations in his favor (the development of TDT, for one publicly visible example), this sounds too much like a price for joining: if you really take the risk of Unfriendly AI seriously, what else could you do about it? In fact, the more well-known SIAI gets in the AI community and the more people take it seriously, the more likely it is to (1) instill in other AGI researchers some necessary concern for goal systems and (2) give rise to competing Friendly AI projects that might improve on SIAI in relevant respects. Unless you thought they were doing as much harm as good, it still seems optimal to fund SIAI now if you're concerned about self-improving AI.
Further, do you have an explanation for the fact that Eliezer Yudkowsky is the only semi-popular person who has figured all this out?
My best guess is that the first smart/motivated/charismatic person who comes to these conclusions immediately tries to found something like SIAI rather than doing other things with their life. There’s a very unsurprising selection bias here.
ETA: Reading the comments, I just found that XiXiDu has not actually read the Sequences before claiming that the evidence presented is inadequate. I’ve downvoted this post, and I now feel kind of stupid for having written out this huge reply.
It’s possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we’re sure chimps couldn’t program trans-simian AI.
Where is the evidence that supports the claim that it is not only possible, but that it will also turn out to be MUCH smarter than a human being, not just more rational or faster? Where is the evidence for an intelligence explosion? Is action justified simply based on the mere possibility that it might be physically possible?
...when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading)...
At some point we’ll have a descendant who can figure out self-improving AI; it’s just a question of when.
Yes, once they've turned themselves into superhuman intelligences? Isn't this what Kurzweil believes? No risk from superhuman AI because we'll go the same way anyway?
If a self-improving AI isn’t actually possible for a long time, then funding SIAI (and similar projects, when they arise) is a waste of cash.
Yep.
You’d have to have strong evidence against the possibility of rapid self-improvement for Friendly AI research to be a bad investment within the existential risk category.
Yes, but to allocate all my eggs to them? Remember, they ask for more than simple support.
Insofar as you actually want to maximize your expected impact...
I want to maximize my expected survival. If there are medium-term risks that could kill me with a higher probability than AI will in the future, then they are as important as the AI killing me later.
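To make that survival comparison concrete, here is a minimal sketch with made-up probabilities; it only illustrates that, for expected survival, what matters is each risk's probability of killing you, not how soon it would strike.

```python
# Hypothetical lifetime probabilities of dying from each risk (made up for illustration).
risks = {
    "medium_term_risk": 0.05,  # a nearer-term catastrophe
    "unfriendly_ai":    0.02,  # an AI-related death further in the future
}

# Probability of surviving all risks, assuming independence.
p_survive = 1.0
for p in risks.values():
    p_survive *= 1.0 - p

print(f"P(survive all risks) = {p_survive:.3f}")    # 0.931
print("Dominant risk:", max(risks, key=risks.get))  # medium_term_risk
```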
...development of TDT...
Highly interesting. Sadly it is not a priority.
...if you really take the risk of Unfriendly AI seriously, what else could you do about it?
I could, for example, start my own campaign to make people aware of possible risks. I could talk to people. I bet there’s a lot more you smart people could do besides supporting EY.
...the more well-known SIAI gets in the AI community.
The SIAI, and especially EY, do not have the best reputation within the x-risk community, and I bet it's the same in the AI community.
Unless you thought they were doing as much harm as good...
That might very well be the case given how they handle public relations.
My best guess is that the first smart/motivated/charismatic person who comes to these conclusions immediately tries to found something like SIAI.
He wasn’t the first smart person who came to these conclusions. And he sure isn’t charismatic.
XiXiDu has not actually read the Sequences before claiming that the evidence presented is inadequate.
I've read and heard enough to be in doubt, since I haven't come across a single piece of evidence besides some seemingly sound argumentation (as far as I can tell) in favor of some basic principles of unknown accuracy. And even those arguments are vague enough that you cannot differentiate them from mere philosophical musing.
And if you feel stupid because I haven't read hundreds of articles to find a single piece of third-party evidence in favor of the outstanding premises used to ask for donations, then you should feel stupid.
Since I’ve now posted several comments on this thread defending and/or “siding with” XiXiDu, I feel I should state, for the record, that I think this last comment is a bit over the line, and I don’t want to be associated with the kind of unnecessarily antagonistic tone displayed here.
Although there are a couple of pieces of the SIAI thesis that I'm not yet 100% sold on, I don't reject it in its entirety, as it now sounds like XiXiDu does; I just want to hear a more thorough explanation of a couple of sticking points before I buy in.
I think I should say more about this. That EY has no charisma is, I believe, a reasonable assessment. Someone who says of himself that he's not neurotypical likely isn't a very appealing person in the eyes of the average person. I have also received plenty of evidence in the form of direct comments about EY showing that many people do not like him personally.
Now let's examine whether I am hostile to EY and his movement. First, a comment I made regarding Michael Anissimov's 26th birthday. I wrote:
Happy birthday!
I’m also 26…I’ll need another 26 years to reach your level though :-)
I’ll donate to SIAI again as soon as I can.
And keep up this great blog.
Have fun!!!
Let’s examine my opinion about Eliezer Yudkowsky.
Here I suggest that EY is the most admirable person.
When I recommended reading Good and Real to a professional philosopher I wrote, “Don’t know of a review, a recommendation by Eliezer Yudkowsky as ‘great’ is more than enough for me right now.”
Here is a long discussion with some physicists in which I try to defend MWI by linking them to EY's writings. Note: it is a backup copy, since I deleted my comments there after being angered by their hostile tone.
There is a lot more that I'm too lazy to look up now. You can check it for yourself: I promote EY and the SIAI all the time, everywhere.
And I'm pretty disappointed that, rather than answering my questions or linking me to some supporting background information, I mainly seem to be dealing with a bunch of puffed-up adherents.
...and I don’t want to be associated with the kind of unnecessarily antagonistic tone displayed here.
Have you seen me complaining about the antagonistic tone that EY is exhibiting in his comments? Here are the first two replies from people in academia whom I wrote to about this post, addressing EY:
Wow, that's an incredibly arrogant put-down by Eliezer... SIAI won't win many friends if he puts things like that...
and
...he seems to have lost his mind and written out of strong feelings. I disagree with him on most of these matters.
Have you seen me complaining about the antagonistic tone that EY is exhibiting in his comments?
I have been pointing that out as well—although I would describe his reactions more as “defensive” than “antagonistic”. Regardless, it seemed to be out of character for Eliezer. Do the two of you have some kind of history I’m not aware of?
Virtually certain that these things are possible in our physics.
What do you mean by "plausible" in this instance? Not currently refuted by our theories of intelligence or chemistry? Or something stronger?
Oh yeah, oops, I meant to say “possible in our physics”. Edited accordingly.
Is action justified simply based on the mere possibility that it might be physically possible?
Not even your master believes this.
He wasn't the first smart person who came to these conclusions. And he sure isn't charismatic.
Also, charisma is in the eye of the beholder ;)