It’s possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we’re sure chimps couldn’t program trans-simian AI.
Where is the evidence supporting the claims that it is not only possible, but that it will also turn out to be MUCH smarter than a human being, not just more rational or faster? Where is the evidence for an intelligence explosion? Is action justified simply because it might be physically possible?
...when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading)...
At some point we’ll have a descendant who can figure out self-improving AI; it’s just a question of when.
Yes, once they have turned themselves into superhuman intelligences? Isn’t this what Kurzweil believes? No risk from superhuman AI, because we’ll go the same way anyway?
If a self-improving AI isn’t actually possible for a long time, then funding SIAI (and similar projects, when they arise) is a waste of cash.
Yep.
You’d have to have strong evidence against the possibility of rapid self-improvement for Friendly AI research to be a bad investment within the existential risk category.
Yes, but to allocate all my eggs to them? Remember, they ask for more than simple support.
Insofar as you actually want to maximize your expected impact...
I want to maximize my expected survival. If there are medium-term risks that could kill me with a higher probability than AI will in the future, then they are just as important as an AI killing me later.
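To make that concrete, here is a minimal sketch of the comparison I have in mind, with made-up placeholder numbers (they are not anyone’s actual estimates, and the risk names are purely illustrative):

```python
# Toy expected-survival comparison between a later AI risk and a nearer-term risk.
# All numbers are invented placeholders, for illustration only.

risks = {
    # name: (probability the risk kills me, fraction of that risk my support removes)
    "unfriendly_ai_later": (0.10, 0.001),
    "medium_term_risk":    (0.20, 0.001),
}

def survival_gain(p_death, fraction_reduced):
    """Increase in my survival probability if my support reduces this risk."""
    return p_death * fraction_reduced

for name, (p_death, fraction_reduced) in risks.items():
    print(f"{name}: +{survival_gain(p_death, fraction_reduced):.6f} to survival probability")
```

With these placeholder numbers the medium-term risk is the better buy; the point is simply that what matters is how likely a risk is to kill me and how much my marginal support changes that, not whether it strikes sooner or later.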
...development of TDT...
Highly interesting. Sadly it is not a priority.
...if your really take the risk of Unfriendly AI seriously, what else could you do about it?
I could, for example, start my own campaign to make people aware of possible risks. I could talk to people. I bet there’s a lot more you smart people could do besides supporting EY.
...the more well-known SIAI gets in the AI community.
The SIAI, and especially EY, do not have the best reputation within the x-risk community, and I bet it’s the same in the AI community.
Unless you thought they were doing as much harm as good...
That might very well be the case given how they handle public relations.
My best guess is that the first smart/motivated/charismatic person who comes to these conclusions immediately tries to found something like SIAI.
He wasn’t the first smart person who came to these conclusions. And he sure isn’t charismatic.
XiXiDu did not actually read the Sequences before claiming that the evidence presented is inadequate.
I’ve read and heard enough to be in doubt, since I haven’t come across a single piece of evidence beyond some seemingly sound argumentation (as far as I can tell) in favor of a few basic principles of unknown accuracy. And even those arguments are vague enough that you cannot distinguish them from mere philosophical musing.
And if you feel stupid because I haven’t read hundreds of articles to find a single piece of third-party evidence in favor of the outstanding premises used to ask for donations, then you should feel stupid.
Since I’ve now posted several comments on this thread defending and/or “siding with” XiXiDu, I feel I should state, for the record, that I think this last comment is a bit over the line, and I don’t want to be associated with the kind of unnecessarily antagonistic tone displayed here.
Although there are a couple pieces of the SIAI thesis that I’m not yet 100% sold on, I don’t reject it in its entirety, as it now sounds like XiXiDu does—I just want to hear some more thorough explanation on a couple of sticking points before I buy in.
I think I should say more about this. That EY has no charisma is, I believe, a reasonable assessment. Someone who says of himself that he isn’t neurotypical is unlikely to be a very appealing person in the eyes of the average person. I also have plenty of evidence in the form of direct comments about EY showing that many people do not like him personally.
Now let’s examine whether I am hostile to EY and his movement. First, a comment I made regarding Michael Anissimov’s 26th birthday. I wrote:
Happy birthday!
I’m also 26…I’ll need another 26 years to reach your level though :-)
I’ll donate to SIAI again as soon as I can.
And keep up this great blog.
Have fun!!!
Let’s examine my opinion about Eliezer Yudkowsky.
Here I suggest that EY is the most admirable person.
When I recommended reading Good and Real to a professional philosopher I wrote, “Don’t know of a review, a recommendation by Eliezer Yudkowsky as ‘great’ is more than enough for me right now.”
Here is a long discussion with some physicists in which I try to defend MWI by linking them to EY’s writings. Note: it is a backup, since I deleted my comments there after being angered by their hostile tone.
There is a lot more, which I’m too lazy to look up now. You can check for yourself; I’m promoting EY and the SIAI all the time, everywhere.
And I’m pretty disappointed that, rather than answering my questions or pointing me to some supporting background information, I mainly seem to be dealing with a bunch of puffed-up adherents.
...and I don’t want to be associated with the kind of unnecessarily antagonistic tone displayed here.
Have you seen me complaining about the antagonistic tone that EY is exhibiting in his comments? Here are the first two replies from people in academia whom I told about this post, both addressing EY:
Wow, that’s an incredibly arrogant put-down by Eliezer... SIAI won’t win many friends if he puts things like that...
and
...he seems to have lost his mind and written out of strong feelings. I disagree with him on most of these matters.
Have you seen me complaining about the antagonistic tone that EY is exhibiting in his comments?
I have been pointing that out as well—although I would describe his reactions more as “defensive” than “antagonistic”. Regardless, it seemed to be out of character for Eliezer. Do the two of you have some kind of history I’m not aware of?
Where is the evidence supporting the claims that it is not only possible, but that it will also turn out to be MUCH smarter than a human being, not just more rational or faster? Where is the evidence for an intelligence explosion? Is action justified simply because it might be physically possible?
Not even your master believes this.
Also, charisma is in the eye of the beholder ;)