What stronger points are you referring to? It seems to me XiXiDu’s post has only 2 points, both of which Eliezer addressed:
“Given my current educational background and knowledge, I cannot differentiate between LW being a consistent internal logic, i.e. imagination or fiction, and something which is sufficiently based on empirical criticism to provide a firm substantiation of the strong arguments for action that are proclaimed on this site.”
His smart friends/favorite SF writers/other AI researchers/other Bayesians don’t support SIAI.
My point is that your evidence has to stand up to whatever estimations you come up with. My point is the lack of transparency in your decision making regarding the possibility of danger posed by superhuman AI. My point is that any form of external peer review is missing, and that I therefore either have to believe you or learn enough to judge all of your claims myself, after reading hundreds of posts and thousands of documents to find the pieces of evidence hidden within them. My point is that competition is necessary; the SIAI should not be the only organization working on the relevant problems. There are many other points you seem to be missing entirely.
“Is the SIAI evidence-based, or merely following a certain philosophy?”
Oh, is that the substantive point? How the heck was I supposed to know you were singling that out?
That one’s easy: We’re doing complex multi-step extrapolations argued to be from inductive generalizations themselves supported by the evidence, which can’t be expected to come with experimental confirmation of the “Yes, we built an unFriendly AI and it went foom and destroyed the world” sort. This sort of thing is dangerous, but a lot of our predictions are really antipredictions and so the negations of the claims are even more questionable once you examine them.
If you have nothing valuable to say, why not refrain from commenting at all? Otherwise you could simply ask me what I meant, if something isn’t clear. But these empty statements coming from you recently make me question whether you are the person I thought you were. You cannot even guess what I am trying to ask here? Oh come on...
I was inquiring about the supporting evidence at the origin of your complex multi-step extrapolations argued to be from inductive generalizations. If there isn’t any, what difference is there between writing fiction and making complex multi-step extrapolations argued to be from inductive generalizations?
What you say here makes sense, sorry for not being more clear earlier. See my list of questions in my response to another one of your comments.
How was Eliezer supposed to answer that, given that XiXiDu stated that he didn’t have enough background knowledge to evaluate what’s already on LW?