The current state of evidence IS NOT sufficient to scare people to the point of giving them nightmares
You appear to be suggesting that Eliezer should censor presentation of his thoughts on the subject so as to prevent people from having nightmares. Spot the irony! ;)
and ask them for most of their money.
Eliezer asks people for money. That hardly makes him unique. Neither he nor anyone else is obliged to get your permission before they ask for donations in support of their cause. It seems to me that you expect more from the SIAI than you do from other well-meaning organisations simply because there is actually a chance that this cause may make a significant long-term difference. As opposed to virtually all the rest—those we know are pointless!
What if someone came along making coherent arguments about some existential risk, say about how some sort of particle collider might destroy the universe? I would ask what the experts who are not associated with the person making the claims think. What would you think if he simply said, “do you have better data than me”? Or, “I have a bunch of good arguments”? If you say that some sort of particle collider is going to destroy the world with a probability of 75% if run, I’ll ask you how you came up with that estimate. I’ll ask you to provide not just consistent internal logic but some evidence-based prior.
I rather suspect that if all those demands were met you would go ahead and find new rhetorical demands to make.
So take my word for it, I know more than you do, no really I do, and SHUT UP. -- Eliezer Yudkowsky (Reference)
You have to list the primary propositions on which you base further argumentation, from which you draw conclusions, and which you use to come up with probability estimates stating the risks associated with those premises. You have to list these main principles so that anyone who comes across claims of existential risk and a plea for donations can get an overview. Then you have to provide the references, if you believe they give credence to the ideas, so that people see that nothing you say is made up, but is based on previous work and evidence by people who are not associated with your organisation.
That quote is out of context. While I do happen to hold Eliezer’s behavior in that context in contempt, the way the quote is presented here is misleading. It is not relevant to your replies and only relevant to the topic here by virtue of Eliezer’s character.
Is smarter than human intelligence possible in a sense comparable to the difference between chimps and humans?
This is a community devoted to refining the art of rationality. How is it rational to believe the Scary Idea without being able to tell if it is more than an idea?
Speak for yourself. I don’t have difficulty comprehending the premises, either the ones you have questioned here or the others required to make an adequate evaluation for the purpose of decision making.
Neither I nor Eliezer and the SIAI need to force understanding of the Scary Idea upon you for it to be rational for us to place credence on it. The same applies to other readers here. That is not to say that more work producing the documentation of the kind that you describe would not be desirable.
This comment will be downvoted, but I hope you people will actually explain yourselves and not just click ‘Vote down’; every bot can do that.
Now that I’ve slept I have read your comment again and I don’t see any justification for why it got upvoted even once. I never claimed that EY can’t ask for money; you are creating a straw man there. You also do not know what I expect from other organisations. Further, it is not fallacious to suspect that Yudkowsky bears some responsibility if people get nightmares from ideas that he would be able to resolve. If he really believes those things, it is of course his right to proclaim them. But the gist of my comment was meant to inquire about the foundations of those beliefs, and to state that they do not appear to me to be based on evidence, which makes it legally right but ethically irresponsible to tell people to worry to such an extent, or to refrain from telling them not to worry.
I rather suspect that if all those demands were met you would go ahead and find new rhetorical demands to make.
I just don’t know how to parse this. I mean what I asked for, and I do not ask for certainty here. I’m not doubting evolution or climate change. The problem is that even a randomly picked research paper likely bears more analysis, evidence and references than all of LW’s and the SIAI’s documents together regarding the risks posed by recursive self-improvement in artificial general intelligence.
That quote is out of context.
The quotes have been relevant because they showed that Yudkowsky clearly believes in his own intellectual and epistemic superiority, yet any corroborating evidence seems to be missing. Yes, there is this huge body of writing on rationality and some miscellaneous musings on artificial intelligence. But given how heavily the idea of risks from AGI is weighted by him, that body of work is just the cherry on top of marginal issues that do not support the conclusions.
Speak for yourself. I don’t have difficulty comprehending the premises, either the ones you have questioned here or the others required to make an adequate evaluation for the purpose of decision making.
I don’t have difficulty comprehending them either. I’m questioning the propositions, the conclusions drawn, and the further speculations based on those premises.
Neither I nor Eliezer and the SIAI need to force understanding of the Scary Idea upon you for it to be rational for us to place credence on it.
This is ridiculous. I never said you are forced to explain yourself. You are forced to explain yourself if you want people like me to take you seriously.
The quotes have been relevant because they showed that Yudkowsky clearly believes in his own intellectual and epistemic superiority, yet any corroborating evidence seems to be missing. Yes, there is this huge body of writing on rationality and some miscellaneous musings on artificial intelligence. [...]
Yudkowsky is definitely a clever fellow. He may not have fancy qualifications—and he is far from infallible—but he is pretty smart.
In the particular post in question, I am pretty sure he was being silly—which is a rather unfortunate time to be claiming superiority.
However, I don’t really know. The stunt created intrigue, mystery, the forbidden, added to the controversy. Overall, Yudkowsky is pretty good at marketing—and maybe this was a taste of it.
I wonder if his Harry Potter fan-fic is marketing—or else how he justifies it.
This is ridiculous. I never said you are forced to explain yourself. You are forced to explain yourself if you want people like me to take you seriously.
If you had restrained your claim in that way (i.e. not made the claim that I quoted in the above context) then I would have agreed with you.
I cannot account for every possible interpretation in what I write in a comment. It is reasonable not to infer oughts from questions. I said:
This is a community devoted to refining the art of rationality. How is it rational to believe the Scary Idea without being able to tell if it is more than an idea?
That is, if you can’t explain why you hold certain extreme beliefs, then how is it rational for me to believe that the credence you place on them is justified? The best response you came up with was telling me that you are able to understand the premises and that you don’t have to force this understanding onto me in order to believe in them yourself. That is a very poor argument, and that is what I called ridiculous. Even more so as people voted it up, which is just sad.
I thought this had been sufficiently clear from what I wrote before.
That is a very poor argument and that is what I called ridiculous. Even more so as people voted it up, which is just sad.
And it is at this point in the process that an accomplished rationalist says to himself, “I am confused”, and begins to learn.
My impression is that you and Wedrifid are talking past each other. You think that you both are arguing about whether uFAI is a serious existential risk. Wedrifid isn’t even concerned with that. He is concerned with “process questions”—with the analysis of the dialog that you two are conducting, rather than the issue of uFAI risk. And the reason he is being upvoted is because this forum, believe it or not, is a process question forum. It is about rationality, not about AI. Many people here really aren’t that concerned about whether Goertzel or Yudkowsky has a better understanding of uFAI risks. They just have a visceral dislike of rhetorical questions.
If you want to see the standard arguments in favor of the Scary Idea, follow Louie’s advice and read the papers at the SIAI web site. But if you find those arguments unsatisfactory (and I suspect you will) exercise some care if you come looking for a debate on the question here on Less Wrong. Because not everyone who engages with you here will be engaging you on the issue that you want to talk about.
Many people here really aren’t that concerned about whether Goertzel or Yudkowsky has a better understanding of uFAI risks.
I am somewhat more interested in understanding why Goertzel would say what he says about AI. Just saying ‘Goertzel’s brain doesn’t appear to work right’ isn’t interesting. But the Hansonian signalling motivations behind academic posturing are more so.
(Although to be more precise I don’t have a visceral dislike of rhetorical questions per se. It is the use of rhetoric to subvert reason that produces the visceral reaction, not the rhetoric(al question) itself.)
I was too lazy to write this up again; it’s copy-and-paste work, so don’t mind some inconsistencies. Regarding the quotes, I think that EY seriously believes what he says in the given quotes, otherwise I wouldn’t have posted them. I’m not even suggesting that it isn’t true; I actually allow for the possibility that he is that smart. But I want to know what I should do, and right now I don’t see any good arguments.
I’m a supporter and donor, and what I’m trying to do here is come up with the best possible arguments to undermine the credence of the SIAI. Almost nobody else is doing that, so I’m trying my best here. This isn’t damaging; this is helpful. Because once you become really popular, people like P.Z. Myers and other much more eloquent and popular people will pull you to pieces if you can’t even respond to my poor attempt at playing devil’s advocate.
I don’t have difficulty comprehending the premises, either the ones you have questioned here or the others required to make an adequate evaluation for the purpose of decision making.
I don’t even know where to start here, so I won’t. But I haven’t come across anything yet that I had trouble understanding.
I rather suspect that if all those demands were met you would go ahead and find new rhetorical demands to make.
See that woman with red hair? Well, the cleric told me that he believes she’s a witch. But he’ll update on the evidence if the fire doesn’t consume her. I said red hair is insufficient data to support that hypothesis or to take such extreme measures to test it. He told me that if he came up with more evidence, like sorcery, I’d just go ahead and find new rhetorical demands.
You appear to be suggesting that Eliezer should censor presentation of his thoughts on the subject so as to prevent people from having nightmares. Spot the irony! ;)
I’m not against free speech and religious freedom, but that also applies to my own thoughts on the subject. I believe he could do much more than censor certain ideas, namely show that they are bogus.
I believe he could do much more than censor certain ideas, namely show that they are bogus.
I’m not a big fan of Eliezer, but that complaint strikes me as completely unfair. There is far less censorship here than at a typical moderated blog. And EY does expend some effort showing that various ideas are bogus.
I’m not an insider, or even old-timer, but I have reason to believe that the one single forbidden subject here is censored not because it is believed to be valid or bogus, nor because it casts a bad light on EY and SIAI, but rather because discussing it does no good and may do some harm—something a bit like a ban on certain kinds of racist offensive speech, but different.
And in any case, the “forbidden idea” can always be discussed elsewhere, assuming you can even find anyone that can become interested in the idea elsewhere. The reach of EY’s “censorship” is very limited.
He told me that if he came up with more evidence, like sorcery, I’d just go ahead and find new rhetorical demands.
[See the context for the implied meaning if the excerpt isn’t clear.] I claimed approximately the same thing that you say yourself below.
I’m a supporter and donor and what I’m trying to do here is coming up with the best possible arguments to undermine the credence of the SIAI. Almost nobody else is doing that, so I’m trying my best here.
I’ve got nothing against the Devil, it’s the Advocacy that is mostly bullshit. Saying you are ‘Devil’s Advocate’ isn’t an excuse to use bad arguments. That would be an insult to the Devil!
I don’t even know where to start here, so I won’t. But I haven’t come across anything yet that I had trouble understanding.
You conveyed most of your argument via rhetorical questions. To the extent that they can be considered to be in good faith (and not just verbal tokens intended to influence) some of them only support the position you used them for if you genuinely do not understand them (implying that there is no answer). I believe I quoted an example in the context.
Making an assertion into a question does not give a license to say whatever you want with no risk of direct contradiction. (Even though that is how the tactic is used in practice.)
More concise answer: Then don’t ask stupid questions!
To the extent that they can be considered to be in good faith (and not just verbal tokens intended to influence) some of them only support the position you used them for if you genuinely do not understand them (implying that there is no answer).
I’m probably too tired to parse this right now. I believe there probably is an answer, but it is buried under hundreds of posts about marginal issues. As for all those writings on rationality, there is nothing in them I disagree with. Many people know about all this even outside of the LW community. But what is it that they don’t know that EY and the SIAI know? What I was trying to say is that if I have come across it, it was not convincing enough to take the risks as seriously as some people here obviously do.
It looks like I’m not alone. Goertzel, Hanson, Egan and lots of other people don’t see it either. So what are we missing? What is it that we haven’t read or understood?
Goertzel: I could and will list the errors I see in his arguments (if nobody there has done so first). For now I’ll just say his response to claim #2 seems to conflate humans and AIs. But unless I’ve missed something big, which certainly seems possible, he didn’t make his decision based on those arguments. They don’t seem good enough on their face to convince anyone. For example, I don’t think he could really believe that he and other researchers would unconsciously restrict the AI’s movement in the space of possible minds to the safe area(s), but if we reject that possibility some version of #4 seems to follow logically from 1 and 2.
Egan: I don’t know. What I’ve seen looks unimpressive, though certainly he has reason to doubt ‘transhumanist’ predictions for the near future. (SIAI instead seems to assume that if humans can produce AGI, then either we’ll do so eventually or we’ll die out first. Also, that we could produce an artificial X-maximizing intelligence more easily than we can produce artificial nearly-any-other-human-trait, which seems likely based on the tool I use to write this and the history of said tool.) Do you have a particular statement or implied statement of his in mind?
Hanson: maybe I shouldn’t point any of this out, but EY started by pursuing a Heinlein Hero quest to save the world through his own rationality. He then found himself compelled to reinvent democracy and regulation (albeit in a form closely tailored to the case at hand and without any strict logical implications for normal politics). His conservative/libertarian economist friend called these new views wrongheaded despite verbally agreeing with him that EY should act on those views. Said friend also posted a short essay about “heritage” that allowed him to paint those who disagreed with his particular libertarian vision as egg-headed elitists.
He wasn’t quoting Goertzel, Egan, and Hanson—though his formatting made it look like he was. He was commenting on your claim that these three “don’t see it”.
Sorry, I don’t know what quotes you mean. You can find a link to the “heritage” post in the wiki-compilation of the debate. Though perhaps you meant to reply to someone else?
Never mind, I just skimmed over it and thought you were quoting someone. If you delete your comment, I’ll delete this one. I’ll read your original comment again now.
Saying you are ‘Devil’s Advocate’ isn’t an excuse to use bad arguments.
I don’t think I used a bad argument; otherwise I wouldn’t have made it.
You conveyed most of your argument via rhetorical questions.
Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven’t taken a rhetoric course or anything like that. I honestly believe that what I have stated would be the opinion of a lot of educated people outside of this community if they came across the arguments on this site and by the SIAI. That is, data and empirical criticism are missing, given the extensive use of the idea of AI going FOOM to justify all kinds of further argumentation.
Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven’t taken a rhetoric course or anything like that.
“Rhetorical question” is just the name. Asking questions to try to convince people rather than telling them outright is something most people pick up by the time they are 8.
I honestly believe that what I have stated would be the opinion of a lot of educated people outside of this community if they came across the arguments on this site and by the SIAI
I think this is true.
That is
This isn’t. That is, the ‘that is’ doesn’t fit. What educated people will think really isn’t determined by things like the below. (People are stupid, the world is mad, etc.)
data and empirical criticism are missing, given the extensive use of the idea of AI going FOOM to justify all kinds of further argumentation.
I agree with this. Well, not the ‘empirical’ part (that’s hard to do without destroying the universe).
You conveyed most of your argument via rhetorical questions.
Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven’t taken a rhetoric course or anything like that.
I’m fighting against giants here, as someone who only finished elementary school. I believe it should be easy to refute my arguments or show me where I am wrong, and to point me to some documents I should read up on. But I just don’t see that happening. I talk to other smart people online as well; that way I was actually able to overcome religion. But seldom have there been people less persuasive than you when it comes to risks associated with artificial intelligence and the technological singularity. Yes, maybe I’m unable to comprehend it right now, I grant you that. Whatever the reason, I’m not convinced and will say so as long as it takes. Of course you don’t need to convince me, but I don’t need to stop questioning either.
Here is a very good comment by Ben Goertzel that pinpoints it:
This is what discussions with SIAI people on the Scary Idea almost always come down to!
The prototypical dialogue goes like this.
SIAI Guy: If you make a human-level AGI using OpenCog, without a provably Friendly design, it will almost surely kill us all.
Ben: Why?
SIAI Guy: The argument is really complex, but if you read Less Wrong you should understand it.
Ben: I read the Less Wrong blog posts. Isn’t there somewhere that the argument is presented formally and systematically?
SIAI Guy: No. It’s really complex, and nobody in-the-know had time to really spell it out like that.
But seldom have there been people less persuasive than you when it comes to risks associated with artificial intelligence and the technological singularity.
I don’t know if there is a persuasive argument about all these risks. The point of all this rationality-improving blogging is that when you debug your thinking, when you can follow long chains of reasoning and feel certain you haven’t made a mistake, when you’re free from motivated cognition, when you can look where the evidence points instead of finding evidence that points where you’re looking, then you can reason out the risks involved in recursively self-improving, self-modifying, goal-oriented optimizing processes.
Could I ask you to post the quotes as a separate post? They are priceless (and I’d love to be able to see what they applied to—so please include the references as well).
I should add: don’t get the wrong impression from those quotes. I still believe he might actually be that smart. He’s at least the smartest person I know of, going by what I’ve read. Except when it comes to public relations. You shouldn’t say such things unless you explain yourself sufficiently at the same time.
If I am ignorant about a phenomenon, that is not a fact about the phenomenon; it just means I am not Eliezer Yudkowsky. -- Eliezer Yudkowsky Facts
Here is some stuff EY uttered for real:
People don’t know these things until I explain them! (Reference)
You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t. (Reference)
So take my word for it, I know more than you do, no really I do, and SHUT UP. (Reference)
The first two, well the context is there, just click ‘Parent’. The third is from something that has now been deleted. I can’t go into detail but can send you a PM if you want.
You appear to be suggesting that Eliezer should censor presentation of his thoughts on the subject so as to prevent people from having nightmares. Spot the irony! ;)
Eliezer asks people for money. That hardly makes him unique. Neither he nor anyone else is obliged to get your permission before they ask for donations in support of their cause. It seems to me that you expect more from the SIAI than you do from other well meaning organisations simply because there is actually a chance that the cause may make a significant long term difference. As opposed to virtually all the rest—those we know are pointless!
I rather suspect that if all those demands were meant you would go ahead and find new rhetorical demands to make.
That quote is out of context. While I do happen to hold Eliezer’s behavior in that context in contempt, the way the quote is presented here is misleading. It is not relevant to your replies and only relevant to the topic here by virtue of Eliezer’s character.
Speak for yourself. I don’t have the difficulty comprehending the premises either the ones you have questions here or the others required to make an adequate evaluation for the purpose of decision making.
Neither I nor Eliezer and the SIAI need to force understanding of the Scary Idea upon you for it to be rational for us to place credence on it. The same applies to other readers here. That is not to say that more work producing the documentation of the kind that you describe would not be desirable.
This comment will be downvoted but I hope you people will actually explain yourself and not just click ‘Vote down’, every bot can do that.
Now that I’ve slept I read your comment again and I don’t see any justification for why it got upvoted even once. I never claimed that EY can’t ask for money, you are creating a straw man there. You also do not know what I do expect from other organisations. Further, it is not fallacious to suspect that Yudkowsky has some responsibility if people get nighmares from ideas that he would be able to resolve. If he really believes those things, it is of course his right to proclaim them. But the gist of my comment was meant to inquire about the foundations of those beliefs and stating that it does not appear to me that they are based on evidence which makes it legally right but ethically irresponsible to tell people to worry to such an extent or even not to tell them not to worry.
I just don’t know how to parse this. I mean what I asked for and I do not ask for certainty here. I’m not doubting evolution and climate change. The problem is that even a randomly picked research paper likely bears more analysis, evidence and references than all of LW and the SIAI’ documents together regarding risks posed by recursive self-improvement from artificial general intelligence.
The quotes have been relevant as they showed that Yudkowsky clearly believes in his intellectual and epistemic superiority, yet any corroborative evidence seems to be missing. Yes, there is this huge amount of writings on rationality and some miscellaneous musing on artificial intelligence. But given how the idea of risks from AGI is weighted by him, it is just the cherry on top of marginal issues that do not support the conclusions.
I don’t have a difficulty to comprehend them either. I’m questioning the propositions, the conclusions drawn and further speculations based on those premises.
This is ridiculous. I never said you are forced to explain yourself. You are forced to explain yourself if you want people like me to take you serious.
Yudkowsky is definitely a clever fellow. He may not have fancy qualifications—and he is far from infallible—but he is pretty smart.
In the particular post in question, I am pretty sure he was being silly—which is a rather unfortunate time to be claiming superiority.
However, I don’t really know. The stunt created intrigue, mystery, the forbidden, added to the controversy. Overall, Yudkowsky is pretty good at marketing—and maybe this was a taste of it.
I wonder if his Harry Potter fan-fic is marketing—or else how he justifies it.
If you had restrained your claim in that way (ie. not made the claim that I had quoted in the above context) then I would have agreed with you.
I cannot account for every possible interpretation in what I write in a comment. It is reasonable not to infer oughts from questions. I said:
That is, if you can’t explain yourself why you hold certain extreme beliefs then how is it rational for me to believe that the credence you place on it is justified? The best response you came up with was telling me that you are able to understand and that you don’t have to force this understanding onto me to believe into it yourself. That is a very poor argument and that is what I called ridiculous. Even more so as people voted it up, which is just sad.
I though this has been sufficiently clear from what I wrote before.
And it is at this point in the process that an accomplished rationalist says to himself, “I am confused”, and begins to learn.
My impression is that you and Wedrifid are talking past each other. You think that you both are arguing about whether uFAI is a serious existential risk. Wedrifid isn’t even concerned with that. He is concerned with “process questions”—with the analysis of the dialog that you two are conducting, rather than the issue of uFAI risk. And the reason he is being upvoted is because this forum, believe it or not, is a process question forum. It is about rationality, not about AI. Many people here really aren’t that concerned about whether Goertzel or Yudkowsky has a better understanding of uFAI risks. They just have a visceral dislike of rhetorical questions.
If you want to see the standard arguments in favor of the Scary Idea, follow Louie’s advice and read the papers at the SIAI web site. But if you find those arguments unsatisfactory (and I suspect you will) exercise some care if you come looking for a debate on the question here on Less Wrong. Because not everyone who engages with you here will be engaging you on the issue that you want to talk about.
I am somewhat more interested in understanding why Gortzel would say what he says about AI. Just saying ‘Gortzel’s brain doesn’t appear to work right’ isn’t interesting. But the Hansonian signalling motivations behind academic posturing is more so.
Well said.
(Although to be more precise I don’t have a visceral dislike of rhetorical questions per se. It is the use of rhetoric to subvert reason that produces the visceral reaction, not the rhetoric(al question) itself.)
I was too lazy to write this up again, it’s copy and paste work so don’t mind some inconsistencies. Regarding the quotes, I think that EY seriously believes what he says in the given quotes, otherwise I wouldn’t have posted them. I’m not even suggesting that it isn’t true, I actually allow for the possibility that he is that smart. But I want to know what I should do and right now I don’t see any good arguments.
I’m a supporter and donor and what I’m trying to do here is coming up with the best possible arguments to undermine the credence of the SIAI. Almost nobody else is doing that, so I’m trying my best here. This isn’t damaging, this is helpful. Because once you become really popular, people like P.Z. Myers and other much more eloquent and popular people will pull you to pieces if you can’t even respond to my poor attempt at being a devils advocate.
I don’t even know where to start here, so I won’t. But I haven’t come across anything yet that I had trouble understanding.
See that women with red hair? Well, the cleric told me that he believes that she’s a witch. But he’ll update on evidence if the fire didn’t consume her. I said red hair is insufficient data to support that hypothesis and take such extreme measures to test it. He told me that if he came up with more evidence like sorcery I’d just go ahead and find new rhetorical demands.
I’m not against free speech and religious freedom but that also applies for my own thoughts on the subject. I believe he could do much more than censoring certain ideas, namely show that they are bogus.
I’m not a big fan of Eliezer, but that complaint strikes me as completely unfair. There is far less censorship here than at a typical moderated blog. And EY does expend some effort showing that various ideas are bogus.
I’m not an insider, or even old-timer, but I have reason to believe that the one single forbidden subject here is censored not because it is believed to be valid or bogus, nor because it casts a bad light on EY and SIAI, but rather because discussing it does no good and may do some harm—something a bit like a ban on certain kinds of racist offensive speech, but different.
And in any case, the “forbidden idea” can always be discussed elsewhere, assuming you can even find anyone that can become interested in the idea elsewhere. The reach of EY’s “censorship” is very limited.
[See context for implied meaning if the excerpt isn’t clear]. I claimed approximately the same thing that you say yourself below.
I’ve got nothing against the Devil, it’s the Advocacy that is mostly bullshit. Saying you are ‘Devil’s Advocate’ isn’t an excuse to use bad arguments. That would be an insult to the Devil!
You conveyed most of your argument via rhetorical questions. To the extent that they can be considered to be in good faith (and not just verbal tokens intended to influence) some of them only support the position you used them for if you genuinely do not understand them (implying that there is no answer). I believe I quoted an example in the context.
Making an assertion into a question does not give a license to say whatever you want with no risk of direct contradiction. (Even though that is how the tactic is used in practice.)
More concise answer: Then don’t ask stupid questions!
I’m probably too tired to parse this right now. I believe there probably is an answer, but it is buried under hundreds of posts about marginal issues. As for all those writings on rationality, there is nothing in them I disagree with. Many people know about all this even outside of the LW community. But what is it that they don’t know that EY and the SIAI know? What I was trying to say is that if I have come across it, then it was not convincing enough to take it as seriously as some people here obviously do.
It looks like I’m not alone. Goertzel, Hanson, Egan and lots of other people don’t see it either. So what are we missing? What is it that we haven’t read or understood?
Goertzel: I could and will list the errors I see in his arguments (if nobody there has done so first). For now I’ll just say his response to claim #2 seems to conflate humans and AIs. But unless I’ve missed something big, which certainly seems possible, he didn’t make his decision based on those arguments. They don’t seem good enough on their face to convince anyone. For example, I don’t think he could really believe that he and other researchers would unconsciously restrict the AI’s movement in the space of possible minds to the safe area(s), but if we reject that possibility some version of #4 seems to follow logically from 1 and 2.
Egan: don’t know. What I’ve seen looks unimpressive, though he certainly has reason to doubt ‘transhumanist’ predictions for the near future. (SIAI instead seems to assume that if humans can produce AGI, then either we’ll do so eventually or we’ll die out first. Also, that we could produce artificial X-maximizing intelligence more easily than we can produce artificial nearly-any-other-human-trait, which seems likely based on the tool I use to write this and the history of said tool.) Do you have a particular statement or implied statement of his in mind?
Hanson: maybe I shouldn’t point any of this out, but EY started by pursuing a Heinlein Hero quest to save the world through his own rationality. He then found himself compelled to reinvent democracy and regulation (albeit in a form closely tailored to the case at hand and without any strict logical implications for normal politics). His conservative/libertarian economist friend called these new views wrongheaded despite verbally agreeing with him that EY should act on those views. Said friend also posted a short essay about “heritage” that allowed him to paint those who disagreed with his particular libertarian vision as egg-headed elitists.
Where did you get those quotes? References?
He wasn’t quoting Goertzel, Egan, and Hanson—though his formatting made it look like he was. He was commenting on your claim that these three “don’t see it”.
Whoops, I’m sorry, never mind.
Sorry, I don’t know what quotes you mean. You can find a link to the “heritage” post in the wiki-compilation of the debate. Though perhaps you meant to reply to someone else?
Never mind, I just skimmed over it and thought you were quoting someone. If you delete your comment I’ll delete this one. I’ll read your original comment again now.
I don’t think I used a bad argument; otherwise I wouldn’t have made it.
Wow, you overestimate my education and maybe my intelligence here. I have no formal education beyond primary school. I haven’t taken a rhetoric course or anything like that. I honestly believe that what I have stated would be the opinion of a lot of educated people outside of this community if they came across the arguments on this site and by the SIAI. That is, data and empirical evidence are missing, given the extensive use of the idea of AI going FOOM to justify all kinds of further argumentation.
“Rhetorical question” is just the name. Asking questions to try to convince people rather than telling them outright is something most people pick up by the time they are 8.
I think this is true.
This isn’t. That is, the ‘that is’ doesn’t fit. What educated people will think really isn’t determined by things like the below. (People are stupid, the world is mad, etc.)
I agree with this. Well, not the ‘empirical’ part (that’s hard to do without destroying the universe.)
Indeed, what an irony...
I’m fighting against giants here, as someone who only mastered elementary school. I believe it should be easy to refute my arguments or show me where I am wrong, and point me to some documents I should read up on. But I just don’t see that happening. I talk to other smart people online as well; that is how I was actually able to overcome religion. But seldom have there been people less persuasive than you when it comes to risks associated with artificial intelligence and the technological singularity. Yes, maybe I’m unable to comprehend it right now, I grant you that. Whatever the reason, I’m not convinced and will say so as long as it takes. Of course you don’t need to convince me, but I don’t need to stop questioning either.
Here is a very good comment by Ben Goertzel that pinpoints it:
I don’t know if there is a persuasive argument about all these risks. The point of all this rationality-improving blogging is that when you debug your thinking, when you can follow long chains of reasoning and feel certain you haven’t made a mistake, when you’re free from motivated cognition (when you can look where the evidence points instead of finding evidence that points where you’re looking!), then you can reason out the risks involved in recursively self-improving, self-modifying, goal-oriented optimizing processes.
My argument is fairly simple -
If humans found it sufficiently useful to wipe chimpanzees off the face of the earth, we could and would do so.
The level of AI I’m discussing is at least as much smarter than us as we are than chimpanzees.
Updated it without the quotes now so people don’t get unnecessarily distracted.
Could I ask you to post the quotes as a separate post? They are priceless (and I’d love to be able to see what they applied to—so please include the references as well).
I should add, don’t get the wrong impression from those quotes. I still believe he might actually be that smart. He’s the smartest person I know of, judging by what I’ve read. Except when it comes to public relations. You shouldn’t say those things if you do not explain yourself sufficiently at the same time.
Here is some stuff EY actually said:
People don’t know these things until I explain them! (Reference)
You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t. (Reference)
So take my word for it, I know more than you do, no really I do, and SHUT UP. (Reference)
The first two, well the context is there, just click ‘Parent’. The third is from something that has now been deleted. I can’t go into detail but can send you a PM if you want.
Now I’m curious what they were, and where they came from. Distract me, but in a sub-thread.