Just a quick question here...
While I agree with everything that Eliezer is saying (in the videos up to #5; I have not yet watched the remaining 25), I think that some of his comments could be taken wildly out of context if no one thinks about this ahead of time.
For instance, he rightly claims that this point in history is crunch time for our species (although I have questions about the specific consequences he believes might befall us if we fail), and for the intergalactic civilization to which we will eventually give birth.
Now, I completely understand what he is saying here.
But Joe Sixpack is going to think us a bunch of lunatics for worrying about things like AI (friendly or not) and other existential risks when he needs to pay lower taxes so that he can employ another four workers. Never mind that Joe Sixpack is about the most irrational man on earth; he votes for other, equally irrational men, who eventually get in the way of our goals by marginalizing us over statements about “the intergalactic civilization which we will eventually be responsible for.”
It just makes me angry that I might have to take time out to explain to some guy in a wife-beater standing behind his garage that we are trying to improve his condition, not build an army of Cylons that will one day revolt and “kill all humans” (to quote Bender).
People who want to quote me out of context already have plenty of ammunition. I say screw it.
Well… OK, then. I think my whole point was that you/we/the Singularity movement in general needs to be prepared for quotes eventually being taken out of context.
I have no problem with outlining eventual goals and their reasoning, even if it sounds insane (to an uneducated listener), but it would be a good idea to have the groundwork prepared for such an eventuality. I was hoping that such groundwork was on someone’s mind; is this the case?
I think you’re right to point out how crazy this seems to outsiders. This website reads like nonsense to most people.
That’s why FAQs and About pages and such should be written with newcomers in mind, and address the “Yes it sounds crazy, but here’s why it might not be” question that they will first ask.
I’m actually more worried about very high-status, reasonably intelligent individuals in positions of power, who will use out-of-context quotes to preserve their self-image as good and moral persons, refusing to re-evaluate priorities because that would violate their tribal identity and their rationale for why they have so far “deserved” all the high status that they have.
Imagine a Supreme Court judge; in fact, imagine the outlier closest to the ideal among all judges past and present, the best possible judge who could stumble into the position under currently existing social structures, trying to decide whether something related to the FAI project is legal.
Frankly, that scares the s**t out of me.
I am curious why the above comment was downvoted. I do not understand what in it was either irrational or potentially offensive to anyone.
I downvoted the comment for stating the overly obvious: not because it makes any particular mistake, but to signal that I don’t want many comments like this to appear. Correspondingly, it’s a weak signal, and typically one should wait several hours for rating disagreements on comments to settle; for example, your comment is likely to be voted up again if someone thinks it is the kind of comment that shouldn’t be discouraged.
You don’t want to see comments asking about the possible repercussions of certain forms of language?
I did do some editorializing at the end of the comment, but the majority of it was meant as a question about publicizing the need for Friendly AI in terms of our responsibility for a possible intergalactic civilization, since that framing would tend to portray us as lunatics even though there is a very good rationale behind it (Eliezer’s and others’ arguments about the potential of Friendly AI and the intelligence explosion that results from it are very sound, and the arguments for intelligence expanding outward from Earth are just as sound). My point was more along the lines of:
Couldn’t this be communicated in a way that will not sound insane to the Normals?
This is an obvious concern, and much more general and salient than this particular situation, so just stating it explicitly doesn’t seem to contribute anything.
Relevant links: Absurdity heuristic, Illusion of transparency.
I had thought that the implicature of that question was more than just a rhetorical statement of something I hoped would be obvious.
It was meant to be a way of politely asking about things such as:
Was this video meant just for LW, or do random people come across the videos on YouTube, or wherever else they might wind up linked?
How popular is this blog, and do I need to be more careful about mentioning such things because of lurkers?
Shouldn’t someone be worrying explicitly about public image (and if someone is, what are they doing about it)?
Etc.
Lastly, I read the link on the Absurdity Heuristic, yet I am not certain why it is relevant. Is the point the importance of the absurd in learning or discovery?
Maybe Searle’s a lurker? I think the pranks are the problem (ETA: nope), although I personally find them hilarious.
I think that the Searle comment was on a different thread, which shouldn’t have any bearing on this one.
And, looking back… I can see why someone may have objected.
Dur, I’m an idiot.