You don’t want to see comments asking about the possible repercussions of certain forms of language?
I did do some editorializing at the end of the comment, but the majority of the comment was meant as a question about publicizing the need for friendly AI on the grounds that we must be responsible stewards of a possible intergalactic civilization. Such talk tends to portray us as lunatics, even though there is a very good rationale behind it (Eliezer's and others' arguments about the potential of friendly AI, and the intelligence explosion that results from it, are very sound, and the arguments for intelligence expanding outward from Earth as we make our way into space are just as sound). My point was more along the lines of:
Couldn’t this be communicated in a way that will not sound insane to the Normals?
This is an obvious concern, and much more general and salient than this particular situation, so just stating it explicitly doesn’t seem to contribute anything.
Relevant links: Absurdity heuristic, Illusion of transparency.
I had thought that the implicature in that question was more than a rhetorical statement of something I hoped would be obvious. It was meant as a way of politely asking about things such as:
Was this video meant just for LW, or do random people come across the videos on YouTube, or wherever else they might wind up linked?
How popular is this blog, and do I need to be more careful about mentioning such things because of lurkers?
Shouldn't someone be worrying explicitly about public image (and if someone is, what are they doing about it)?
Etc.
Lastly, I read the link on the Absurdity Heuristic, yet I am not certain why it is relevant. Is the point the importance of the absurd in learning or discovery?