That’s precisely the point I’m trying to make. We do lose a lot by ignoring correct contrarians. I think academia may be losing a lot of knowledge by filtering crudely. If indeed there is no mainstream academic position, pro or con, on Friendly AI, I think academia is missing something potentially important.
On the other hand, institutions need some kind of a filter to avoid being swamped by crackpots. A rational university or journal or other institution, trying to avoid bias, should probably assign more points to “promiscuous investigators,” people with respected mainstream work who currently spend time analyzing contrarian claims, whether to confirm or debunk. (I think Robin Hanson is a “promiscuous investigator.”)
I hereby nominate this for understatement of the millennium:
If true, it will eventually be accepted by academia. Ironically enough, by then there will be no academia in the present sense.
Does a uFAI killing all of our scientists count as them “accepting” the idea? Rhetorical question.
My social intuitions tell me it is generally a bad idea to use words like ‘kill’ (as opposed to, say, ‘overwrite’, ‘fatally reorganize’, or ‘dismantle for spare part(icle)s’) when describing scenarios like that, since they play into some people’s misguided intuitions about anthropomorphic Skynet dystopias. On Less Wrong it matters less, but if one were trying to convince, say, a non-singularitarian transhumanist that singularitarian ideas were important, then subtle language cues like that could have big effects on your apparent theoretical leaning and on the outcome of the conversation. (This is more of a general heuristic than a critique of your comment, Roko.)
Good point, but one of the possibilities is that the UFAI takes long enough to become completely secure in its power that it actually does try to eliminate people as a threat or a slowing factor. Since in this scenario, unlike in the “take apart for raw materials” scenario, people dying is the UFAI’s intended outcome rather than just a side effect, “kill” seems an accurate word.
Yes, that’s true. I would avoid ‘overwrite’ or ‘fatally reorganize’ because people might not get the idea. Better to go with “rip you apart and re-use your constituent atoms for something else”.
I like to use the word “eat”; it’s short, evocative, and basically accurate. We are edible.
I want a uFAI lolcat that says “I can has ur constituent atomz?” and maybe a “nom nom nom” next to an Earth-sized paper clip.
I’d never thought about that, but it sounds very likely, and deserves to be pointed out in more than just this comment.
I don’t expect the post-Singularity world to be pretty much an extension of today, with scientists in postlabs and postuniversities and waitresses in postpubs.
A childish assumption.
Come on, where else could I possibly get my postbeer?