The lesswrong/EA communities’ ‘inside game’ strategy for AI safety is genuinely embarrassing to me as a longtime community member. Quokkas gonna quokka, but this is ridiculous.
I downvoted this comment because it’s genuinely incomprehensible.
Without endorsing anything, I can explain the comment.
The “inside strategy” refers to safety-conscious EAs working with (and in) AI capabilities companies like OpenAI; Scott Alexander has discussed this here. See the “Cooperate / Defect?” section.
The “Quokkas gonna quokka” is a reference to this classic tweet which accuses the rationalists of being infinitely trusting, like the quokka (an animal which has no natural predators on its island and will come up and hug you if you visit). Rationalists as quokkas is a bit of a meme; search “quokka” on this page, for example.
In other words, the argument is that rationalists cannot imagine the AI companies would lie to them, and that this is ridiculous.
I am just now learning the origin of the quokka meme. The first and only time I ever saw the reference was when someone posted this meme on Twitter, with no explanation.
I agree-voted given LGS’s explanation. No karma vote because, yup, that was pretty confusing.
As one of the people who advocated this strategy early on, I want to point out what I pointed out then: working inside AI capabilities labs probably grants you insight into what is going on, how things work, and what is (going to be) important that you wouldn’t necessarily get from the outside.
That said, it’s not clear to me that the community’s entire strategy is ‘inside game’. I know at least one community member who is hostile to OpenAI; those of us in Pause AI who protest outside OpenAI presumably count as being hostile to it; and I guess you yourself also count as a community member who is hostile to OpenAI? Is MIRI working with OpenAI or other corporate labs?