I wonder if Yud would be willing to write a rat fic aimed at Chinese audiences? He seems to read a bunch of xianxia, so he’s probably absorbed some of the memes of China’s male youth. Maybe a fanfic of “Oh my god! Earthlings are insane!” would be a good choice, based on my impression of the novel’s themes and what its readership is like.
EDIT: I think the rationality angle is important for making progress on AI safety, but I’m not sure which parts are necessary. Also, what part of HPMOR would make it especially bad for Chinese audiences? The libertarian sympathies? The transhumanism doesn’t seem like it would be that harmful, given the popularity of novels like Embers Ad Infinitum (another novel Yud could write a fanfic of).
The most common response I got when I talked to coworkers about AI risk wasn’t denial or an attempt to minimize the problem. It was generally something like “That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship.” And then a shrug before they went back to their tasks. I don’t see how rationality helps with anything. We know what the problem is, and just want to be paid to solve it.
I can’t really explain why HPMOR is insanely cringe in a Chinese context to someone without the cultural background. It’s not something you can argue people out of. Just trust me on this one.
Is it “insanely cringe” for different reasons than it is “insanely cringe” for English audiences? I suspect most Americans, if exposed to it, would describe it as cringe. There is much about it that is cringe, and I say this with some love.
“That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship.”
The Chinese stated preferences here closely track Western revealed preferences. Americans are more likely to dismiss AI risk post hoc in order to justify making more money, whereas Chinese people seem less likely to sacrifice their epistemic integrity in order to feel like a Good Guy. Hire people, and pay them money!
The most common response I got when I talked to coworkers about AI risk wasn’t denial or an attempt to minimize the problem. It was generally something like “That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship.”
If that’s true, I would assume that the people who work on creating the AI guidelines understand the problem. This would in turn suggest that they take reasonable steps to address it.
Is your model that the people writing the guidelines would be well-intentioned but lack the political power to actually enforce useful guidelines?
The most common response I got when I talked to coworkers about AI risk wasn’t denial or an attempt to minimize the problem. It was generally something like “That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship.” And then a shrug before they went back to their tasks. I don’t see how rationality helps with anything. We know what the problem is, and just want to be paid to solve it.
Yeah, but a lot of people can say that without producing good work. There have been a number of complaints about field-building attempts bringing in low-quality people who don’t make progress on the core problem. Now, I am NOT saying you need to read the Sequences and be a member of our cult to make progress. But the models in there do seem important for seeing what is hard about alignment. Many smart people have these models already, drawing from the same sources Yudkowsky did. But many smart people don’t have these models and bounce off of alignment.
I can’t really explain why HPMOR is insanely cringe in a Chinese context to someone without the cultural background. It’s not something you can argue people out of. Just trust me on this one.
I can sort of trust you that HPMOR is insanely cringe. I’m still not sure if a variant wouldn’t work, because I don’t have your model. Maybe I’ll talk to some Chinese friends about this and get their opinion. You may be living in a bubble and not realize it. It happens to everyone at some point, and China must have a lot of bubbles.
I can sort of trust you that HPMOR is insanely cringe.
The private sentiment of folks who read through all of it would probably be some degree of ‘cringe’ too.
I couldn’t even make it halfway, even though I’m fascinated by imaginative fanfiction: it becomes too much of a childish power fantasy for me to ignore, and I couldn’t suspend my disbelief while reading.
Yeah, you’ve got a point there. And yet, HPMOR is popular. Lots of people love it, and got into LW and the rat community that way. You yourself may not have, but that’s evidence in favour of high variance. So I remain unsure whether something like HPMOR could work in China too. Why assume there’d be less variance in response there?
It’s about trade-offs. HPMOR/an equally cringey analogue will attract a certain sector of weird people into the community who can then be redirected towards A.I. stuff — but it will repel a majority of novices because it “taints” the A.I. stuff with cringiness by association.
This is a reasonable trade-off if:
1. the kind of weird people who’ll get into HPMOR are also the kind of weird people who’d be useful to A.I. safety;
2. the normies were already likely to dismiss the A.I. stuff with or without the added load of cringe.
In the West, 1. is true because there’s a strong association between techy people and niche fandom, so even though weird nerds are a minority, they might represent a substantial fraction of the people you want to reach. And 2. is kind of true for a related reason: “nerds” are viewed as generally cringe even if they don’t specifically talk about HP fanfiction; it’s already assumed that someone who thinks about computers all day is probably the kind of cringe person who’d be big into a semi-self-insert HP fanfiction.
But in China, from @Lao Mein’s testimony, 1. is definitely not true (a lot of the people we want to reach would be on Team “this sounds weird and cringe, I’m not touching it”) and 2. is possibly not true (if computer experts ≠ fandom nerds in Chinese popular consciousness, it may be easier to get broad audiences to listen to a non-nerdy computer expert talking about A.I.).
Hire people, and pay them money!
Pay Terry Tao his 10 million dollars!
HPMOR is weird and attracts weird people.
Yudkowsky is not the right person to start this stuff in China.