I think the longtermist/rationalist EA memes/ecosystem were very likely causally responsible for some of the worst capabilities externalities of the last decade.
If you’re thinking of the work I’m thinking of, I think about zero of it came from people aiming at safety work and producing externalities, and instead about all of it was people in the community directly working on capabilities or capabilities-adjacent projects, with one justification or another.
Yeah, most of the things I’m thinking of didn’t look like technical safety stuff, more like Demis and Shane being concerned about safety → deciding to found DeepMind, Eliezer introducing Demis and Shane to Peter Thiel (their first funder), etc.
In terms of technical safety stuff, sign confusion around RLHF is probably the strongest candidate. I’m also a bit worried about capabilities externalities of Constitutional AI, for similar reasons. There’s also the general vibes issue of safety work (including quite technical work) and communications either making AI capabilities seem more cool or seem less evil (depending on your framing).
EDIT to add: I feel like in Silicon Valley (and maybe elsewhere but I’m most familiar with Silicon Valley) there’s a certain vibe of coolness being more important than goodness, which feels childish to me but afaict seems like a real thing. This Altman tweet seems emblematic of that mindset.
I feel like in Silicon Valley (and maybe elsewhere but I’m most familiar with Silicon Valley) there’s a certain vibe of coolness being more important than goodness
Yeah, I definitely think this is true to some extent. “First get impact, then worry about the sign later” and all.