For what it’s worth, this is what the actual conflict looks like to me. I apologize if I sound bitter in the following.
LessWrong (and EA) has had a lot of people interested in AI over its history. A big chunk of these have been people with (1) short timelines and (2) high existential doom percentages, but they have by no means been the only people on LessWrong.
There were also people with longer timelines, or ~0.1% doom percentages, who nevertheless thought AI risk would be good to work on as a tail risk. There were also people who were intrigued by the intellectual challenge of understanding intelligence. There were also people who were more concerned about risks from multipolar situations. There were even people just interested in rationality. All of these together made up the “big tent” of LW.
Over the last few months, though, there has been a concerted push to get regulations on the board now, which seems to come from people with short timelines and high p-doom. This leads to the following frictions:
I think in many cases (not merely CAIP), they are pushing for things that would shred a lot of what the “big tent” coalition in LW cares about, to guard against dangers that many people in that coalition don’t think are dangers. When they talk about the bad side effects of their policies, it’s almost solely to explicitly downplay them. (I could point to other places where EAs have [imo, obviously falsely] downplayed the costs of their proposed regulations.) This feels like a betrayal of intellectual standards.
They’ve introduced terminology created for negative connotative load rather than denotative clarity and put it everywhere (“AI proliferation”), which pains me every time I read it. This feels like a betrayal of intellectual standards.
They’ve started writing a lot of “introductory material” that is explicitly politically tilted, and I think really bad for noobs, because it exists to sell a story rather than to describe the situation. E.g., I think Yud’s last meditation on LLMs is probably just harmful / confusing for a noob to ML to read; the Letter to Time obviously aims to persuade, not explain; the Rational Animations “What do authorities have to say on AI risk” is for sure tilted; and even other sources (can’t find the PDF at the moment) sell dubious “facts” like “capabilities are growing faster than our ability to control.” This also feels like a betrayal of intellectual standards.
I’m sorry I don’t have more specific examples of the above; I’m trying to complete this comment in a limited time.
I realize in many places I’m just complaining about people on the internet being wrong. But a fair chunk of the above is coming not merely from randos on the internet but from the heads of EA-funded and EA-sponsored or now LW-sponsored organizations. And this has basically made me think, “Nope, no one in these places actually—like actually—gives a shit about what I care about. They don’t even give a shit about rationality, except inasmuch as it serves their purposes. They’re not even going to investigate downsides to what they propose.”
And it looks to me like the short-timeline / high-p-doom group is collectively telling what was the big-tent coalition to “get with the program”—as, for instance, Zvi has chided Jack Clark for being insufficiently repressive. And well, that’s like… not going to fly with people who weren’t convinced by your arguments in the first place. They’re going to look around at each other, be like “did you hear that?”, and try to find other places that value what they value, that make arguments they think make sense, and that they feel are more intellectually honest.
It’s fun and intellectually engaging to be in a community where people disagree with each other. It sucks to be in a community where people are pushing for (what you think are) bad policies that you disagree with, and turning that community into a vehicle for pushing those policies. The disagreement loses the fun and savor.
I would like to be able to read political proposals from EA- or LW-funded institutions and not automatically anticipate that they will hide things from me. I would like to be able to read summaries of AI risk that advert to both the strengths and weaknesses of such arguments. I would like the things I post on LW not to feed a community whose chief legislative impact right now looks to be solely adding stupidly conceived regulations to the lawbooks.
I’m sorry I sound bitter. This is what I’m actually concerned about.
Edit: shoulda responded to your top level, whatever.
This is a good comment, and I think it describes some of what is going on. I also feel concerned about some of those dynamics, though I do have high p-doom (and ~13-year timelines, which I think is maybe on the longer side these days, so I’m not sure where I fall in your ontology).
I disagree a lot with the examples you list that you say are deceiving or wrong. Like, I do think capabilities are growing faster than our ability to control, and that feels like a fine summary of the situation (though also not like an amazing one).
I also personally don’t care much about “the big tent” coalition. I care about saying what I believe. I don’t want to speak on behalf of others, but I also really don’t want to downplay what I believe because other people think that will make them look bad.
Independently of my commitment to not join mutual reputation protection alliances, my sense is most actions that have been taken so far by people vaguely in the LW/EA space in the public sphere and the policy sphere have been quite harmful (and e.g. involved giving huge amounts of power and legitimacy to AI capability companies), so I don’t feel much responsibility to coordinate with or help the people who made that happen. I like many of those people, and think they are smart, and I like talking to them and sometimes learn things from them, but I don’t think I owe them much in terms of coordinating our public messaging on AI, or something like that (though I do owe them not speaking on their behalf, and I do think a lot of people could do much better to speak more on behalf of themselves and less on behalf of ‘the AI safety community’).
Did you swap your word ordering, or does this not belong on that list?