You have a very strange understanding of politics, wherein you have laymen who want to advance their interests at the expense of other people, who realize that would be unpopular if they stated exactly what they were doing, and then as a consequence lie on Twitter about needing to do it for a different reason. This is insufficiently cynical. People almost always genuinely think their political platform is the Right and Just one. It’s just that people also have strong instincts to align themselves with whatever tribe is saying they’re the Good Guys, and that giving the Good Guys money is going to be great for the economy, and that the Bad Guys deserve to get their money taken away from them, the greedy pricks.
So the tradeoff you mention in your problem statement is simply too abstract for these instincts to kick in, even if it were real. The instincts do kick in for Yann LeCun or Marc Andreessen, because their hindbrains don’t have to spend any extra time to see the obvious point that they lose near-term prestige if AI research is considered evil. But I doubt either of them is losing sleep over this because they’re making a calculation about how likely it is they’ll be immortal; that’s the sort of high-concept tradeoff that people simply don’t organize their entire politics around, and which they instead just end up finding cartoon-villainous.
You are right that there are three possible avenues for approaching this: (1) people have certain goals and lie about them to advance their interests; (2) people have certain goals, and they self-delude about their true content so as to advance their interests; (3) people don’t have any goals at all, and are simply executing heuristics that proved useful in-distribution (the “Reward is not an optimisation target” approach). I omitted the last one from the post. But I think my observation that (2) has a non-zero chance of explaining some of the variance in opinions still stands. And this is even more true for people engaged in the AI safety debate, such as members of Pause AI, e/acc and (to a lesser extent) academics doing research on AI.
Even if (3) has more explanatory power, it doesn’t really defeat the central point of the post, which is the ought question (which is a bit of an evasive answer, I admit).
My limited impression of “e/accs”, and you may think this is unfair, is that most of them seem not to have any gears-level model of the problem at all, and have instead claimed the mantle because they decided amongst themselves that it’s the attire of futurism and libertarianism. George Hotz will show up to the Eliezer/Leahy debates with a giant American flag in the background and blurt out stuff like “Somalia is my preferred country”, not because he’s actually going to live there, but because he thinks that sounds based and because, for him, the point of the discussion is to wave a jersey in the air. I don’t think Hotz has made the expected value calculation you mention, because I don’t think he’s even really gotten to the point of developing an inside view in the first place.
In other words, they are based-tuned stochastic parrots? Seems harsh, but the Hotz-Yudkowsky ‘debate’ can only be explained by something in the vicinity of this hypothesis, AFAICT (I haven’t seen other explanations).