Yeah—I love AI_WAIFU’s comment, but I love the OP too.
To some extent I think these are just different strategies that will work better for different people; both have failure modes, and Eliezer is trying to guard against the failure modes of ‘Fuck That Noise’ (e.g., losing sight of reality), while AI_WAIFU is trying to guard against the failure modes of ‘Try To Die With More Dignity’ (e.g., losing motivation).
My general recommendation to people would be to try different framings / attitudes out and use the ones that empirically work for them personally, rather than trying to have the same lens as everyone else. I’m generally a skeptic of advice, because I think people vary a lot; so I endorse the meta-advice that you should be very picky about which advice you accept, and keep in mind that you’re the world’s leading expert on yourself. (Or at least, you’re in the best position to be that thing.)
Cf. ‘Detach the Grim-o-Meter’ versus ‘Try to Feel the Emotions that Match Reality’. Both are good advice in some contexts, for some people; but I think there’s some risk from taking either strategy too far, especially if you aren’t aware of the other strategy as a viable option.
Please correct me if I am wrong, but a huge difference between Eliezer’s post and AI_WAIFU’s comment is that Eliezer’s post is informed by conversations with dozens of people about the problem.
I interpreted AI_WAIFU as pushing back against a psychological claim (‘X is the best attitude for mental clarity, motivation, etc.’), not as pushing back against an AI-related claim like P(doom). Are you interpreting them as disagreeing about P(doom)? (If not, then I don’t understand your comment.)
If (counterfactually) they had been arguing about P(doom), I’d say: I don’t know AI_WAIFU’s level of background. I have a very high opinion of Eliezer’s thinking about AI (though keep in mind that I’m a co-worker of his), but EY is still some guy who can be wrong about things, and I’m interested to hear counter-arguments against things like P(doom). AGI forecasting and alignment are messy, pre-paradigmatic fields, so I think it’s easier for field founders and authorities to get stuff wrong than it would be in a normal scientific field.
The specific claim that Eliezer’s P(doom) is “informed by conversations with dozens of people about the problem” (if that’s what you were claiming) seems off to me. Like, it may be technically true under some interpretation, but (a) I think of Eliezer’s views as primarily based on his own models, (b) I’d tentatively guess those models are much more based on things like ‘reading textbooks’ and ‘thinking things through himself’ than on ‘insights gleaned during back-and-forth discussions with other people’, and (c) most people working full-time on AI alignment have far lower P(doom) than Eliezer.
Sorry for the lack of clarity. I share Eliezer’s pessimism about the global situation (caused by rapid progress in AI). All I meant is that I see signs in his writings that over the last 15 years Eliezer has spent many hours trying to help at least a dozen different people become effective at improving the horrible situation we are currently in. That work experience makes me pay much greater attention to him on the subject at hand than to someone I know nothing about.
Ah, I see. I think Eliezer has lots of relevant experience and good insights, but I still wouldn’t currently recommend the ‘Death with Dignity’ framing to everyone doing good longtermist work, because I just expect different people’s minds to work very differently.
Assuming this is correct (certainly it is true of Eliezer, though I don’t know AI_WAIFU’s background and perhaps they have had similar conversations), does it matter? AI_WAIFU’s point is that we should continue trying as a matter of our terminal values; that’s not something that can be wrong due to the problem being difficult.
I agree, but do not perceive Eliezer as having stopped trying or as advising others to stop trying, er, except of course for the last section of this post (“Q6: . . . All of this is just an April Fool’s joke, right?”) but that is IMHO addressed to a small fraction of his audience.
I don’t want to speak for him (especially when he’s free to clarify himself far better than we could do for him!), but dying with dignity conveys an attitude that might be incompatible with actually winning. Maybe not; sometimes abandoning the constraint that you have to see a path to victory makes it easier to do the best you can. But it feels concerning on an instinctive level.
Some people can think there’s next to no chance and yet go out swinging. I plan to, if I reach the point of feeling hopeless.
In my experience, most people cannot.