Do you have any thoughts on what this means actionably? To me it seems like influencing such conversations could be fairly intractable, but maybe one could host forums and events for this, given the right network?
I think it’s a good point, and I’m wondering what it looks like actionably. I can see it working for someone with the right contacts; for people who don’t have those, is the message to go build them, or what are your thoughts there?
This is a great question. I think what we can do is spread good logic about AGI risks. That is tricky. Outside of the LW audience, getting the emotional resonance right is more important than being logically correct. And that’s a whole different skill.
My impression is that Yudkowsky has harmed public epistemics in his podcast appearances by saying things forcefully and with rather poor spoken communication skills for novice audiences. Leahy is better, but may also be making things worse by occasionally losing his cool and coming off as a bit of an asshole. People then associate the whole idea of AI safety with “these guys who talk down to us and seem mean and angry”. Then motivated reasoning kicks in, and they become oriented toward trying to prove them wrong instead of discovering the truth.
That doesn’t mean logical arguments don’t count with normies; they do. But the logic comes into play a lot more when emotional processing hasn’t already tagged you as dangerous or an enemy.
So my best guess at the right approach is just repeating the basic arguments, “something smarter will treat us the way we treat animals, by default” and “surely we all want the things we love now to survive AGI”, while also being studiously nice.
I struggle to do this myself; it’s super frustrating to repeatedly be in conversations where people seem to be obstinately refusing to think about some pretty basic and obvious logic.
Maybe the logic will win out even if we’re not able to be nice about it, but I’m quite sure it will win out faster if we can be.
Repetition counts. Any worriers with any access to public platforms should probably be speaking publicly about this—as long as they’re trying hard to be nice.
Edit: to bring it back to this particular type of scenario: the moment someone says “let it rip, I don’t care if the winners aren’t human!” is the most important time to be nice and get curious, rather than pointing out how stupid this take is. Just asking questions will lead most people to realize that they actually do value human-like consciousness and pleasant experiences, not just progress and competition in a Disneyland without children (ref at the end).
My impression is that Yudkowsky has harmed public epistemics in his podcast appearances by saying things forcefully and with rather poor spoken communication skills for novice audiences.
I recommend reading the YouTube comments on his recorded podcasts, rather than e.g. Twitter commentary from people with a pre-existing adversarial stance toward him (or toward AI risk questions writ large).
Good suggestion, thanks. I’ll do that.
I’m not commenting on those who are obviously just grinding an axe; I’m commenting on the stance toward “doomers” among otherwise reasonable people. From my limited survey, the brand of x-risk concern isn’t looking good, and that isn’t mostly a result of the amazing rhetorical skills of the e/acc community ;)