Despite the disagree votes, I actually think this is a reasonable suggestion, and indeed more people should be asking “why doesn’t X policy person go around trying to explain superintelligence risks?”
I think the disagree votes are probably coming from the vibe that this would be easy/simple to do. Short (in length) is not the same as easy.
My own answer is some mix of fear that I’m not the right person, some doubts about which points to emphasize, and some degree of “I may start working more deliberately on this soon in conjunction with someone else who will hopefully be able to address some of my blind spots.”
“why not just” is a standard phrase implying that what you’re proposing would be simple or would come naturally if you tried. Combined with the rest of the comment talking about straightforwardness and low word count, it does give off a somewhat combative vibe.
I agree with your suggestion, and it is good to hear that you didn’t intend to imply that it is simple, so it may be worth editing the original comment to prevent miscommunication for people who haven’t read this thread yet. For the time being, I’ve strong-agreed with your comment to save it from a negativity snowball effect.
Good comms for people who don’t share your background assumptions is often really hard!
That said I’d definitely encourage Akash and other people who understand both the AI safety arguments and policymakers to try to convey this well.
Maybe I’ll take a swing at this myself at some point soon; I suspect I don’t really know what policymakers’ cruxes are or how to speak their language, but at least I’ve lived in DC before.
Then this seems to be an entirely different problem?
At the very least, resolving substantial differences in background assumptions is going to take a lot more than a ‘short presentation’.
And it’s very likely that those in actual decision-making positions will be much less charitable than me, since their secretaries receive hundreds or thousands of such petitions every week.
I’m not suggesting that the short argument should resolve those background assumptions; I’m suggesting that a good argument for people who don’t share those assumptions roughly entails understanding their assumptions well enough to speak their language and craft a persuasive and true argument on their terms.
Why not just create this ‘short presentation’ yourself?
It probably wouldn’t even have half the word count of this comment you’ve already written, and should be much more persuasive than the whole thing.
I don’t want to pick on you specifically, but it’s hard to ignore the most direct and straightforward solution to the problems identified.
I don’t think my comment gave off the vibe that it is ‘easy/simple’ to do, just that it isn’t as much of a long shot as the alternative.
i.e. waiting for someone smarter, more competent, politically savvier, etc., to read your comment and then hoping for them to do it.
Which seems to have a very low probability of happening.