This post is spot-on about basically everything it covers, and I’m really, really glad to see that someone like you thought of at least half of this on your own, discovering it independently. It’s really good news that we have thinkers like that here.
The one thing that is not spot-on is the claim that “politics probably aren’t as hard as you think”. Politics are much harder, more hostile, less predictable, and more malevolent than they appear. We didn’t have to be born in a timeline where AI alignment was conceived of at all; we could just as easily have been born in one where people built AI but the concept of the Control Problem never occurred to anyone. So I think we’re very fortunate that the concept of AI alignment exists in the first place, and it would be an unfortunate waste if the whole enchilada were eviscerated by the political scene.
AI governance, and governance in general, is immensely complicated and full of self-interested and outright vicious people. Many of them are also extremely smart, competent, and/or paranoid about others encroaching on the little empire that they spent their entire lives building for themselves, brick by brick — people like J. Edgar Hoover. Any really good idea in governance is probably full of these random, unforeseeable “aha” moments that completely invalidate the entire idea, because of some random factor that most smart people couldn’t reasonably have anticipated.
Please don’t be discouraged, this is an uncharacteristically high-quality post on AI governance and I look forward to seeing more from you in the future. I’ve learned a lot from it and many others have too.
I recommend contributing to the $20k AI alignment rhetoric and one-liner contest; it needs more entries from competent people like you who know what they’re talking about. It was forced off the front page by a bunch of naive people who know nothing about the situation with governance, so very few people are even aware the contest exists. If you (or anyone, really) put in 30 minutes thinking of a reasonably clever quote (or just finding one) that could convince policymakers that AI alignment is a big deal, you will probably end up with $500 in your pocket; that’s how badly neglected the contest is right now.
Thanks! FWIW, part of the point here is that “AI Governance” includes (but is not limited to) “real politics”, which I assume is as bad as or worse than everyone here does. Hence the examples section mostly being NGOs.
And thanks for letting me know about the contest — is there a limit on the number of submissions? (EDIT: there appears to be no limit beyond whatever LW already uses for spam filtering, of course.) I can write a lot of quotes for $500.
It’s good that you’re willing to make a lot of submissions for $500, because the way things are going, you’ll probably get $500 per submission for several submissions.