I totally sympathize with and share the despair that many people feel about our governments’ inadequacy to make the right decisions on AI, or even far easier issues like covid-19.
What I don’t understand is why this isn’t paired with a greater enthusiasm for supporting governance innovation/experimentation, in the hopes of finding better institutional structures that COULD have a fighting chance to make good decisions about AI.
Obviously “fix governance” is a long-term project and AI might be a near-term problem. But I still think the idea of improving institutional decision-making could be a big help in scenarios where AI takes longer than expected or government reform happens quicker than expected. In EA, “improving institutional decision-making” has come to mean incremental attempts to influence existing institutions by, e.g., passing weaksauce “future generations” climate bills. What I think EA should be doing much more is supporting experiments with radical Dath-Ilan-style institutions (charter cities, liquid democracy, futarchy, etc.) in a decentralized, hits-based way, and hoping that the successful experiments spread and help improve governance (e.g., getting many countries to adopt prediction markets and then futarchy) in time to be helpful for AI.
I’ve written much more about this in my prize-winning entry to the Future of Life Institute’s “AI worldbuilding competition” (which prominently features a “warning shot” that helps catalyze action, in a near-future where governance has already been improved by partial adoption of Dath-Ilan-style institutions), and I’d be happy to talk about this more with interested folks: https://www.lesswrong.com/posts/qo2hqf2ha7rfgCdjY/a-bridge-to-dath-ilan-improved-governance-on-the-critical
Metaculus was created by EAs. Manifold Markets was also partly funded by EA money.
What EA money currently goes into “passing weaksauce ‘future generations’ climate bills”?
There is some ambient support for Phil-Tetlock-style forecasting stuff like Metaculus, and some ambient support for prediction markets, definitely. But the vision here tends to be limited, mostly focused on “let’s get better forecasting done on EA relevant questions/topics”, not “scale up prediction markets until they are the primary way that society answers important questions in many fields”.
There isn’t huge effort going into future generations bills from within EA (the most notable post is complaining about them, not advocating for them! https://forum.effectivealtruism.org/posts/TSZHvG7eGdmXCGhgS/concerns-with-the-wellbeing-of-future-generations-bill-1 ), although a lot of lefty- and climate-oriented EAs like them. But what I meant by that comment is just that EA has interpreted “improving institutional decision-making” to mean seeking influence within existing institutions, while I think there should be a second pillar of the cause area devoted to piloting totally new ideas in governance.
As an example of another idea that I think should get more EA attention and funding, charter cities have sometimes received an unduly chilly reception on the Forum (https://forum.effectivealtruism.org/posts/EpaSZWQkAy9apupoD/intervention-report-charter-cities), miscategorized as merely a neartermist economic-growth intervention, whereas charter city advocates are often most excited about their potential for experimenting with improved governance and fostering more “governance competition” among nations.
It was heartening to see the list of focus areas of the FTX Future Fund; they seem more interested in institution design and progress-studies-esque ideas than the rest of the EA ecosystem, which I think is great.