Thank you for taking the time to read and critique this idea. I think this is very important, and I appreciate your thoughtful response.
Regarding how to get current systems to implement or agree to it: I don’t think that will be relevant in the long term. I don’t think the mechanisms current institutions use for control can keep up with AI proliferation. I imagine most existing institutions will still exist, but won’t have the capacity to do much once AI really takes off. My guess is that if AI kills us, it will happen after a slow-motion coup. Not any kind of intentional coup by AIs, but humans effectively coup’ing themselves, because AIs will simply be more useful. My idea wouldn’t remove or replace any institutions; they just wouldn’t be especially relevant to it. Some governments might try to actively ban its use, but those bans would probably be fleeting if the network really were superior in collective intelligence to any individual AI: if it made their work economically more productive, they would want to use it. It doesn’t involve removing institutions, or doing much to directly interfere with what they’re doing.

Think of it this way: recommendation algorithms on social media have an enormous influence on society, institutions, etc. Some governments try to ban or control them, but most people can still access them if they want to, and no single entity really controls them. And yet no one incorporates the “will of Twitter” into their constitution.
The game board isn’t any of the things you mention, and I don’t think any of them has the capacity to change the board much. The current board is fundamentally adversarial: interacting with it increases the power of the other players. We’ve seen this with OpenAI, Anthropic, etc. The new board would be cooperative, at least at a higher level. How do we make the new board more useful than the current one? My best guess is the economic advantage of decentralized compute. We’ve seen how fast the open-source community has been able to make progress, and we’ve seen how much compute gets poured into things like mining bitcoin, even though that compute is spent on otherwise useless math puzzles. Contributing decentralized compute to a collective network could have real economic value, and I imagine this will happen one way or another; my concern is that it will happen for the worse if people aren’t actively trying to create a better system. A decentralized network with no safeguards would probably be much worse than anything a major AI company could create.
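(To make the “wasted compute” point concrete, here’s a toy sketch of the kind of puzzle proof-of-work mining spends compute on; the hashing loop is illustrative, not any specific miner’s code:)

```python
import hashlib

def proof_of_work(data: bytes, difficulty: int) -> int:
    """Find a nonce so that SHA-256(data + nonce) starts with
    `difficulty` zero hex digits. The search is pure trial and error,
    and the answer has no use beyond proving the work was done."""
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

print(proof_of_work(b"block header", 5))  # burns CPU to find a lucky number
```

All of that energy proves honesty to the network but produces nothing of independent value; the question is whether a network could pay for compute that does useful collective work instead.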
“But wouldn’t the market be distorted by the fact that if everyone ends up dead, there is nobody left alive to collect their prediction-market winnings?”
This seems to be going back to the “one critical shot” approach, which I think is a terrible idea that won’t work in the real world under any circumstances. This would be a progression over time, not a case where an AI goes supernova overnight. It might require slower takeoffs, or at least no foom scenarios; making a new board that isn’t adversarial might itself mitigate the potential for foom. What I proposed was my first naive approach, and I’ve since thought that maybe it’s the collective intelligence of the system that should be increasing, not a singleton AI being trained at the center. Most members of that collective intelligence would initially be humans, and AIs would slowly become a more and more powerful part of the system. I’m not sure here, though. Maybe there’s some third option where there’s a foundation model at the lowest layer of the network, but it isn’t a singular AI in the normal sense. I imagine a singular AI at the center could give rise to agency and probably break the whole thing.
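(That said, the distortion you’re pointing at is real and easy to make concrete. A minimal toy model, assuming winnings are only worth anything in worlds where the bettor survives to collect them:)

```python
def willingness_to_pay_for_doom_share(p_doom: float,
                                      dollar_value_if_doom: float) -> float:
    """Expected collectable value of a $1 share that pays out only if
    doom happens. The payout is discounted by how much a dollar is
    worth to the bettor in a doom world."""
    return p_doom * dollar_value_if_doom

# A trader with 60% credence in doom, who expects money to be worthless
# to them in doom worlds, values the share at $0:
print(willingness_to_pay_for_doom_share(0.6, 0.0))  # -> 0.0
# So the price of "doom" stays near zero regardless of what traders
# actually believe, and the price stops tracking the probability.
```

That’s part of why I think the predictions would have to be over incremental, survivable outcomes, where payouts are actually collectable, rather than one terminal bet.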
“It seems to me that having a prediction market for different alignment approaches would be helpful, but would be VERY far from actually having a good plan to solve alignment.”
I agree here. They’d probably only be good for predicting the next iteration of progress, not for producing a fully scalable solution.
“I feel like we share many of the same sentiments—the idea that we could improve the general level of societal / governmental decision-making using innovative ideas like better forms of voting, quadratic voting & funding, prediction markets, etc”
This would be great, but my guess is these mechanisms would progress too slowly to be useful. I don’t think mechanism design that has to work through currently existing institutions will happen quickly enough. Technically enforced design might.
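(To be clear about which mechanisms I mean: quadratic funding, for instance, is simple to state. A minimal sketch of the standard matching rule, with made-up numbers:)

```python
import math

def qf_match(contributions: list[float]) -> float:
    """Standard quadratic-funding rule: a project's total funding is the
    square of the sum of the square roots of individual contributions;
    the matching pool covers the gap above the raw sum."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total - sum(contributions)

# Broad support attracts a far larger match than concentrated support:
print(qf_match([1.0] * 100))  # 100 donors giving $1 -> $9900.0 match
print(qf_match([100.0]))      # 1 donor giving $100  -> $0.0 match
```

The math isn’t the bottleneck; getting it deployed through existing institutions is.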
I love the idea of shovel-ready strategies, and I think we need to be prepared in the event of a crisis. My issue is that even most good strategies seem to deal only with large companies, and don’t address the likelihood that such power will fall into the hands of more and more actors.