Alright, it seems to me that the crux between our positions is that you are unwilling or unable to consider whether new institutions could create an environment more conducive to technical AI alignment work, because you regard that as a hopeless endeavor. Societies (which in your view seem to mean just governments) are simply worse at creating new institutions than the alternative of letting DeepMind do its thing. Moreover, you don't seem to acknowledge that it is worthwhile to consider how to avoid the dystopian failure mode, because the catastrophic failure mode is so much larger and more devastating.
If this is a reasonable rough summary of your views, then I continue to stand my ground, because I don't think your position is as reasonable and well-founded as you make it out to be.
First, as I have tried to explain in several comments now, there is no inherent need to put only government in charge of regulation, yet you still seem to cling to that notion. I also never said that government should do the technical work. This proposal is clearly about regulating use cases for AI, and about considering whether mandating the sharing of source code and other knowledge assets could help alleviate race dynamics and create an environment where companies like DeepMind and OpenAI can actually focus on doing what's necessary to figure out technical alignment. You seem to think that this proposal would cut them out of the picture… No, it would simply aim to shift their incentives so that they become more aligned with the rest of humanity.
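To make the incentive-shifting point concrete, here is a deliberately toy sketch in Python. All payoff numbers are invented assumptions, not estimates; the point is only the qualitative effect: if mandated sharing shrinks the private upside of winning the race, the game can shift from a prisoner's dilemma (where racing dominates) to a coordination game (where mutual investment in alignment is a stable equilibrium).

```python
# Toy model of the incentive shift from mandated sharing; all payoff
# numbers are hypothetical assumptions. Two labs each choose to "race"
# (cut alignment work for speed) or "cooperate" (invest in alignment).

import itertools

def payoffs(mandated_sharing: bool):
    """Return {(action_a, action_b): (payoff_a, payoff_b)}."""
    # T: temptation to race while the other cooperates, R: mutual
    # cooperation, P: mutual racing, S: sucker's payoff. Mandated sharing
    # of source code and knowledge assets shrinks the private upside of
    # winning the race, i.e. it lowers T.
    T = 4 if mandated_sharing else 9
    R, P, S = 6, 1, 0
    return {
        ("cooperate", "cooperate"): (R, R),
        ("cooperate", "race"):      (S, T),
        ("race",      "cooperate"): (T, S),
        ("race",      "race"):      (P, P),
    }

def pure_equilibria(table):
    """Pure-strategy Nash equilibria: no unilateral deviation helps."""
    acts = ("cooperate", "race")
    return [
        (a, b)
        for a, b in itertools.product(acts, repeat=2)
        if all(table[(a, b)][0] >= table[(a2, b)][0] for a2 in acts)
        and all(table[(a, b)][1] >= table[(a, b2)][1] for b2 in acts)
    ]

for sharing in (False, True):
    print(f"mandated sharing={sharing}: {pure_equilibria(payoffs(sharing))}")
# Without sharing this is a prisoner's dilemma (racing is the only
# equilibrium); with sharing, mutual cooperation becomes stable too.
```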
If your argument is that they won't work on technical alignment unless they are the sole "owners" of what they come up with and enjoy a huge upside in terms of unregulated profit, maybe we should consider whether they are the right people for the job? After all, much of your own argument rests on the assumption that they will be willing to share at some point. Why should we let them decide what they are willing to share, rather than come together to figure this out before the game is done? Do you think regulation would be such a distraction from their work that merely contemplating it is a dangerous move? That seems unreasonably naive and short-sighted. In the worst case (i.e., all expert organizations defecting from work on alignment), with a powerful regulatory body or regime in place, we could still hire individual people as part of a more transparent "Manhattan Project" and simply take more time to roll out more advanced AI capabilities.
Second, somewhat ironically, you are kind of making the case for some aspects of the proposal when you say:
> Race dynamics are mitigated by AI companies joining the leader in the AI space, which is currently DeepMind. OpenAI agrees with “merge and assist” as a late-game strategy. Recent competition among AI firms, primarily in LLMs, is largely sparked by OpenAI (see Claude, Bard, Gemini). DeepMind appeared content to release few products in the absence of substantial competition.
The whole point of the proposal is to argue for figuring out how to merge efforts into a regulated environment. We should not need to trust OpenAI to do the right thing when the right time comes. There will be many different opinions about what the right thing is and when the right time has come. Just letting for-profit companies merge as they see fit is almost predictably a bad idea, and bound to be slow if the institutions currently tasked with overseeing mergers and acquisitions are drawn into processes they do not understand and have no experience with. Maybe it's just me, but I would like to figure out how society can reasonably deal with such situations before the time comes. Trusting in the competence of DeepMind to figure out those issues seems naive. As you highlighted, for-profit companies are good at technical work where incentives are aligned, but they are much less trustworthy when confronted with the challenge of designing reasonable institutions that can control them (e.g., regulatory capture is a thing).
Third, your last statement confuses me, because I do believe I asked a sensible question:
> Google obviously has no need to sell anything to anyone if they control the world. This sentence is not a logical argument, it is rhetoric.
Do you mean that the winner of the race will be able to use advanced nanomanufacturing and other technologies to simply shape the universe in their image and thus no longer require currencies, because coordinating resources will not be needed? I would contest that as not at all obvious, because accounting seems to be a fairly fundamental ingredient of rational decision making (which we kind of expect a superintelligence to implement). Or do you mean to imply that they would simply decide against keeping other people around? That would qualify as “dystopian” in my book and, hence, rather support my argument. So I don't really understand why you claim that my (admittedly pointedly formulated) suggestion, that the business model of the winner of the race is likely to shape the future significantly, is not worth discussing.
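To gesture at why I think accounting is hard to escape, here is a minimal, purely illustrative sketch (all project names and numbers are invented): any rational allocation of a scarce resource implicitly computes internal prices, i.e., opportunity costs, whether or not a currency exists.

```python
# Minimal sketch (hypothetical numbers): a single agent with no trade
# partners still ends up doing accounting, because optimizing over a
# scarce resource implicitly prices that resource.

budget = 100.0  # units of some scarce resource, e.g. energy or compute

# candidate projects: (name, resource cost, value toward the agent's goals)
projects = [
    ("probe_factory", 40.0, 90.0),
    ("telescope",     30.0, 45.0),
    ("archive",       20.0, 24.0),
    ("monument",      50.0, 40.0),
]

# greedy allocation by value density; assume projects are divisible
plan, remaining = [], budget
for name, cost, value in sorted(projects, key=lambda p: p[2] / p[1], reverse=True):
    take = min(cost, remaining)
    if take > 0:
        plan.append((name, take, value * take / cost))
        remaining -= take

# the marginal project's value density is an internal "price" of the
# resource: bookkeeping the planner cannot avoid, currency or no currency
marginal_price = plan[-1][2] / plan[-1][1]
for name, used, value in plan:
    print(f"{name}: use {used:.0f} units -> value {value:.1f}")
print(f"shadow price of the resource: {marginal_price:.2f} value/unit")
```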
Anyhow, I acknowledge that much of your argument rests on the hope and belief that Google DeepMind et al. are (the only) actors that can be trusted to solve technical alignment in the current regulatory environment, and that they will do the right thing once they get there. To me that seems more like wishful thinking than a well-grounded argument, but I also know what it's like to have strong intuitions and gut feelings about the best course of action, so I sympathize with your position to some degree.
My intent is simply to raise awareness that there are alternatives beyond the dichotomy of open source versus closed source, alternatives that may help us create a regulatory environment more conducive to realizing our common goals. More than hope is possible if we put our minds and efforts to it.