I don’t see how this proposal substantially differs from nationalizing all AI work. Governments can have internal departments that compete, like corporations can. Removing the profit motive seems to only leave government and nonprofit as possibilities, and the required relationships with regulators make this more like a government project for practical purposes.
Without specifying more about the governance structure, this is basically nationalization. It is understandable that people would oppose nationalization, both due to the historically bad results of communist and fascist systems of government and due to specific reasons why current governments are untrustworthy, such as the recent handling of COVID, the lab-leak question, etc.
In an American context, it makes very little sense to propose government expansion (as opposed to specific, universally-applied laws) without coming to terms with the ways that government has shown itself to be untrustworthy for handling catastrophic risks.
I think, if centralization is beneficial, it is wiser to centralize around DeepMind than around any government. But again, the justification for this proposal seems spurious if it in effect tends towards more centralization than default closed-source AI.
So, I concede that the proposal is pretty vague and general, and that this may make it difficult to get the gist of it, but I think it’s still pretty clear that the idea is broader than nationalization. I refer specifically to the possible involvement of intergovernmental, professional, or civil society organizations in the regulating body. With regard to profit, the degree to which profit is allowed could be regulated for each use case separately, with some (maybe the more benign) use cases being more tailored to profit-seeking companies than others.
Nevertheless, I agree that for an in-depth discussion of pros and cons, more details regarding a possible governance structure would be desirable. That’s the whole point of the post: we should start thinking about what governance structures we would actually want to have in place rather than assuming that it must be either “closed source” or “open source”. I don’t have the answer, but I advocate for engaging with an important question.
I completely disagree with the sentiment of the rest of your comment, namely that “hands-on regulation” is dead in the water because government is incompetent and that hoping for Google DeepMind or OpenAI to do the right things is the best way forward.
First, as I already highlighted above, nothing in this approach says that government alone must be calling the shots. It may not be easy, but it also seems entirely possible to come up with new and creative institutions that are able to regulate AI, similar to how we regulate companies, cars, aviation, or nuclear technology. Each of the existing institutions may have faults, but we can learn from them, experiment with new ones (e.g., citizens’ assemblies), and continuously improve (e.g., see the work on digital democracy in Taiwan). We must if we want to avoid both failure modes outlined in the post.
Second, I am surprised by the sanguine tone with regard to trusting for-profit institutions. What is your argument that this will work out OK for all of us in the end? Even if we don’t die due to corner-cutting because of race dynamics, how do we ensure that we don’t end up in a dystopia where Google controls the world and sells us to the highest bidder? I mean, that’s their actual business model, after all, right?
Getting AI right is mainly a matter of technical competence and technical management competence. DeepMind is obviously much better at those than any government, especially in the AI domain. The standard AI risk threat is not that some company aligns AI to its own values; it’s that everyone dies because AI is not aligned to anyone’s values, because this is a technically hard problem, as has been argued extensively on this website and in other writing. If Google successfully allocates 99% of the universe to itself, its employees, and their families, and 1% to the rest of the people in the world, that is SO good for everyone’s values compared with the default trajectory, due to a combination of the default low chance of alignment, diminishing marginal utility in personal values, and the similarity of impersonal values across humans.
If a government were to nationalize AI development, I would think the NSA would be the best choice due to their technical competence, although they aren’t specialized in AI, so this would still be worse than DeepMind. DeepMind founder Shane Legg has great respect for Yudkowsky’s alignment work.
Race dynamics are mitigated by AI companies joining the leader in the AI space, which is currently DeepMind. OpenAI agrees with “merge and assist” as a late-game strategy. Recent competition among AI firms, primarily in LLMs, was largely sparked by OpenAI (see Claude, Bard, Gemini). DeepMind appeared content to release few products in the absence of substantial competition.
Even if we don’t die due to corner-cutting because of race dynamics, how do we ensure that we don’t end up in a dystopia where Google controls the world and sells us to the highest bidder?
Google obviously has no need to sell anything to anyone if they control the world. This sentence is not a logical argument; it is rhetoric.
Alright, it seems to me that the crux between our positions is that you are unwilling or unable to consider whether new institutions could create an environment more conducive to technical AI alignment work, because you feel this is a hopeless endeavor. Societies (which, in your view, seem to amount to just governments) are simply worse at creating new institutions than the alternative of letting DeepMind do its thing. Moreover, you don’t seem to acknowledge that it is worthwhile to consider how to avoid the dystopian failure mode, because the catastrophic failure mode is simply much larger and more devastating.
If this is a reasonable rough summary of your views, then I continue to stand my ground, because I don’t think your position is as reasonable and well-founded as you make it out to be.
First, as I have tried to explain in various comments now, there is no inherent need to put only government in charge of regulation, but you still seem to cling to that notion. I also never said that government should do the technical work. This whole proposal is clearly about regulating use cases for AI, and about considering whether mandating the sharing of source code and other knowledge assets could help alleviate race dynamics and create an environment where companies like DeepMind and OpenAI can actually focus on doing what’s necessary to figure out technical alignment issues. You seem to think that this proposal would want to cut them out of the picture… No, it would simply aim to shift their incentives so that they become more aligned with the rest of humanity.
If your argument is that they won’t work on technical alignment unless they are the only “owners” of what they come up with and have a crazy upside in terms of unregulated profit, maybe we should consider whether they are the right people for the job? I mean, much of your own argument rests on the assumption that they will be willing to share at some point. Why should we let them decide what they are willing to share rather than come together to figure this stuff out before the game is done? Do you think this would be so much of a distraction to their work that just contemplating regulation is a dangerous move? That seems unreasonably naive and short-sighted. In the worst case (i.e., all expert organizations defecting from work on alignment), with a powerful regulating body/regime in place, we could still hire individual people as part of a more transparent “Manhattan project” and simply take more time to roll out more advanced AI capabilities.
Second, somewhat ironically, you are kind of making the case for some aspects of the proposal when you say:
Race dynamics are mitigated by AI companies joining the leader in the AI space, which is currently DeepMind. OpenAI agrees with “merge and assist” as a late-game strategy. Recent competition among AI firms, primarily in LLMs, was largely sparked by OpenAI (see Claude, Bard, Gemini). DeepMind appeared content to release few products in the absence of substantial competition.
The whole point of the proposal is to argue for figuring out how to merge efforts into a regulated environment. We should not need to trust OpenAI to do the right thing when the right time comes. There will be many different opinions about what the right thing is and when the right time is. Just letting for-profit companies merge as they see fit is almost predictably a bad idea, and bound to be slow if our current institutions tasked with overseeing mergers and acquisitions are involved in processes that they do not understand and have no experience with. Maybe it’s just me, but I would like to figure out how society can reasonably deal with such situations before the time comes. Trusting in the competence of DeepMind to figure out those issues seems naive. As you highlighted, for-profit companies are good at technical work where incentives are aligned, but they are much less trustworthy when confronted with the challenge of having to figure out reasonable institutions that can control them (e.g., regulatory capture is a thing).
Third, your last statement is confusing to me because I do believe that I asked a sensible question.
Google obviously has no need to sell anything to anyone if they control the world. This sentence is not a logical argument; it is rhetoric.
Do you mean that the winner of the race will be able to use advanced nanomanufacturing and other technologies to simply shape the universe in their image and, thus, will no longer require currencies because coordination of resources is not needed? I would contest this idea as not at all obvious, because accounting seems to be a pretty fundamental ingredient of supposedly rational decision-making (which we kind of expect a superintelligence to implement). Or do you want to imply that they would simply decide against keeping other people around? I think that would qualify as “dystopian” in my book and, hence, kind of support my argument. Thus, I don’t really understand why you seem to claim that my (admittedly poignantly formulated) suggestion, that the business model of the winner of the race is likely to shape the future significantly, is not worth discussing.
Anyhow, I acknowledge that much of your argument rests on the hope and belief that Google DeepMind et al. are (the only) actors that can be trusted to solve technical alignment in the current regulatory environment and that they will do the right things once they get there. To me that seems more like wishful thinking than a well-grounded argument, but I, too, know the situation of having strong intuitions and gut feelings about what the best course of action may be, so I sympathize with your position to some degree.
My intent is simply to raise some awareness that there are alternatives beyond the dichotomy between open source and closed source, alternatives we can come up with that may help us create a regulatory environment more conducive to realizing our common goals. More than hope is possible if we put our minds and efforts to it.