I don’t know what you mean by skipped. Here’s some more concreteness though:
--Thanks to OpenAI, there is more of an “AI research should be made available to everyone” ethos, more of a “Boo anyone who does AI research and doesn’t tell the world what they did or how they did it or even decides not to share the weights!” Insofar as this ethos persists during the crucial period, whichever labs are building AGI will be under more internal and external pressure to publish/share. This makes it harder for them to go slow and be cautious when the stakes are high.
--Thanks to OpenAI, there were two world-leading AGI labs, not one. Obviously it’s a lot harder to coordinate two than one. This is not as bad as it sounds because plausibly before the crucial period more AGI labs would have appeared anyway. But still.
--Thanks to OpenAI, scaling laws and GPT tech are public knowledge now. This is a pretty big deal because it’s motivating lots of other players to start building AGI or AGI-like things, and because it seems to be opening up lots of new profit opportunities for the AI industry which will encourage further investment, shortening timelines and increasing the number of actors that need to coordinate. Again, presumably this would have happened eventually anyway. But OpenAI made it happen faster.
Ironically, I am a believer in FOSS AI models, and I find OpenAI’s influence anything but encouraging in this regard. The only thing they are publicly releasing is marketing nowadays.
Yep! :) But the damage is done; thanks to OpenAI there is now a large(r) group of people who believe in FOSS AI than there otherwise would be, and there are various new actors who have entered the race who wouldn’t have if not for OpenAI publications.
To be clear, I’m not confident FOSS AI is bad. I just think it probably is, for reasons mentioned. I won’t be too surprised if I turn out to be wrong and actually (e.g. for reasons Dirichlet-to-Neumann mentioned, or because FOSS AI is harder to make a profit on and therefore will get less investment and therefore will be developed later, buying more time for safety research) the FOSS AI ethos was net-positive.
I’d be interested to hear your perspective on FOSS AI.
Epistemic status: I am not an expert on this debate, I have not thought very deeply about it, etc.
I am fairly certain that, as long as we don’t fail miserably (i.e., a misaligned super AI getting loose and collapsing our civilization), FOSS AI is strongly preferable to proprietary AI. The reasons are common to other software projects, though the usefulness and black-box nature of AGI make them particularly important here.
I am skeptical of “conspiracies.” I think a publicly auditable, transparent process with frequent peer feedback on a global scale is much more likely to produce trustworthy results with fewer unforeseen consequences/edge cases.
I am extremely skeptical of the human incentives that a monopoly on AGI encourages. E.g., when was the one time atomic bombs were actually used? Exactly when there was a monopoly on them.
I don’t see the current DL approaches as anywhere near achieving efficient AGI that would be dangerous. AI alignment probably needs more concrete capability research, IMO. (At the least, more capability research is likely to contribute to safety research as well.) I’d like the world to enjoy better narrow AI sooner, and I am not convinced that delaying things buys all that much. (Full disclosure: I weigh the lives of my social bubble and contemporaries more than random future lives. Though even if I didn’t, intelligence would likely evolve again in the universe anyhow, so humanity failing is not that big of a deal? Our civilization isn’t built on that kind of long-termism either, so it’s pretty out of distribution for me to think about. Related point: I have an unverified impression that the people who advocate slowing capability research are already well off and healthy, so they don’t particularly need technological progress. Perhaps this is an unfair/false intuition, but I do have it, and disabusing me of it would change my opinion a bit.)
In a slow takeoff scenario, my intuition is that multiple competing superintelligences will leave us more leverage. (I assume that in a fast takeoff scenario the first such intelligence will crush the others in their infancy.)
Safety research seems to be more aligned with academic incentives than business incentives. Proprietary research is less suited to academia though.
Thanks! In case you are interested in my opinion: I think I agree with 1 (but I expect us to fail miserably) and 2 and 6. I disagree with 3 (AGI, unlike nukes, can be used in ways that aren’t threatening and don’t hurt anyone and just make loads of money and save loads of lives. So people will find it hard to resist using it. So the more people have access to it, the more likely it is it’ll be used before it is safe.) My timelines are about 50% by 10 years; I gather from point 4 that yours are longer. I think 5 might be true but might not be; history is full of examples of different groups fighting each other yet still managing to conquer and crush some third group. For example, the Spanish conquistadors were fighting each other even as they conquered Mexico and Peru. Maybe humans will be clever enough to play the AIs off against each other in a way that lets us maintain control until we solve alignment… but I wouldn’t bet on it.
It seems to me that being open about what you are working on, and having a proven record of publishing/sharing critical information, including weights, is a very good way to fight the arms race.
If you don’t know where your competitors are, it is much more tempting to rush toward capability than to stop and think about alignment. If you know where your competitors are, and if you know that you will be at worst a couple of weeks or months behind because they always publish and you will thus be able to catch up, you have much more slack to pursue alignment (or speculative research in general).
For the strategic arms reduction treaties signed between Russia and the USA, verification tools were a crucial part of the process, because you need to know what the other side is doing in order to disarm:
https://en.wikipedia.org/wiki/START_I#Verification_tools
https://en.wikipedia.org/wiki/New_START#Monitoring_and_verification
Yes, when we are getting really close to AGI it will be good for the leading contenders to share info with each other. Even then it won’t be a good idea for the leading contenders to publish publicly, because then there’ll be way more contenders! And now, when we are not really close to AGI, public publication accelerates research in general and thus shortens timelines, while also bringing more actors into the race.
Trust between partners does not happen overnight. You don’t suddenly begin sharing information with competitors when the prize is in sight. We need a history of shared information to build upon, and now (when, as you said, AGI is not really close) is the right time to build it. Because if you don’t trust someone with GPT-3, you are certainly not going to trust them with an AGI.
Because if you don’t trust someone with GPT-3, you are certainly not going to trust them with an AGI.
Choosing not to release GPT-3’s weights to the whole world doesn’t imply that you don’t trust DeepMind or Anthropic or whoever. It just implies that there exists at least one person in the world you don’t trust.
I agree that releasing everything publicly would make it easier/more likely to release crucial things to key competitors when the time comes. Alas, the harms are big enough to outweigh this benefit, I think.