Ironically, I am a believer in FOSS AI models, and I find OpenAI’s influence anything but encouraging in this regard. The only thing they are publicly releasing is marketing nowadays.
Yep! :) But the damage is done; thanks to OpenAI there is now a large(r) group of people who believe in FOSS AI than there otherwise would be, and there are various new actors who have entered the race who wouldn’t have if not for OpenAI publications.
To be clear, I’m not confident FOSS AI is bad; I just think it probably is, for the reasons mentioned. I won’t be too surprised if I turn out to be wrong and the FOSS AI ethos was actually net-positive (e.g. for the reasons Dirichlet-to-Neumann mentioned, or because FOSS AI is harder to profit from and therefore gets less investment and is developed later, buying more time for safety research).
I’d be interested to hear your perspective on FOSS AI.
Epistemic status: I am not an expert on this debate, I have not thought very deeply about it, etc.
1. I am fairly certain that as long as we don’t fail miserably (i.e., a loose misaligned super AI that collapses our civilization), FOSS AI is extremely preferable to proprietary AI. The reasons are common to other software projects, though the usefulness and black-box nature of AGI make them particularly important here.
2. I am skeptical of “conspiracies.” A publicly auditable, transparent process with frequent peer feedback on a global scale is much more likely to produce trustworthy results with fewer unforeseen consequences and edge cases.
3. I am extremely skeptical of the human incentives that a monopoly on AGI encourages. E.g., when was the only time atomic bombs were actually used? Exactly when there was a monopoly on them.
4. I don’t see current DL approaches as anywhere near achieving efficient AGI that would be dangerous. AI alignment probably needs more concrete capability research, IMO (at the least, more capability research is likely to contribute to safety research as well). I would like the world to enjoy better narrow AI sooner, and I am not convinced delaying things buys all that much. (Full disclosure: I weigh the lives of my social bubble and contemporaries more than random future lives. Though even if I didn’t, intelligence would likely evolve again in the universe anyhow, so humanity failing is not that big of a deal? Little of our civilization is built on that long-termist a view either, so it’s pretty out of distribution for me to think about. Related point: I have an unverified impression that the people who advocate slowing capability research are already well off and healthy, so they don’t particularly need technological progress. Perhaps this is an unfair or false intuition, but I do have it, and disabusing me of it would change my opinion a bit.)
5. In a slow-takeoff scenario, my intuition is that multiple competing superintelligences will leave us more leverage. (I assume that in a fast-takeoff scenario the first such intelligence will crush the others in their infancy.)
6. Safety research seems to be more aligned with academic incentives than with business incentives. Proprietary research is less suited to academia, though.
Thanks! In case you are interested in my opinion: I think I agree with 1 (though I expect us to fail miserably), and with 2 and 6. I disagree with 3: AGI, unlike nukes, can be used in ways that aren’t threatening and don’t hurt anyone, that just make loads of money and save loads of lives, so people will find it hard to resist using it; thus the more people have access to it, the more likely it is to be used before it is safe. My timelines are about 50% by 10 years; I gather from point 4 that yours are longer. I think 5 might be true, but it might not be: history is full of examples of groups fighting each other yet still managing to conquer and crush some third group. For example, the Spanish conquistadors were fighting each other even as they conquered Mexico and Peru. Maybe humans will be clever enough to play the AIs off against each other in a way that lets us maintain control until we solve alignment… but I wouldn’t bet on it.