This article talks a lot about risks from AI. I wish the author would be more specific about which kinds of risks they have in mind. For example, it is unclear which parts are motivated by extinction risks and which are not. The same goes for the benefits of open-sourcing these models.
(Note: I haven’t read the reports this article is based on; those may have been more specific.)
Thanks for this comment. I agree there is some ambiguity about the types of risks being considered with respect to the question of open-sourcing foundation models. I believe the report favors the term “extreme risks,” defined as “risk of significant physical harm or disruption to key societal functions.” The authors seem to avoid the terms “extinction risk” and “existential risk,” but their choice of “extreme risks” implies something not too different.
For my part, I would pose the question above as:
“How large are the risks from fully open-sourced foundation models? More specifically, how significant are these risks compared to the overall risks inherent in the development and deployment of foundation models?”
What I’m looking for is something like “total risk” versus “total benefit.” In other words, if we take all the risks together, just how large are they in this context? In part, I’m not sure whether the more extreme risks really come from open-sourcing the models, or simply from the development and deployment of increasingly capable foundation models.
I hope this helps clarify!