A common view is that the timelines to risky AI are largely driven by hardware progress and deep learning progress occurring outside of OpenAI.
What’s the justification for this view? It seems like significant deep learning progress happens inside of OpenAI.
Many people (both at OpenAI and elsewhere) believe that questions of who builds AI and how are very important relative to acceleration of AI timelines.
If who builds AI is such an important question for OpenAI, then why would they publish capabilities research, thereby giving up most of their control over who builds AI and how?
At this point I think it’s fairly clear that if OpenAI were focused on making the long-term future good, they should not be disclosing or deploying improved systems (and it seems most likely they should not even be developing them), so the main point of debate is exactly how bad it is.
To a layman, it seems like they’re on track to deploy GPT-4, as well as publish all the capabilities research related to it, soon. Is there any reason to hope they won’t be doing that?
~1% of people die each year, and people might reasonably value the profound suffering that occurs over a single year at 1% of survival.
How is the harm caused by 1% of people dying even remotely equivalent to a 1% reduction in the probability of survival, even without considering the value lost in the future lightcone?
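To make the quantitative gap concrete, here is a rough back-of-the-envelope comparison (my own sketch, not from the original exchange; $N$, $v$, and $V_{\text{future}}$ are illustrative placeholders, not figures anyone in the thread used):

\[
\underbrace{0.01 \cdot N \cdot v}_{\text{one year of ordinary deaths}}
\quad\text{vs.}\quad
\underbrace{0.01 \cdot \left( N v + V_{\text{future}} \right)}_{\text{1\% drop in the probability civilization survives}}
\]

where $N$ is the current population, $v$ the value assigned to one present life, and $V_{\text{future}}$ the value of everything in the future lightcone. If one thinks $V_{\text{future}} \gg N v$, the right-hand quantity dominates by orders of magnitude, which is at least part of why the two “1%” figures look incomparable.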
It seems highly doubtful to me that OpenAI’s dedication to doing and publishing capabilities research is a deliberate choice to accelerate timelines due to their deep philosophical adherence to myopic altruism.
I don’t think they would be doing this if they actually thought they were increasing p(doom) by 1% (which is already an optimistic estimate) per year of timeline acceleration. A much simpler explanation is that they’re at least somewhat longtermist (like most humans) but don’t really think there’s a significant p(doom), at least among the capabilities researchers and the leadership team.
I think Paul was speaking in the third person for parts of it, which you may not have realized.