Do prosaic “bandaid” solutions like this appeal to you? (why/why not?)
How does your answer to the previous question relate to your perspective on takeoff probabilities? (If possible, I would be very interested in hearing actual numbers for the likelihood of particular takeoff scenarios.)
I think we need both. If we have only bandaid solutions, then we've only bought ourselves a few years' delay before doom. If we don't have bandaid solutions, we won't buy enough time for the more robust research directions to produce workable practical solutions.
Take as a premise that sometime in the next 4 years some AI lab or research group makes sufficient advances to begin using near-current-level AI to substantially speed up its development of future generations of AI. The timeframes below are measured from the start of this heavily AI-supported model development. I'm about 90% confident that this premise will hold.
I think a hard take-off, going from near-current-level AI to overwhelmingly superhumanly powerful AGI in under 3 months, is highly implausible. I'd personally assign it a less than 1% chance.
I think a medium-soft take-off, from near-current-level AI to 100x more powerful AGI over more than 3 months but less than 18 months, is highly plausible, and fairly likely unless substantial regulatory efforts are made to prevent it. I'd give it something like a 60% chance whether or not an attempt at regulation is made.
I think a soft take-off, from near-current-level AI to 100x more powerful AGI over more than 18 months but less than 5 years, takes up most of the rest of my probability in this space. I expect this to be the shape of the improvement curve only if regulation successfully slows the process down. I'd give it around a 30% probability.
The remaining probability, roughly 9%, is in 'I dunno, something really weird happens, predicting the future is hard.'
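To make the bookkeeping explicit, here is a rough sketch of how these conditional estimates combine with the ~90% premise (my own illustrative arithmetic, not anything more rigorous; I'm treating the '< 1%' as exactly 1%, and the 'weird' bucket is simply whatever probability is left over):

```python
# Rough sanity check: combine the ~90% premise with the conditional
# scenario probabilities to get unconditional estimates.
p_premise = 0.90          # AI-accelerated AI development starts within 4 years
p_hard = 0.01             # hard take-off, < 3 months (treating "< 1%" as 1%)
p_medium_soft = 0.60      # medium-soft take-off, 3-18 months
p_soft = 0.30             # soft take-off, 18 months - 5 years

# Whatever is left over is the "something really weird happens" bucket.
p_weird = 1.0 - (p_hard + p_medium_soft + p_soft)
print(f"conditional remainder (weird): {p_weird:.1%}")   # ~9%

for name, p in [("hard", p_hard), ("medium-soft", p_medium_soft),
                ("soft", p_soft), ("weird", p_weird)]:
    print(f"unconditional P({name} take-off) ~ {p_premise * p:.1%}")
```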