AlphaGo went from mediocre to going toe-to-toe with the top human Go players in a very short span of time, and AlphaGo Zero has since beaten AlphaGo 100-0. AlphaFold has arguably made a similar logistic jump in protein folding.
Do you know how many additional resources this required?
The cost of compute has been decreasing at an exponential rate for decades. This has meant that entire classes of algorithms that scale straightforwardly with compute have also become exponentially more capable, and that has already had a profound impact on our world. At the very least, you need to show why doubling compute speed, or paying for 10x more GPUs, does not lead to a doubling of capability for the kinds of AI we care about.
Maybe this is the core assumption that differentiates our views. I think that the “exponential growth” in compute is largely the result of being on the steepest part of a sigmoid rather than on a true exponential. For example, Dennard scaling ceased around 2007, and Moore’s law has been slowing over the last decade. I’m willing to concede that if compute grows exponentially indefinitely then AI risk is plausible, but I don’t see any way that will happen.
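To make that concrete, here is a minimal sketch in Python. The parameters (carrying capacity, growth rate, midpoint) are arbitrary illustrative values of my own choosing, not data about real hardware; the point is only that the steep portion of a sigmoid is nearly indistinguishable from a true exponential until the curve starts to saturate:

```python
import math

# A logistic (sigmoid) curve with a finite carrying capacity.
# Parameters are purely illustrative, not fitted to hardware trends.
def logistic(t, capacity=1000.0, rate=0.1, midpoint=80.0):
    return capacity / (1.0 + math.exp(-rate * (t - midpoint)))

# A true exponential, normalized so it matches the logistic at t=0.
def exponential(t, initial=logistic(0), rate=0.1):
    return initial * math.exp(rate * t)

# Early on the two columns track each other closely; past the sigmoid's
# midpoint, the logistic flattens while the exponential keeps compounding.
for t in range(0, 161, 20):
    print(f"t={t:3d}  logistic={logistic(t):8.1f}  exponential={exponential(t):10.1f}")
```

Seen from inside the steep region, decades of exponential-looking progress are exactly what you would expect from either curve; the two hypotheses only come apart once the sigmoid's limits start to bind.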