I don’t know the exact dates, but: (a) proof-based methods seem to be receiving a lot of attention, (b) def/acc is becoming more of a thing, and (c) there’s more focus on concentration-of-power risk (tbh, while there are real risks here, I suspect most work here is net-negative).
My question is: why do you consider most work on concentration-of-power risk net-negative?
Super terse answer:
Because most people doing that work do things like trying to increase the number of companies in the space.
And even though AI isn’t like nukes yet, at some point it will be.
Just like you wouldn’t want as many companies building nukes as possible—you’d either want a few highly vetted companies or a government effort—you don’t want as many companies building AGI as possible.