My p(doom) was low back when I thought the Yudkowsky model was ridiculous, based on machine learning knowledge I'd had for a while. Now that we have AGI of the kind I was expecting, and more people working on figuring out what the risks really are, the fact that the earlier worry (that RL was the only path to intelligence) didn't pan out is only a small reassurance, because non-imitation-learned RL agents that act in the real world are in fact scary. And recently, I've come to believe much of the risk is still real and was simply never about the kind of AI that got created first, a kind of AI the Yudkowsky camp didn't believe was possible. If you previously fully believed Yudkowsky, then yes, mispredicting what kind of AI was possible should be an update down. But for me, having seen these unsupervised AIs coming from a mile away just like plenty of others did, I'm still quite concerned about how desperate non-imitation-learned RL agents seem to be by default, and I'm worried that hyperdesperate non-imitation-learned RL agents will be more evolutionarily fit, eat everything, and not even have the small consolation of having fun doing it.
I agree that RL agents are misaligned by default, even more so the non-imitation-learned ones. I mean, even LLMs trained on human-generated data are misaligned by default, regardless of which definition of 'alignment' is being used. But even granting misalignment by default, I'm just less convinced that their capabilities will grow fast enough to cause an existential catastrophe in the near term, if we use LLM capability improvement trends as a reference.
Upvote and disagree: your claim is well argued.