I’m curious, though I think it’s generally agreed that human-mediated boxing isn’t an important part of any real solution to AI risk. It can help slow down early attempts, certainly, but once an AI gets powerful/smart enough, there’s no way to both keep it in AND get useful results from it.