Just because this isn’t cheap relative to world GDP doesn’t mean it’s enough. If our goal were “build a Dyson sphere”, even throwing our entire productive capacity at it would be cheap. I’m not saying there aren’t any concerns, but the money still mostly goes to capabilities, and safety, while a concern, still has to be compromised with commercial needs and race dynamics—albeit mercifully dampened. Honestly, given LeCun’s position, we’re just lucky that Meta isn’t that good at AI, or they alone would set the pace of the race for everyone else.
I think Meta have been somewhat persuaded by the Biden administration to sign on for safety, or at least for safety theatre, despite LeCun. They actually did a non-trivial amount of real safety work on Llama-2 (a model small enough not to need it), and then withheld one size of it for safety reasons. Which was of course pointless, or more precisely just showing off, since they then open-sourced the weights, including the base models, so anyone with $200 can fine-tune their safety work right back out. Still, it’s all basically window dressing, as these models are (we believe) too small to be an x-risk, and they were reasonably certain of that before they started (as far as we know, about the worst these models can do is write badly-written underage porn or phishing emails, or marginally assist criminals in similar ways).
Obviously no modern models are an existential risk; the problem is the trajectory. Does the current way of handling things extrapolate properly even just to AGI, something many of these companies openly aim for? I’d say not, or at least I very much doubt it. As in: if you’re not doing that kind of work on a triple-airgapped, firewalled desert island, with layers upon layers of safety testing planned before you even consider releasing the result as a commercial product, you’re doing it wrong—and that’s just technical safety. I still haven’t seen a serious proposal for how to make human labor entirely unnecessary while maintaining some semblance of economic order, rather than collapsing every social and political structure at once.