I tend to think, and I certainly hope, that we aren’t looking at dangerous AGI at some small GPT-x iteration. ’Cause while the “pause” looks desirable in the abstract, it also seems unlikely to do much in practice.
But the thing I would like to point out is that you have people looking at the potential dangers of present AI, seeing regulation as a logical step, and then noticing that the regulatory system of modern states, especially the US, has become a complete disaster: corrupt, “adversarial”, and ineffective.
Here, I’d like to point out that those who care about AI safety ought to care about the general state of mundane safety regulation, because the present situation of it having been gutted (through regulatory capture, the Washington Monument syndrome, “starve the beast” ideology, and so forth) means that it’s not available for AI safety either.