I expect that once liability law is in place, and AI companies learn they can’t ignore it, they too would lobby the government with more resources and skill than safety advocates, likely not destroying the laws, but at least restricting their growth, including toward GCR-relevant liabilities.
Ah, that makes sense. I would have agreed more a year ago. At this point, it’s looking like the general public is sufficiently scared of AGI that the AI companies might not be able to win that fight just by throwing resources at the problem. (And in terms of them having more skill than safety advocates… well, they’ll hire people with good-looking resumes, but remember that the dysfunctionality of large companies still applies here.)
More generally, I do think that some kind of heads-up fight with AI companies is an unavoidable element of any useful policy plan at this point. The basic problem is to somehow cause an AI company to not deploy a model when deployment would probably make the company a bunch of money, and any plan to achieve that via policy unavoidably means fighting the companies.
That sentence is not parsing for me, could you please reword?