Yeah, I think that’s a sensible concern to have. On the other hand, at that point we’d be relatively few bits-of-optimization away from a much better situation than today: adjusting liability laws to better target actual safety, in a world where liability is already a de-facto decision driver at AI companies, is a much easier problem than causing AI companies to de-novo adopt decision-driving-processes which can actually block deployments.
If liability law moves AI companies that much, I’d expect the AI companies to have already made the easy, and moderately difficult changes you’re able to make to that liability process, so it seems unlikely to be able to be changed much.
That sentence is not parsing for me, could you please reword?
I expect that once liability law goes into effect, and AI companies learn they can’t ignore it, they too would lobby the government, with more resources & skill than safety advocates, likely not destroying the laws but at least restricting their growth, including toward GCR-relevant liabilities.
Ah, that makes sense. I would have agreed more a year ago. At this point, it’s looking like the general public is sufficiently scared of AGI that the AI companies might not be able to win that fight just by throwing resources at the problem. (And as for them having more skill than safety advocates… well, they’ll hire people with good-looking resumes, but remember that the dysfunction of large companies still applies here.)
More generally, I do think that some kind of heads-up fight with AI companies is an unavoidable element of any useful policy plan at this point. The basic problem is to somehow cause an AI company to not deploy a model when deployment would probably make the company a bunch of money, and any plan to achieve that via policy unavoidably means fighting the companies.