Summary: every point hugely advantages a small concentration of wealthy AI companies, and once these commitments become legal requirements, they will entrench those companies indefinitely. And none of it slows capabilities; in fact, it seems to give implicit permission to push them as far as possible.
The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.
Effect on rich companies: they already need to do extensive testing to deliver competitive products. Capabilities go hand in hand with reliability: every capable tool humanity relies on is also highly reliable.
Effect on poor companies: the testing burden blocks them from earning early revenue on a shoddy product, which prevents them from competing at all.
Effect on advancing capabilities: minimal.
The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.
Effect on rich companies: they need to pay for another group of staff and internal automation tools to produce these information-sharing reports, carefully scripted to look good and to reveal no more than the legal minimum.
Effect on poor companies: the reporting burden cuts their runway further, preventing all but a few extremely well-funded startups from existing at all.
Effect on advancing capabilities: minimal.
The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered.
Effect on rich companies: they already want to do this; it is how they protect their IP.
Effect on poor companies: the security burden cuts their runway further.
Effect on advancing capabilities: minimal.
The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Some issues may persist even after an AI system is released and a robust reporting mechanism enables them to be found and fixed quickly.
This is the same as the information-sharing case above: the same burdens and effects apply.
The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.
Effect on rich companies: they already want to avoid legal responsibility for the use of AI in deception. Stripping the watermark shifts the liability onto the scammer (see the sketch below for why stripping is feasible).
Effect on poor companies: watermarks slightly cut their runway.
Effect on advancing capabilities: minimal.
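For concreteness, here is a minimal sketch of what such a watermark could look like, loosely following the published "green list" scheme of Kirchenbauer et al. (2023). Every constant and function name below is an illustrative assumption, not any signatory's actual system.

```python
# Minimal sketch of a "green list" text watermark (after Kirchenbauer et al.,
# 2023). Constants and names are illustrative assumptions only.
import hashlib
import random

VOCAB_SIZE = 50_000   # assumed tokenizer vocabulary size
GREEN_FRACTION = 0.5  # fraction of the vocab marked "green" each step
GREEN_BIAS = 2.0      # logit boost applied to green-list tokens

def green_list(prev_token: int) -> set:
    # Hash the previous token to deterministically seed this step's split.
    digest = hashlib.sha256(prev_token.to_bytes(8, "big")).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def watermark_logits(logits, prev_token):
    # At each decoding step, nudge sampling toward the green list.
    greens = green_list(prev_token)
    return [x + GREEN_BIAS if i in greens else x for i, x in enumerate(logits)]

def detect(tokens):
    # Count green hits; a high z-score means the text is likely watermarked.
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    mean = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - mean) / var ** 0.5
```

Note that detection only works while the token sequence is intact: paraphrasing or light editing erases the statistical signal, which is exactly the watermark-stripping loophole noted above.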
The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.
This is another form of reporting; the effects are the same as above.
The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.
Effect on rich companies: now they need yet another internal group doing this research.
Effect on poor companies: having to fund another mandated internal group cuts their runway further.
Effect on advancing capabilities: minimal.
The companies commit to develop and deploy advanced AI systems to help address society’s greatest challenges. From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all.
Effect on rich companies: this is carte blanche to do what they were already planning to do. It also says ‘fuck AI pauses’, direct from the Biden administration. GPT-4 is nowhere near capable enough to solve any of “society’s greatest challenges”: it is missing modalities and the general ability to accomplish anything beyond the simplest tasks reliably. Adding those components will take far more compute, such as the multi-exaflop AI supercomputers everyone is building, which will obviously allow models that dwarf GPT-4.
Effect on poor companies: they aren’t competing with megamodels anyway, but they were already screwed by the other points.
Effect on advancing capabilities: if anything, this accelerates them; it reads as explicit permission to scale.