I hear what you’re saying. I probably should have made the following distinction:
A technology in the abstract (e.g. nuclear fission, LLMs)
A technology deployed to do a thing (e.g. nuclear in a power plant, LLM used for customer service)
The question I understand you to be asking is essentially: how do we make safety cases for AI agents in general? I would argue that's more situation 1 than 2, and as I understand it, safety cases are basically only ever applied to case 2. The nuclear facilities document you linked is definitely case 2.
So yeah, admittedly the document you were looking for doesn't exist, but that doesn't really surprise me. If you start looking for narrowly scoped safety principles for AI systems, you find them everywhere. For example, a search for "artificial intelligence" on the ISO website returns 73 standards.
Just a few relevant standards, though I admit, standards are exceptionally boring (also many aren’t public, which is dumb):
UL 4600 – standard for autonomous vehicles
ISO/IEC TR 5469 – AI safety generally (this one is decently interesting)
ISO/IEC 42001 – covers what you do if you set up a system that uses AI
You also might find this paper a good read: https://ieeexplore.ieee.org/document/9269875
This makes sense. Thanks for the resources!