In engineering and design, there is a process that includes, among other stages, specification, creation, verification and validation, and deployment. Verification and validation are where most people focus when thinking about safety—can we make sure the system performs correctly? I think this is a conceptual error that I want to address.
It seems that you put the first two sentences “in the mouths of people outside of AI safety”, treating them as describing some conceptual error, while the third sentence is “yours”. However, I don’t understand exactly what error you are trying to correct: the first sentence is uncontroversial, and the second sentence is a question, so I don’t see what (erroneous) idea it expresses. It’s really unclear to me what you are trying to say here.
I did not say anything about evals and red teaming in application to AI, other than in comments where I said I think they are a great idea.
I don’t see how else to interpret this sentence from the post: “If we lack a standard for safety, ideally one where there is consensus that it is sufficient for a specific application, then exploration or verification of the safety of a machine learning model is meaningless.” To me, evals and red teaming are “exploration and verification of the safety of a machine learning model” (unless you want to say that the word “verification” cannot apply when there are no standards, in which case just replace it with “checking”). So, again, I’m very confused about what you are trying to say :(
Perhaps it’s outdated, but it is the understanding which engineers who I have spoken to who work on reliability and systems engineering actually have, and it matches research I did on resilience most of a decade ago, e.g. this.
My statement that you import an outdated view was based on my understanding that you had declared “evals and red teaming meaningless in the absence of standards”. If that is not your position, then no outdated understanding is being imported.
I mean, standards are useful. They are sort of like industry-wide, strictly imposed “checklists”, and checklists do help with reliability overall: when checklists are introduced, the number of incidents reliably goes down. But it’s also recognised that the number doesn’t go down to zero, and the presence of a standard shouldn’t reduce the vigilance of anyone involved, especially when we are dealing with something as high-stakes as AI.
So, introducing AI safety standards based on some evals and red-teaming benchmarks would be good, while also cultivating a shared recognition that these “standards” absolutely don’t guarantee safety, and that marketing, PR, GR, and CEOs shouldn’t use phrases like “our system is safe because it complies with the standard”. Maybe, to prevent such abuse, the standard should be called something like an “AI safety baseline standard”. It’s also important to recognise that the standard will mostly exist for the catching-up crowd of companies and orgs building AIs. At the leading labs, checking the most powerful and SoTA systems against the “standards” will be only a very small part of the safety and alignment engineering process that should lead to their release.
Do you agree with this? Which particular point from the two paragraphs above are “people outside of AI safety” confused about or failing to realise?