Thanks for the useful post. There are a lot of things to like about this. Here are a few questions/comments.
First, I appreciate the transparency around this point: "we advocate for what we see as the 'minimal viable policy' for creating a good AI ecosystem, and we will be open to feedback."
Second, a small question: why use the word "testing" rather than "auditing"? In my experience, most recent conversations on this topic revolve around the term "auditing."
Third, I noticed that this post does not discuss access levels for testers, and I am curious why. My thoughts here relate to this recent work. I would have been excited to see (1) a commitment to providing auditors with methodological details, documentation, and data, or (2) a commitment to working on APIs and SREs that would allow auditors to be securely given grey- and white-box access.