Yay Anthropic for expanding its model safety bug bounty program, focusing on jailbreaks and giving participants pre-deployment access. Apply by next Friday.
Anthropic also says "To date, we've operated an invite-only bug bounty program in partnership with HackerOne that rewards researchers for identifying model safety issues in our publicly released AI models." This is news; they never published an application form for that program, and I wonder how long it's been running.
(Google, Microsoft, and Meta have bug bounty programs that include some model issues but exclude jailbreaks. OpenAI's bug bounty program excludes model issues entirely.)