So we need a way to have alignment deployed throughout the algorithmic world before anyone develops AGI. To do this, we’ll start by offering alignment as a service for more limited AIs.
I’m tentatively fairly excited about some version of this, so I’ll suggest some tweaks that can hopefully be helpful for your success (or for the brainstorming of anyone else who’s thinking about doing something similar in the future).
We will refine and develop this deployment plan, depending on research results, commercial opportunities, feedback, and suggestions.
I suspect there’d be much better commercial/scaling opportunities for a somewhat similar org that offered a more comprehensive, high-quality package of “trustworthy AI services”—e.g., addressing bias, privacy issues, and other more mainstream concerns along with safety/alignment concerns. Then there’d be less need to convince companies to pay for some new service—you would mostly just need to convince them that you’re the best provider of services they’re already interested in. (Cf. the ethical AI consulting companies that already exist.)
(One could ask: But wouldn’t the extra cost be the same whether alignment is offered as part of a package or separately? Not necessarily—IP concerns and transaction costs incentivize AI companies to reduce the number of third parties they share their algorithms with.)
As an additional benefit, a more comprehensive package of “trustworthy AI services” would compete directly for customers with the existing ethical AI consulting companies mentioned above. This might pressure those companies to start offering safety/alignment services—a mechanism for broadening adoption that isn’t available to an org that only provides alignment services.
[From the website] We are hiring AI safety researchers, ML engineers and other staff.
Related to the earlier point: given that commercial opportunities are a big potential bottleneck (in other words, given that selling limited alignment services might be as much of a communications and persuasion challenge as a technical one), my intuition would be to also place significant emphasis on hiring people who will kill it at persuasion: people closely familiar with the market and regulatory incentives relevant companies face, people with sales and marketing experience, people with otherwise strong communications skills, etc. (in addition to the researchers and engineers).
Adding on to Mauricio’s idea: Also explore partnering with companies that offer a well-recognized, high-quality package of mainstream “trustworthy AI services”—e.g., addressing bias, privacy issues, and other more mainstream concerns—where you have comparative advantage on safety/alignment concerns and they have comparative advantage on the more mainstream concerns. Together with a partner, you could provide a more comprehensive offering. (That’s part of the value proposition for them. Also, of course, be sure to highlight the growing importance of safety/alignment issues, and the expertise you’d bring.) Then you wouldn’t have to compete in the areas where they have comparative advantage.
We agree with this.
Thanks for the ideas! We’ll think on them.