This is an interesting possibility for a middle-ground option between open-sourcing models and keeping them fully private. Do you have any estimates of how much it would cost an AI lab to do this, compared to the more straightforward option of open-sourcing?
Some initial thoughts:
OpenAI have an API (which was evidently not prohibitively expensive to build). Would they be able to share any information about how long it took to develop, how useful they've found it, improvements for future iterations, etc.?
The AI developer enforcing the rules by monitoring usage could be labour-intensive to do thoroughly. And it may not be clear whether the model is being used for a prohibited application, particularly since those seeking to use it for prohibited purposes are incentivised to hide this.
Do you think it would be possible/valuable to build a common open source system for structured access, with basic features covering common use cases, available for use by any AI lab?
On monitoring, one reason for optimism is that it can be done in an automated way, especially as AI capabilities increase. But I agree that it might be possible to hide misuse. Looking at the user’s queries and the model’s predictions won’t always give enough information.
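To make the automated-monitoring idea concrete, here is a minimal, purely illustrative sketch of flagging logged API queries for human review. The topic list, function names, and keyword-matching approach are all hypothetical assumptions for illustration, not any lab's actual system:

```python
# Hypothetical sketch of automated usage monitoring for a structured-access
# API. The prohibited-topic list and keyword matching are toy illustrations.

PROHIBITED_TOPICS = {"malware", "bioweapon", "phishing"}  # illustrative only


def flag_query(query: str, prohibited=PROHIBITED_TOPICS) -> bool:
    """Return True if the query should be escalated for human review."""
    tokens = query.lower().split()
    return any(topic in tokens for topic in prohibited)


def review_log(queries):
    """Partition a batch of logged queries into flagged and cleared sets."""
    flagged = [q for q in queries if flag_query(q)]
    cleared = [q for q in queries if not flag_query(q)]
    return flagged, cleared
```

A real system would presumably use a trained classifier rather than keyword matching, which a motivated user could trivially evade; the gap between the two illustrates exactly the concern above that queries alone won't always reveal misuse.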
On a common open source system for structured access: yes, that’s something I’ve been thinking about recently, and I think it would be beneficial. OpenMined is doing relevant work in this area but, from what I can see, it’s still generally too neglected.