without access to fine-tuning or powerful scaffolding.
Note that normally it’s the end user who decides whether they’re going to do scaffolding, not the lab. It’s probably feasible but somewhat challenging to prevent end users from doing powerful scaffolding (and I’m not even sure how you’d define that).
Yes, but possibly the lab has its own private scaffolding that works better with its model than any other existing scaffolding, perhaps because it trained the model to use that specific scaffolding, and it can initially decline to let users access it.
(Maybe it’s impossible to give API access to the scaffolded model while keeping the scaffolding private? Idk.)
Edit: Plus what David says.
I thought that the point was that either managed-interface-only access or API access with rate limits, monitoring, and appropriate terms of service can prevent use of some forms of scaffolding. If it’s a staged release, this makes sense to do, at least for a brief period while confirming that there are no security or safety issues.
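For concreteness, here’s a rough sketch of what per-key rate limiting plus simple call-pattern monitoring on an inference API could look like. Everything here (the `ApiGate` class, the thresholds, the flagging rule) is my own illustrative assumption, not a description of how any lab actually does it.

```python
# Illustrative sketch only: a per-key sliding-window rate limiter plus a simple
# monitor that flags usage patterns suggestive of automated chaining (e.g. a
# scaffold issuing rapid back-to-back calls). All names and thresholds are
# hypothetical.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60           # hypothetical rate-limit window
MAX_REQUESTS_PER_WINDOW = 20  # hypothetical per-key cap
CHAIN_GAP_SECONDS = 2.0       # calls closer together than this look automated
CHAIN_FLAG_THRESHOLD = 10     # consecutive fast calls before the key is flagged


class ApiGate:
    """Tracks per-key request timestamps to enforce limits and flag automation."""

    def __init__(self):
        self.history = defaultdict(deque)    # api_key -> recent request times
        self.fast_streak = defaultdict(int)  # api_key -> consecutive rapid calls

    def allow(self, api_key: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        window = self.history[api_key]

        # Drop timestamps that have fallen outside the sliding window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()

        if len(window) >= MAX_REQUESTS_PER_WINDOW:
            return False  # rate limit exceeded

        # Monitoring: count consecutive calls with very small gaps.
        if window and now - window[-1] < CHAIN_GAP_SECONDS:
            self.fast_streak[api_key] += 1
        else:
            self.fast_streak[api_key] = 0

        if self.fast_streak[api_key] >= CHAIN_FLAG_THRESHOLD:
            print(f"[monitor] key {api_key!r} flagged: looks like automated chaining")

        window.append(now)
        return True


if __name__ == "__main__":
    gate = ApiGate()
    t = 0.0
    for i in range(30):
        ok = gate.allow("demo-key", now=t)
        print(f"request {i}: {'allowed' if ok else 'rate-limited'}")
        t += 0.5  # half-second gaps, i.e. a scaffold-like call pattern
```

Of course this only blocks the crudest scaffolds; determined users could pace their calls or spread them across keys, which is part of why I think “prevent powerful scaffolding” is hard to operationalize.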