In my accounting, the word “arbitrarily” saved me here. I do think I missed the middle ground of sandboxed, limited programming environments like You.com and the current version of ChatGPT!
Fair enough; I was wondering how strongly you meant ‘arbitrarily’. I work for You.com, and we definitely thought early on about things like malicious or careless use. We quickly concluded we needed a sandbox, and on investigation found that facilities for running untrusted code in a sandbox are already widely available, both commercially and open-source, so this wasn’t very challenging for our security team to implement. What’s taking more time is security-vetting and whitelisting Python libraries that are useful for common user needs but don’t provide turn-key capabilities for plausible malicious misuse. Doing this is made easier by the fact that current LLMs cannot write very large amounts of bug-free code (and if they could, it’s trivial to check how much code has been written).
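For concreteness, here is a minimal sketch of the kind of static import check a library whitelist like this might involve. The whitelist contents and the helper are illustrative assumptions on my part, not You.com’s actual implementation:

```python
import ast

# Hypothetical whitelist: libraries vetted as useful for common user
# needs but lacking turn-key capabilities for misuse. These names are
# illustrative only; the real vetted list is not public.
ALLOWED_IMPORTS = {"math", "statistics", "json", "datetime", "pandas"}

def find_disallowed_imports(source: str) -> set[str]:
    """Statically scan untrusted code for imports outside the whitelist."""
    disallowed = set()
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                root = alias.name.split(".")[0]
                if root not in ALLOWED_IMPORTS:
                    disallowed.add(root)
        elif isinstance(node, ast.ImportFrom):
            root = (node.module or "").split(".")[0]
            if root not in ALLOWED_IMPORTS:
                disallowed.add(root)
    return disallowed

# Example: LLM-generated code that reaches for a non-whitelisted module.
untrusted = "import os\nimport math\nprint(math.pi)"
print(find_disallowed_imports(untrusted))  # {'os'}
```

A check like this would only be one layer on top of the sandbox itself, since static scanning alone is easy to evade; the point is that the whitelist can be enforced mechanically before code ever runs.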