Freedom and Privacy of Thought Architectures
I don’t work in cyber security, so others will have to teach me.
I’m interested in the question of how AI systems can become private: how to make communications with an AI system as protected as the confessional. Some AI capabilities are throttled not for public-interest reasons but because, if those private conversations became public, the company would suffer reputational damage.
I’m not libertarian enough to mind that AI companies don’t allow certain unsavory conversations to occur, but I do think they could be more permissive if there were less risk of blowback.
A lot of high-value uses of AI are impossible without data security for the inputs and outputs. Sensitive financial information, state secrets, health data: this isn’t information you can just hand over to an AI company, no matter the promise of security.
Similarly, a lot of individuals are going to want to cordon off certain parts of their lives, including their own mental health.
The obvious answer is locally hosted AI. However, even vast improvements in data cleaning and learning algorithms are unlikely to get locally hosted models to acceptably high performance.
You could start with your local host, send an encrypted file, and receive an encrypted file back from a huge network-hosted model. But I don’t see how that model could do anything with the encrypted file, since it wasn’t trained on ciphertext as an input. And there’s no point in sending the key along with it.
Or is there?
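To see where the key would have to go, here’s a minimal sketch of the client side in Python, using the cryptography library. Everything here is a placeholder of my own, not any real provider’s API: the service is assumed to publish an X25519 public key, and the client encrypts the prompt to that key before anything leaves the machine.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import x25519
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # In reality the service would publish its key; this keypair is a stand-in
    # so the sketch runs on its own.
    service_private = x25519.X25519PrivateKey.generate()
    SERVICE_PUBLIC_KEY = service_private.public_key()

    def encrypt_prompt(prompt: str) -> dict:
        # One ephemeral key per message; the service combines the ephemeral
        # public key with its own private key to derive the same symmetric key.
        ephemeral = x25519.X25519PrivateKey.generate()
        shared_secret = ephemeral.exchange(SERVICE_PUBLIC_KEY)
        symmetric_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                             info=b"prompt-encryption").derive(shared_secret)
        nonce = os.urandom(12)
        ciphertext = AESGCM(symmetric_key).encrypt(nonce, prompt.encode(), None)
        return {
            "ephemeral_public": ephemeral.public_key().public_bytes(
                Encoding.Raw, PublicFormat.Raw),
            "nonce": nonce,
            "ciphertext": ciphertext,
        }

    envelope = encrypt_prompt("my sensitive question")
    # Anyone intercepting `envelope` sees only ciphertext. But the model cannot
    # run on ciphertext: the service must use its private key to decrypt before
    # inference, and that is exactly the trust gap.

That keeps third parties on the wire out of the conversation, but the service still ends up with the plaintext the moment it decrypts, unless the decryption happens inside something the company itself can’t look into.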
If there were an encryption and decryption layer in the AI system for the inputs and the outputs, an AI service could probably use zero-knowledge proofs (or something else) to help create trust that they have no way to read your messages. At the very least this would help with blocking out third parties.
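I don’t know what that proof would actually look like, but one candidate for the “something else” is remote attestation: before releasing anything, the client checks a signature over a measurement (a hash) of the code that will handle the decrypted prompt, signed by a key it already trusts. A minimal sketch, with the attestation format, the expected measurement, and the vendor key all made up for illustration:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Hash of the inference code the provider claims to run, published for
    # audit. The value here is a placeholder, not a real measurement.
    EXPECTED_CODE_MEASUREMENT = bytes.fromhex("ab" * 32)

    def attestation_is_trustworthy(measurement: bytes, signature: bytes,
                                   vendor_public_key: ed25519.Ed25519PublicKey) -> bool:
        # 1. The signature shows the measurement came from hardware/software
        #    the vendor vouches for, not from the AI company alone.
        try:
            vendor_public_key.verify(signature, measurement)
        except InvalidSignature:
            return False
        # 2. The measurement must match code that auditors have inspected and
        #    confirmed never stores the decrypted prompt anywhere.
        return measurement == EXPECTED_CODE_MEASUREMENT

    # Stand-in vendor keypair so the sketch runs on its own.
    vendor_private = ed25519.Ed25519PrivateKey.generate()
    signature = vendor_private.sign(EXPECTED_CODE_MEASUREMENT)
    assert attestation_is_trustworthy(EXPECTED_CODE_MEASUREMENT, signature,
                                      vendor_private.public_key())

Only if that check passes would the client derive the symmetric key and upload the ciphertext. Real attestation schemes for hardware enclaves are considerably more involved than this.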
But I don’t know enough about software architecture to design an audit that would show the AI company never had access to the unencrypted input or output.