In practice, it just requires hardware with limited functionality and physical security — hardware security modules (HSMs) already exist.
An HSM-analogue for ML would be a piece of hardware that can have model weights loaded into its nonvolatile memory, can perform inference, but doesn’t provide a way to get the weights out. (If it’s secure enough against physical attack, it could also be used to run closed models on a user’s premises, etc.; there might be a market for that.)
Indeed! I was very close to writing a whole bit about TEEs (trusted execution environments), secure enclaves, and PUFs (physically unclonable functions) in my last comment, but I figured that it also boils down to “just don’t give it permission,” so I left it out. I actually think designing secure hardware is incredibly interesting, and there will probably be an increase in demand for secure computing environments and data provenance in the near future.
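For concreteness, here is a minimal sketch of the kind of interface such an “inference HSM” might expose to the host. Everything here is hypothetical — `InferenceHSM`, `load_weights`, and `infer` are made-up names, not a real device API; the point is only that provisioning is one-way and the host sees nothing that reads weight material back out.

```python
# Hypothetical sketch: a device that holds weights internally and only
# exposes inference. In real hardware the weights would sit in
# tamper-resistant nonvolatile memory, not a Python attribute.
import numpy as np


class InferenceHSM:
    def __init__(self):
        self._weights = None  # write-only from the host's perspective

    def load_weights(self, weights: np.ndarray) -> None:
        """One-way provisioning: weights go in and never come back out."""
        self._weights = weights.copy()

    def infer(self, x: np.ndarray) -> np.ndarray:
        """The only operation exposed after provisioning."""
        if self._weights is None:
            raise RuntimeError("device not provisioned")
        return x @ self._weights  # stand-in for a full forward pass

    # Deliberately no get_weights() / export / debug-readback method.


# Host-side usage: provision once, then only call infer().
hsm = InferenceHSM()
hsm.load_weights(np.random.randn(4, 2))
print(hsm.infer(np.ones((1, 4))))
```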