Then we can have a single, extremely basic machine (e.g. an embedded device or FPGA that doesn't even have an OS) expose an extremely limited API as the only connection to the outside world. This reduces the internet-facing attack surface at the cost of some convenience.
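As a rough sketch of what that limited API could look like, here is a hypothetical two-message protocol where anything not on the whitelist is silently dropped. The opcodes, payload sizes, and the forward_to_internal stub are illustrative assumptions, not a real design, and Python is used only for readability; the actual device in this scheme would run bare-metal firmware or HDL.

```python
# Hypothetical sketch: an internet-facing front end that accepts only two
# fixed-size message types and silently drops everything else.

ALLOWED = {
    0x01: 32,  # SUBMIT_REQUEST: fixed 32-byte opaque token (illustrative)
    0x02: 0,   # POLL_RESULT: no payload (illustrative)
}

def forward_to_internal(opcode: int, payload: bytes) -> bytes:
    # Stub for the dedicated internal link; a real device would relay the
    # frame over a point-to-point connection, never a routable network.
    return bytes([opcode]) + b"ACK"

def handle_frame(frame: bytes) -> bytes | None:
    if not frame:
        return None                       # drop empty frames
    opcode, payload = frame[0], frame[1:]
    expected = ALLOWED.get(opcode)
    if expected is None or len(payload) != expected:
        return None                       # unknown opcode or bad size: drop
    return forward_to_internal(opcode, payload)
```

The point is that the exposed surface is small enough to audit exhaustively: a handful of fixed-size messages rather than a general-purpose network stack.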
This is an extremely good idea and you can see the physical evidence all around you of our predecessors solving a similar problem.
Have you ever noticed how every electrical panel is in a metal box, and every high-power appliance is in a metal case? How every building has a gap between it and its neighbors?
It's the same concept applied. Fire needs just three ingredients, yet humans still don't fully understand the plasma dynamics. What we do know is that you can't take any chances: you must subdivide the world with fire breaks and barriers to contain the most likely sources of ignition.
A world that keeps AI from burning out of control is one where these hardware ASICs, acting as firewalls, guard the network interfaces of every cluster capable of hosting an AI model. This limits how much coordination models can do with each other, and where they can escape to.
You don't just want to restrict the API; you want to explicitly define which systems a hosted model can communicate with. Ideally, a specific session can only reach the paying user, any systems that user has mapped to it, and a cached copy of the internet (not the live one) for research, so that AI models cannot coordinate with each other. (So these are more than firewalls; they have functions closer to VPN gateways.)
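One way to picture that per-session allowlist is the sketch below. Every name in it (SessionPolicy, cache.internal, the example hosts) is an illustrative assumption, not any existing product's API; it just shows the default-deny shape of the policy.

```python
# Sketch of a per-session egress policy: a session may reach only the paying
# user's endpoint, systems that user has explicitly mapped in, and a read-only
# cached copy of the internet. Everything else, including other model-hosting
# clusters, is unreachable by default.

from dataclasses import dataclass, field

@dataclass
class SessionPolicy:
    user_endpoint: str                                       # the paid user's return channel
    mapped_systems: set[str] = field(default_factory=set)    # user-approved hosts
    research_mirror: str = "cache.internal"                  # cached internet, not the live one

    def egress_allowed(self, destination: str) -> bool:
        return (
            destination == self.user_endpoint
            or destination in self.mapped_systems
            or destination == self.research_mirror
        )

# Example: this session can talk to its user, one mapped host, and the cache,
# but not to another cluster hosting a different model instance.
policy = SessionPolicy("user-42.example.net", {"crm.user42.example.net"})
assert policy.egress_allowed("cache.internal")
assert not policy.egress_allowed("other-model-cluster.example.net")
```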
There would have to be a “fire code” for AI.