As for the rest of your comment, I think we have different models: you analogize negatively reinforcing AI systems to firing, which would be more applicable if we were training several systems.
I’m not analogizing negative reinforcement to firing; I’m analogizing firing to no longer using some AI. See Catching AIs red-handed for more discussion.