That last point is new and interesting to me. By converging the models I assume you mean aligning AGI. What flaw is this reintroducing into the system? Are you saying AGI shouldn’t do what people want because people want dumb things?
I am trying to make a general statement about models and contexts, and thinking about the consequences of applying the concept to AI.
Another example could be Newtonian versus relativistic physics. There is a trade-off of something like efficiency/simplicity/interpretability versus accuracy/precision/complexity. Each model has contexts in which it is the more valid one to use. If you try to force both models to exist at once, you lose both sets of advantages, and you will cause and amplify errors if you try to interchange them arbitrarily.
So we don’t combine the two, but instead try to understand when and why we should choose to adopt one model over the other.
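The Newtonian/relativistic point can be made concrete with kinetic energy. A minimal sketch (my own illustration, not from the original discussion) comparing Newtonian KE, ½mv², against relativistic KE, (γ−1)mc²: at everyday speeds the simpler model agrees to many decimal places, while near light speed it is wildly wrong — so "which model is valid" is a question about context, not about one model being globally better.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ke_newton(m, v):
    # Simple, cheap, interpretable model
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    # More accurate, more complex model
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

def relative_error(m, v):
    # How far off the Newtonian approximation is, relative to relativity
    return abs(ke_newton(m, v) - ke_relativistic(m, v)) / ke_relativistic(m, v)

# Everyday context (a fast projectile, 1000 m/s): Newton is effectively exact
low = relative_error(1.0, 1000.0)

# Extreme context (0.9c): Newton is off by a large fraction
high = relative_error(1.0, 0.9 * C)

print(f"error at 1000 m/s: {low:.2e}")
print(f"error at 0.9c:     {high:.2%}")
```

Choosing by context (speed, here) keeps both advantages; averaging or mixing the two formulas mid-calculation would only smear their errors together.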