There is an obvious surface similarity—but so what? You constructed the sentence that way deliberately. You would need to make an analogy for arguing like that to have any force—and the required analogy looks like a bad one to me.
You would need to make an analogy for arguing like that to have any force—and the required analogy looks like a bad one to me.
How so? I’m pointing out that the only actual intelligent agents we know of don’t work like economic agents on the inside. That seems closely analogous to the case of Newtonian gravity vs. “crystal spheres”.
Economic agency/utility models may have the Platonic purity of crystal spheres, but:

1. We know for a fact they’re not what actually happens in reality, and
2. They have to be tortured considerably to make them “predict” what happens in reality.
It seems to me like arguing that we can’t build a good computer model of a bridge—because inside the model is all bits, while inside the actual bridge is all spinning atoms.
Computers can model anything. That is because they are universal. It doesn’t matter that computers work differently inside from the thing they are modelling.
The same applies to partially-recursive utility functions: they are a universal modelling tool, and can model any computable agent.
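The universality claim above can be made concrete with a small sketch (the toy policy and action set here are my own illustrative assumptions, not anything from the exchange): any computable decision procedure can be recast as utility maximisation by assigning the action the policy would pick a utility of 1 and every other action a utility of 0.

```python
def as_utility(policy):
    """Wrap an arbitrary computable policy as a utility function:
    the policy's chosen action gets utility 1.0, everything else 0.0."""
    def utility(state, action):
        return 1.0 if action == policy(state) else 0.0
    return utility

def toy_policy(state):
    # An arbitrary illustrative policy over two actions.
    return "left" if state % 2 == 0 else "right"

u = as_utility(toy_policy)

# Maximising the derived utility recovers the original policy exactly.
chosen = max(["left", "right"], key=lambda a: u(4, a))
print(chosen)  # "left", since toy_policy(4) == "left"
```

The wrapper is degenerate, of course, which is precisely the point at issue: it demonstrates that a utility function can always be constructed, not that the construction is an informative description of the agent.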
It seems to me like arguing that we can’t build a good computer model of a bridge—because inside the model is all bits, while inside the actual bridge is all spinning atoms.
Not at all. I’m saying that, just as it takes more bits to predict planetary motion with a system of crystal spheres than with a Newtonian model of the solar system, so it takes more bits to predict a human’s behavior with a utility function than to describe a human in terms of interests and tolerances.
Indeed, your argument seems to amount to saying that since everything is made of atoms, we should model bridges with atoms. What were your words? Oh yes:
they are a universal modelling tool
Right. That very universality is exactly what makes them a poor model of human intelligence: they don’t concentrate probability mass in the same way, and therefore don’t compress well.
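The “more bits” point can be illustrated with a toy count (the numbers and the threshold rule are my own illustrative assumptions): an agent whose real rule is a single price threshold needs only enough bits to name the threshold, while a fully generic utility table needs an independent value for every option.

```python
import math

# Toy agent: buys an item iff its price is at most some threshold,
# where prices range over 0..N-1 (N is an illustrative assumption).
N = 1024

# Structured description: one threshold out of N possibilities.
structured_bits = math.log2(N)   # 10.0 bits

# Generic utility table: an independent utility for each of the N
# prices, at, say, 8 bits of precision each.
table_bits = 8 * N               # 8192 bits

print(f"threshold model: {structured_bits:.0f} bits")
print(f"utility table:   {table_bits} bits")
```

Both descriptions predict the same choices, but the structured one is shorter by orders of magnitude, which is the sense in which the universal tool “doesn’t compress well”.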