That’s not a very fair comparison! You’re looking at the most detailed version of a neural network (which I would reject as a model anyway for the very reason that it needs much more resources than real brains to work) and comparing it to a simple utility-based model, and then sneaking in your intuitions for the UBM, but not the neural network (as RobinZ noted).
I could just as easily turn the tables and compare the second neural network here to a UDT-like utility-based model, where you have to compute your action in every possible scenario, no matter how improbable.
Anyway, I was criticizing utility-based models, in which you weight the possible outcomes by their probability. That involves a lot more than the vague notion that an animal “likes food and sex”.
Of course, as you note, even knowing that it likes food and sex gives some insight. But it clearly breaks down here: the dog’s decision to bark is made very quickly, and an actual human-insight-free, algorithmic computation of expected utilities, involving estimates of their probabilities, takes far too long to be a realistic model. The shortcuts used in a neural network skew the dog’s actions in predictable ways, which makes the network the better model and shows the value/belief distinction breaking down.
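To be concrete about the kind of computation I’m objecting to, here is a toy sketch; the actions, outcomes, probabilities and utilities are all made-up numbers, not anyone’s actual model of a dog:

```python
# Toy expected-utility calculation for "bark vs. stay quiet".
# Every number below is invented purely for illustration.

outcomes = {
    "bark": [
        # (probability of the outcome given the action, utility of the outcome)
        (0.7, -5.0),   # gets prodded
        (0.3, +2.0),   # scares off the intruder / gets attention
    ],
    "stay_quiet": [
        (0.9,  0.0),   # nothing happens
        (0.1, -1.0),   # misses a chance at attention
    ],
}

def expected_utility(action):
    # Weight each outcome's utility by its probability and sum.
    return sum(p * u for p, u in outcomes[action])

best = max(outcomes, key=expected_utility)
print({a: round(expected_utility(a), 2) for a in outcomes}, "->", best)
```

Even this tiny example needs an explicit probability and utility for every outcome it considers, which is exactly the part I’m saying takes too long to be realistic for a snap decision.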
I am still not very sympathetic to the idea that neural network models are simple. They include the utility function and all the creature’s beliefs.
A utility based model is useful—in part—since it abstracts those beliefs away.
Plus neural network models are renowned for being opaque and incomprehensible.
You seem to have some strange beliefs in this area. AFAICS, you can’t make blanket statements like “neural-net models are more accurate”. Both types of model can represent observed behaviour to any desired degree of precision.
You’re using a narrower definition of neural network than I am. Again, refer to the last link I gave for an example of a simple neural network, one no more complex than typical expected utility models. That NN is far from opaque and incomprehensible, wouldn’t you agree?
I am still not very sympathetic to the idea that neural network models are simple. They include the utility function and all the creature’s beliefs.
No, they just have activation weights, which don’t (afaict) distinguish between beliefs and values, or at least, don’t distinguish between “barking causes a prod which is bad” and “barking isn’t as good (or perhaps, as ‘shouldish’)”.
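For instance, here is roughly the sort of toy network I have in mind; the features and weights are invented labels, not anything from the link:

```python
# Toy one-layer "dog": stimuli in, an urge to bark out. The made-up weights
# are the only parameters; nothing in them separates "barking leads to a prod"
# (a belief) from "prods are bad" (a value).
import math

weights = {"stranger_at_door": 2.0, "prodded_after_barking_lately": -1.5}
bias = -0.5

def bark_urge(inputs):
    # Weighted sum of the inputs, squashed to a 0..1 urge to bark.
    activation = bias + sum(weights[k] * v for k, v in inputs.items())
    return 1 / (1 + math.exp(-activation))

print(bark_urge({"stranger_at_door": 1.0, "prodded_after_barking_lately": 0.0}))  # ~0.82
print(bark_urge({"stranger_at_door": 1.0, "prodded_after_barking_lately": 1.0}))  # 0.5
```

Operant conditioning in this picture just drags the −1.5 weight further down; whether you call that a changed belief or a changed value isn’t a question the model answers.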
A utility based model is useful—in part—since it abstracts those beliefs away.
The UBMs discussed in this context (see TL post) necessarily include probability weightings, which are used to compute expected utility, i.e. to trade off the probability of an event against its utility. So they’re certainly not abstracting those beliefs away.
Plus, you’ve spent the whole conversation explaining why your UBM of the dog lets you classify the operant conditioning (prodding the dog when it barks) as changing its beliefs and NOT its values. Do you remember that?
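Just to spell out what that framing commits you to, with made-up numbers: the prodding is treated as only moving the dog’s estimate of P(prod | bark), while the utilities themselves never change.

```python
# Made-up numbers for the belief-update story: utilities are held fixed,
# and training only changes the dog's estimated P(prod | bark).

u_prod, u_reward = -5.0, 2.0            # the "values", constant throughout

def eu_bark(p_prod):
    return p_prod * u_prod + (1 - p_prod) * u_reward

print(eu_bark(0.1))   # before conditioning: +1.3, so the dog barks
print(eu_bark(0.9))   # after conditioning:  -4.3, so it stays quiet
```

The network sketch above gets the same behaviour change out of a single weight, without ever committing to which side of that belief/value split it falls on.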