I am still not very sympathetic to the idea that neural network models are simple. They include the utility function and all the creature’s beliefs.
A utility based model is useful—in part—since it abstracts those beliefs away.
Plus neural network models are renowned for being opaque and incomprehensible.
You seem to have some strange beliefs in this area. AFAICS, you can’t make blanket statements like “neural-net models are more accurate”. Both types of model can represent observed behaviour to any desired degree of precision.
You’re using a narrower definition of neural network than I am. Again, refer to the last link I gave for an example of a simple neural network whose complexity is no greater than that of typical expected utility models. That NN is far from being opaque and incomprehensible, wouldn’t you agree?
I am still not very sympathetic to the idea that neural network models are simple. They include the utility function and all the creature’s beliefs.
No, they just have activation weights, which don’t (AFAICT) distinguish between beliefs and values—or at least, don’t distinguish between “barking causes a prod, which is bad” and “barking isn’t as good (or perhaps, as ‘shouldish’)”.
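For concreteness, here’s a minimal sketch of what I mean (the features, weights, and update rule are hypothetical illustrations, not the network from the link): the conditioning just shifts activation weights, and nothing in the weights themselves marks one part as a belief about prods and another as a disvalue of prods.

```python
def bark_score(features, weights):
    """Tiny linear 'network': a higher score means the dog is more inclined to bark."""
    return sum(f * w for f, w in zip(features, weights))

features = [1.0, 1.0]   # e.g. [stranger_at_door, owner_present] (made-up features)
weights = [0.8, 0.3]    # learned activation weights

before = bark_score(features, weights)

# Operant conditioning: barking was followed by a prod, so some update rule
# (a crude hypothetical one here) pushes the weights down.
weights = [w - 0.4 for w in weights]
after = bark_score(features, weights)

print(before, after)    # the score drops, but the weights alone don't say whether
                        # the dog now *expects* a prod or now *dislikes* barking
```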
A utility based model is useful—in part—since it abstracts those beliefs away.
The UBMs discussed in this context (see the TL post) necessarily include probability weightings; these are used to compute expected utility, which trades off the probability of each outcome against its utility. So such a model is certainly not abstracting those beliefs away.
Plus, you’ve spent the whole conversation explaining why your UBM of the dog allows you to classify the operant conditioning (of prodding the dog when it barks) as changing its beliefs and NOT its values. Do you remember that?
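Here’s a minimal sketch of that kind of UBM for the dog (all probabilities and utilities are made-up illustrations, not anything from the TL post): beliefs sit in an explicit probability, values in a separate utility function, and the conditioning is represented as a change to the belief only.

```python
# Values: utility of outcomes, plus a small intrinsic reward for barking.
# These are held fixed throughout -- the conditioning doesn't touch them.
utility = {"prod": -10.0, "no_prod": 0.0}
bark_bonus = 1.0

def expected_utility_of_barking(p_prod_given_bark):
    """Belief (a probability) and values (utilities) combine into expected utility."""
    return (bark_bonus
            + p_prod_given_bark * utility["prod"]
            + (1 - p_prod_given_bark) * utility["no_prod"])

# Before conditioning the dog thinks a prod is unlikely; afterwards, likely.
print(expected_utility_of_barking(0.05))  #  0.5 -> barking looks worthwhile
print(expected_utility_of_barking(0.90))  # -8.0 -> barking is now dispreferred,
                                          #         purely via the belief change
```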