Since I’m often annoyed when my posts are downvoted without explanation, and I saw that this post was downvoted, I’ll try to explain the downvotes.
Updating of values happens all the time; it’s called operant conditioning. If my dog barks and is immediately poked with a hot poker, its value of barking is updated. This is a useful adaptation, since being poked with a hot poker decreases fitness. If my dog tries to mate and immediately receives an electric shock, its value of mating is decreased. This is a harmful adaptation, since mating is a more fundamental fitness factor than electric shocks.
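(For concreteness, here is a minimal sketch of the framing I have in mind, in Python; the actions, the numbers, and the `punish` helper are all invented for the example, not anything real about dogs or about operant-conditioning theory:)

```python
# Purely illustrative: operant conditioning read as a direct update to the
# "value" an animal places on an action. All names and numbers are invented.

values = {"bark": 1.0, "mate": 1.0}   # how much the dog "values" each action
learning_rate = 0.5

def punish(action, severity):
    """Lower the value of an action that was immediately followed by pain."""
    values[action] -= learning_rate * severity

punish("bark", severity=1.0)   # hot poker right after barking
punish("mate", severity=1.0)   # electric shock right after mating
print(values)                  # {'bark': 0.5, 'mate': 0.5}
```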
So, you seem to be explaining an observation that is not observed using a fact that is not true.
Your disagreement apparently arises through using the term “value” in a different sense from me. If it helps you to understand, I am talking about what are sometimes called “ultimate values”.
Most organisms don’t update their values. They value the things evolution built into them—food, sex, warmth, freedom from pain, etc. Their values typically remain unchanged throughout their lives.
From my perspective, the dog’s values aren’t changed in your example. The dog merely associates barking with pain. The belief that a bark is likely to be followed by a poker prod is a belief, not a value. The dog still values pain-avoidance—just as it always did.
We actually have some theory indicating that true values should change rarely. Organisms should protect their values, since, judged by their current values, any change to those values looks very “bad”. Also, evolution wires in fitness-promoting values. These ideas help to explain why fixed values are actually extremely common.
Those are good points, but I still find your argument problematic.
First, do you know that dogs are capable of the abstract thought necessary to represent causality? You’re saying that the dog has added the belief “bark causes pain”, which combines with “pain bad”.
That may be how a programmer would try to represent it, since you can rely on having enough computational power to sweep through the search space quickly and find the “pain bad” module every time a “reason to bark” comes up. But is it good as a biological model? It requires the dog to keep a concept of a prod in memory indefinitely.
A simpler biological mechanism, consistent with the rest of neurobiology, would be to just lower the connection strengths that lead to the “barking” neuron so that it requires more activation of other “barking causes” to make it fire (and thus make the dog bark). I think that’s a more reasonable model of how operant conditioning works in this context.
This mechanism, in turn, is better described as lowering the “shouldness” of barking, which is ambiguous with respect to whether it’s a value or belief.
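(To make the contrast concrete, here is a minimal sketch of that weight-lowering mechanism in Python; the causes, weights, and threshold are invented for illustration:)

```python
# Purely illustrative: the weight-lowering account. The dog barks only if the
# summed, weighted activation of its "reasons to bark" crosses a threshold,
# and punishment scales down the incoming weights. Nothing labelled "belief"
# or "value" appears anywhere. All numbers are invented.

weights = {"stranger_at_door": 0.8, "other_dog_barks": 0.6}
THRESHOLD = 1.0

def will_bark(activations):
    drive = sum(weights[cause] * level for cause, level in activations.items())
    return drive >= THRESHOLD

def punish_barking(factor=0.7):
    for cause in weights:
        weights[cause] *= factor   # weaken connections into the "bark" neuron

print(will_bark({"stranger_at_door": 1.0, "other_dog_barks": 1.0}))  # True: 1.4 >= 1.0
punish_barking()
print(will_bark({"stranger_at_door": 1.0, "other_dog_barks": 1.0}))  # False: ~0.98 < 1.0
```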
It seems to be a common criticism of utility-based models that they do not map directly onto the underlying biological hardware.
That is true—but it is not what such models are for in the first place. Nobody thinks that if you slice open an animal you will find a utility function, and some representation of utility inside.
The idea is more that you could build a functionally equivalent model which exhibited such an architecture—and then gain insight into the behaviour of the model by examining its utility function.
I’m concerned with the weaker constraint that the model must conceptually map to the biological hardware, and in this respect the utility-based model you gave doesn’t work. There is no distinction, even conceptual, between values and beliefs: just synaptic weights from the causes-of-barking nodes to the bark node.
Furthermore, the utility-based model does not give insight here, because the “shortcuts” introduced by the neural hardware are fundamental to how the dog behaves. For example, the fact that the hardware does a quick, simple calculation affects how many options can be considered, and therefore whether e.g. value transitivity will break down.
So the utility-based model is more complex than a neural network and has worse predictive power, which means it doesn’t let you claim that the dog’s change in behavior resulted from a change in beliefs rather than values.
Values are fixed, while many beliefs vary in response to sensory input.
You don’t seem to appreciate the value of a utility based analysis.
Knowing that an animal likes food and sex, and doesn’t like being hit, provides all kinds of insights into its behaviour.
Such an analysis is much simpler than a neural network is, and it has the advantage that we can actually build and use the model—rather than merely dream about doing so in the far future, when computers are big enough to handle it, and neuroscience has advanced sufficiently.
That’s not a very fair comparison! You’re looking at the most detailed version of a neural network (which I would reject as a model anyway for the very reason that it needs much more resources than real brains to work) and comparing it to a simple utility-based model, and then sneaking in your intuitions for the UBM, but not the neural network (as RobinZ noted).
I could just as easily turn the tables and compare the second neural network here to a UDT-like utility-based model, where you have to compute your action in every possible scenario, no matter how improbable.
Anyway, I was criticizing utility-based models, in which you weight the possible outcomes by their probability. That involves a lot more than the vague notion that an animal “likes food and sex”.
Of course, as you note, even knowing that it likes food and sex gives some insight. But it clearly breaks down here: the dog’s decision to bark is made very quickly, and an actual human-insight-free, algorithmic computation of expected utilities, involving estimates of outcome probabilities, takes far too long to be a realistic model. The shortcuts used in a neural network skew the dog’s actions in predictable ways, showing the network to be a better model, and showing the value/belief distinction to break down.
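(For reference, a minimal sketch of the sort of expected-utility calculation I am criticising, in Python; the outcomes, probabilities, and utilities are invented for illustration:)

```python
# Purely illustrative: the kind of expected-utility model under discussion.
# Each action's outcomes are weighted by their estimated probability, and the
# action with the highest expected utility wins. All numbers are invented.

outcomes = {
    "bark":   [("scare_off_stranger", 0.6, +2.0), ("poker_prod", 0.4, -10.0)],
    "silent": [("nothing_happens",    1.0,  0.0)],
}

def expected_utility(action):
    return sum(prob * utility for _, prob, utility in outcomes[action])

print({action: expected_utility(action) for action in outcomes})
# roughly {'bark': -2.8, 'silent': 0.0} -> the model says: stay silent
```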
I am still not very sympathetic to the idea that neural network models are simple. They include the utility function and all the creature’s beliefs.
A utility based model is useful—in part—since it abstracts those beliefs away.
Plus neural network models are renowned for being opaque and incomprehensible.
You seem to have some strange beliefs in this area. AFAICS, you can’t make blanket statements like: neural-net models are more accurate. Both types of model can represent observed behaviour to any desired degree of precision.
You’re using a narrower definition of neural network than I am. Again, refer to the last link I gave for an example of a simple neural network, whose complexity is equal to or less than that of typical expected utility models. That NN is far from being opaque and incomprehensible, wouldn’t you agree?
No, neural network models just have activation weights, which don’t (afaict) distinguish between beliefs and values, or at least don’t distinguish between “barking causes a prod, which is bad” and “barking isn’t as good (or perhaps, as ‘shouldish’)”.
The UBMs discussed in this context (see TL post) necessarily include probability weightings, which are used to compute expected utility and so trade off the probability of an event against its utility. So a utility-based model certainly isn’t abstracting those beliefs away.
Plus, you’ve spent the whole conversation explaining why your UBM of the dog allows you to classify the operant conditioning (of prodding the dog when it barks) as changing its beliefs and NOT its values. Do you remember that?
Correct me if I’m wrong, but it’s only simpler if you already have a general-purpose optimizer ready to hand—in this case, you.
You have to have complicated scientists around to construct any scientific model—be it utility-based or ANN.
Since we have plenty of scientists around, I don’t see much point in hypothesizing that there aren’t any.
You seem to be implying that the complexity of utility based models lies in those who invent or use them. That seems to be mostly wrong to me: it doesn’t matter who invented them, and fairly simple computer programs can still use them.
If you’ve seen it work, I’ll take your word for it.
Incidentally, I did not claim that dogs can perform abstract thinking—I’m not clear on where you are getting that idea from.
You said that the dog had a belief that a bark is always followed by a poker prod. This posits separate entities and a way that they interact, which looks to me like abstract thought.
The definition of “abstract thought” seems like a can of worms to me.
I don’t really see why I should go there.
Hm, I never before realized that operant conditioning is a blurring of beliefs and values: the new frequency of barking can be explained either by a change in the utility of barking, or by a change in the belief about what will result from barking.
IMO, “a blurring of beliefs and values” is an unhelpful way of looking at what happens. It is best to consider an agent as valuing freedom from pain, and the association between barking and poker prods to be one of its beliefs.
If you have separated out values from beliefs in a way that leads to frequently updated values, all that means is that you have performed the abstraction incorrectly.
Or the dog values not being in pain more than it values barking or mating...
Because a comment is down-voted, that doesn’t mean it is incorrect.
This particular comment implicitly linked people’s values to their reproductive success. People don’t like to hear that they are robot vehicles built to propagate their genes. It offends their sense of self-worth. Their mental marketing department spends all day telling everyone what an altruistic and nice person they are—and they repeat it so many times that they come to believe it themselves. That way their message comes across with sincerity. So: the possibility of biology underlying their motives is a truth that they often want to bury—and place as far out of sight as possible.
While we can never escape our biology entirely, I dispute any suggestion that the selfish gene is always the best level of abstraction, or best model, for human behavior. I assume you agree even though that did not come across in this paragraph.
Human behaviour is often illuminated by the concept of memes. Humans are also influenced by the genes of their pathogens (or other manipulators). If you cough or sneeze, that behaviour is probably not occurring because it benefits you.
Similarly with cancer or back pain—not everything is an adaptation.