that taking a measurement is not guaranteed to accurately capture the world state, because a sensor can be faulty, and it is not always possible to distinguish a faulty sensor from a reliable sensor.
No. Dynomight already addressed this: if you have an unreliable sensor (i.e. any sensor that has ever existed in the real world), then that simply reduces how useful it is, because it changes your posterior less than a more reliable one would. The VoI remains positive; I refer you to Ramsey and Savage on this particular point of decision theory.
All of your additional comments are generally wrong, and reflect an extremely rigid absolutist approach to making decisions. We use unreliable correlated measures all the time, this is in fact ‘technically true’ and that is the point, and yes, your entire example of doctors is simply due to irrationality and does not refute the decision theory point and it has nothing to do with ‘being obvious it is a false positive’ except in the trivial sense that for a poor measure the posterior of a true positive remains far smaller than it being a false positive and may not motivate a decision, shrinking the VoI towards zero, which will frequently be so small as to not justify the cost of testing (explicitly pointed out by Dynomight). It is definitely the case that many tests cost too much for too little information and should not be run because the VoI is often zero (for a rational decision maker) and the test is simply a loss as it will not change any decisions. Nevertheless, the value of free information is always greater than or equal to zero, and if free information makes you worse off, that implies somewhere there is an irrationality.
(The really relevant problem with this in the context of medicine is that decision theory is considering single agents in a stochastic environment, an idealized physician ordering tests to try to optimize patient health, because game theory hasn’t been invented yet; when you bring in multiple agents with different goals and mechanisms like lawsuits, then free information can be quite harmful, but this too is not lost on most people, including Dynomight at the end.)
NOTE: I wrote this as a separate reply because it’s addressing your points about decision theory directly, and is not about the specific scenario discussed with the medical system.
if you have an unreliable sensor (i.e. any sensor that has ever existed in the real world), then that simply reduces how useful it is, because it changes your posterior less than a more reliable one would.
I think the crux here is that you seem to be saying the usefulness of reading a sensor’s value is in some interval [0, 1], where 1 represents that the value provided by the sensor is perfectly trustworthy and 0 is that the value provided by the sensor is totally useless; i.e., it’s random noise. Under this belief, you’re saying that it is always rational to acquire as many sensors as possible, because there is no downside to acquiring useless sensors. When you run your filter over all of the sensors, anything that has a usefulness of 0 is going to get dropped from the final result. Likewise, low-but-non-zero usefulness sensors are weighted accordingly in the final result.
In my work, this is called sensor fusion. So far, so good.
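As a minimal sketch of that framing (Python, with invented readings and usefulness weights), fusing by usefulness-weighted averaging makes the zero-usefulness sensor drop out exactly as described:

```python
def fuse(readings, usefulness):
    """Fuse sensor readings as a weighted average, where each weight is
    that sensor's usefulness in [0, 1]. A usefulness of 0 means the
    reading is pure noise, so it contributes nothing to the estimate."""
    total = sum(usefulness)
    if total == 0:
        raise ValueError("no informative sensors to fuse")
    return sum(r * u for r, u in zip(readings, usefulness)) / total

# Three sensors measuring the same quantity: one good, one mediocre,
# one pure noise. The noise sensor's wild reading has zero influence,
# so there is no accuracy downside to including it -- only compute.
print(fuse([10.1, 9.5, 500.0], [0.9, 0.3, 0.0]))  # ~9.95
```

(Real fusion would weight by inverse noise variance rather than an abstract ‘usefulness’, but the limiting behavior at usefulness 0 is the same.)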
I can argue that acquiring each sensor has a cost associated with it, but it seems like the idea of “free information” is intended to deflect that argument. Let us assume that the sensors are provided for free, and it’s just a question of “given an arbitrary number of sensors, with different usefulness, how many do you want to fuse when trying to model the correct world state?”
I think what you’ve said above implies that a rational actor should always want more sensors.
Nevertheless, the value of free information is always greater than or equal to zero, and if free information makes you worse off, that implies somewhere there is an irrationality.
More sensors leads to more sensor values (“information”), and the rational actor will simply use the usefulness of each sensor (which for the sake of argument we’ll assume that they know exactly) when weighting each sensor value.
In the real world, I still disagree with this claim. Computational complexity[1] exists. There is a cost to interpreting and fusing an arbitrary number of sensor values. Each additional sensor, even if it was provided for free, is going to incur a real cost in computation before its value can be used to make a decision. A rational actor would not accept an arbitrary number of useless sensors if it takes non-zero computational cycles to disregard them.
When you include the cost of computation, now the value of those sensors is in some interval [0 - c, 1 - c], where c is how much it costs in computational effort[2] to include the sensor in your filter. In this world, sensors can have less than zero usefulness, i.e. it is actively detrimental to include the sensor in your filter. Your filter functions worse with that sensor than it does without it.
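A toy sketch of that accounting (Python; the usefulness values and the per-sensor cost c are invented for illustration): a sensor is worth including only when its net value in [0 - c, 1 - c] is positive:

```python
def net_value(usefulness, c):
    """Net value of including one sensor: its usefulness in [0, 1]
    minus the computational cost c of fusing it. Can be negative."""
    return usefulness - c

c = 0.05                        # per-sensor cost of running the filter on it
sensors = [0.9, 0.3, 0.06, 0.0]
kept = [u for u in sensors if net_value(u, c) > 0]
print(kept)  # -> [0.9, 0.3, 0.06]; the useless sensor is now a net loss
```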
I believe the only way out of this is to ignore computational complexity and assume that c = 0, but we know that isn’t true. Consider the trivial thought experiment of me sitting here providing you with an endless series of useless facts about a fictional D&D campaign I’m running, like “A miraksaur is a type of dinosaur native to the planet Eurid.” How rational would it be for you to keep trying to enter each additional value into your world state? The facts are totally irrelevant, but if we ignore computational costs, there’s no downside to doing so. The reason you would be wise to tune me out in that scenario is that c is definitely greater than 0.
Note that c is only fixed per value when the algorithm for fusing information runs in linear time, O(N). We often use something like an extended Kalman filter (EKF) for sensor fusion, and there each additional value incurs an increasingly higher computational cost: the update step forms and inverts an N×N innovation covariance matrix, so the cost grows superlinearly in the number of fused values, with the inversion alone roughly O(N^3). Sensors with low usefulness are especially penalized. It gets to a point where it doesn’t matter how useful a sensor is; it would be irrational to include it, because running the full computation becomes prohibitively expensive.
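One way to see that scaling is a back-of-the-envelope operation count for the Kalman measurement update (a rough cost model, not a benchmark; the exact exponents depend on the implementation, but the update has to work with an N×N innovation covariance matrix, so it is superlinear in the number of fused values):

```python
def update_cost(n_state, n_meas):
    """Rough flop count for one Kalman measurement update with an
    n_state-dimensional state and n_meas fused sensor values. The
    dominant terms involve the n_meas x n_meas innovation covariance
    matrix S; inverting S alone is ~n_meas**3."""
    form_s = n_meas * n_meas * n_state   # H P H^T (an n_meas x n_meas result)
    invert_s = n_meas ** 3               # invert S
    gain = n_state * n_meas * n_meas     # K = P H^T S^-1
    return form_s + invert_s + gain

# Doubling the sensor count far more than doubles the update cost,
# so a marginally useful sensor gets expensive to keep.
print(update_cost(6, 10))  # -> 2200
print(update_cost(6, 20))  # -> 12800, ~5.8x the cost for 2x the sensors
```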
If you’re worried about computational complexity, that’s OK. It’s not something that I mentioned because (surprisingly enough...) it isn’t something that any of the doctors discussed. If you like, let’s call that a “valid cost” just like the medical risks and financial/time costs of doing tests. The central issue is whether it’s valid to worry about information causing harmful downstream medical decisions.
I’m sorry, but I just feel like we’ve moved the goalposts, then.
I don’t see a lot of value in trying to disentangle the concept of information from 1.) costs to acquire that information, and 2.) costs to use that information, just to make some type of argument that a certain class of actor is behaving irrationally.
It starts to feel like “assume a spherical cow”, but we’re applying that simplification to the definition of what it means to be rational. First, it isn’t free to acquire information. But second, even if I assume for the sake of argument that the information is free, it still isn’t free to use it, because computation has costs.
If a theory of rational decision making doesn’t include that fact, it’ll come to conclusions that I think are absurd, like the idea that the most rational thing someone can do is acquire literally all available information before making any decision.
yes, your entire example of doctors is simply due to irrationality
So first you say this.
But then you start to backtrack
in the trivial sense that for a poor measure the posterior of a true positive remains far smaller than it being a false positive and may not motivate a decision, shrinking the VoI towards zero, which will frequently be so small as to not justify the cost of testing
And further admit
It is definitely the case that many tests cost too much for too little information and should not be run because the VoI is often zero (for a rational decision maker) and the test is simply a loss as it will not change any decisions.
But then you try to defend the initial claim, that the doctors are being irrational
Nevertheless, the value of free information is always greater than or equal to zero, and if free information makes you worse off, that implies somewhere there is an irrationality.
But we’ve already established that the tests are not free in the world we live in.
If you’re going to prove the doctors are being irrational in the world we live in, then you can’t change a core part of the problem statement. The tests do have costs—in time, in money, in available machines, in false positives that may result in surgeries or other actions with non-zero risk, and in a dozen other ways, some of which were alluded to by Dynomight, like the possibility of lawsuits.
My whole argument, which you said is “generally wrong”, is predicated on the fact that this information is not free. I don’t accept the notion that people who make decisions based on the reality of a world where information is not free are being irrational just because we can hypothesize about worlds where that information is free.
Do you still disagree?

[1] https://en.wikipedia.org/wiki/Computational_complexity