NOTE: I wrote this as a separate reply because it’s addressing your points about decision theory directly, and is not about the specific scenario discussed with the medical system.
if you have an unreliable sensor (ie. any sensor that has ever existed in the real world), then that simply reduces how useful it is, because it changes your posterior less than a more reliable one would.
I think the crux here is that you seem to be saying the usefulness of reading a sensor’s value is in some interval [0, 1], where 1 means the value provided by the sensor is perfectly trustworthy and 0 means it is totally useless, i.e., random noise. Under this belief, you’re saying that it is always rational to acquire as many sensors as possible, because there is no downside to acquiring useless sensors. When you run your filter over all of the sensors, anything with a usefulness of 0 gets dropped from the final result. Likewise, low-but-non-zero-usefulness sensors are weighted accordingly in the final result.
In my work, this is called sensor fusion. So far, so good.
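To make the weighting picture concrete, here is a minimal sketch of that kind of fusion, assuming scalar readings and treating “usefulness” directly as a weight; the function name, the numbers, and the sensor values are illustrative, not taken from any particular library or system:

```python
import numpy as np

def fuse(readings, usefulness):
    """Fuse scalar sensor readings, weighting each by its usefulness.

    usefulness = 0 means pure noise (it gets zero weight and drops out);
    usefulness = 1 means a perfectly trustworthy reading.
    """
    readings = np.asarray(readings, dtype=float)
    weights = np.asarray(usefulness, dtype=float)
    if weights.sum() == 0:
        return None  # nothing informative to fuse
    return np.average(readings, weights=weights)

# A trustworthy sensor, a mediocre one, and a pure-noise one:
print(fuse([10.0, 12.0, 437.0], [1.0, 0.3, 0.0]))  # the noise sensor contributes nothing
```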
I could argue that acquiring each sensor has a cost associated with it, but it seems like the idea of “free information” is intended to deflect that argument. So let us assume that the sensors are provided for free, and it’s just a question of “given an arbitrary number of sensors, with different usefulness, how many do you want to fuse when trying to model the correct world state?”
I think what you’ve said above implies that a rational actor should always want more sensors.
Nevertheless, the value of free information is always greater than or equal to zero, and if free information makes you worse off, that implies somewhere there is an irrationality.
More sensors lead to more sensor values (“information”), and the rational actor will simply use the usefulness of each sensor (which, for the sake of argument, we’ll assume they know exactly) when weighting each sensor value.
In the real world, I still disagree with this claim. Computational complexity[1] exists. There is a cost to interpreting and fusing an arbitrary number of sensor values. Each additional sensor, even if it was provided for free, is going to incur an actual cost in computation before that value can be used to make a decision. A rational actor would not accept an arbitrary number of useless sensors if it is going to take non-zero computational cycles to disregard them.
When you include the cost of computation, the value of those sensors is in some interval [0 - c, 1 - c], where c is how much it costs in computational effort[2] to include the sensor in your filter. In this world, sensors can have less than zero usefulness, i.e., it is actively detrimental to include the sensor in your filter: your filter performs worse with that sensor than without it.
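As a toy illustration of that point, here is a sketch where each sensor’s value is discounted by a flat per-sensor cost c; the sensor names, usefulness numbers, and cost are all made up for the example:

```python
# Hypothetical sensors with known usefulness, and a flat per-sensor fusion cost c.
sensors = {"lidar": 0.90, "wheel_odometry": 0.40, "noisy_gadget": 0.02}
c = 0.05

for name, usefulness in sensors.items():
    net = usefulness - c  # value once computation is accounted for
    decision = "fuse it" if net > 0 else "skip it (net negative)"
    print(f"{name}: usefulness={usefulness:.2f}, net={net:+.2f} -> {decision}")
```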
I believe the only way out of this is to ignore computational complexity and assume that c = 0, but we know that isn’t true. Consider the trivial thought experiment of me sitting here providing you with a never-ending series of useless facts about a fictional D&D campaign I’m running, like “A miraksaur is a type of dinosaur native to the planet Eurid.” How rational would it be for you to keep trying to enter each additional value into your world state? They’re totally irrelevant, but if we ignore computational costs, there’s no downside to doing so. The reason you would be wise to tune me out in that scenario is that c is definitely greater than 0.
Note that c is only fixed per value when the algorithm for fusing information has linear time complexity, O(N). We often use something like an extended Kalman filter (EKF) for sensor fusion. In that scenario, each additional value incurs an increasingly higher cost of computational effort to include it, so sensors with low usefulness are especially penalized. If I recall correctly, it is O(N^2). It gets to the point where it doesn’t matter how useful a sensor is: it would be irrational to try to include it, because running the full computation becomes prohibitively expensive.
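For a rough feel of how that cost grows, here is an illustrative sketch that times a standard (linear) Kalman measurement update as the number of fused measurement values m grows; the EKF update has the same shape once the models are linearized. The state dimension and random matrices are placeholders, and the exact exponent depends on the formulation, but the m-by-m innovation-covariance inversion makes the cost grow much faster than linearly in m:

```python
import time
import numpy as np

n = 50  # state dimension (arbitrary for this sketch)
for m in (10, 100, 1000):            # number of fused measurement values
    P = np.eye(n)                    # state covariance
    H = np.random.randn(m, n)        # measurement model
    R = np.eye(m)                    # measurement noise covariance
    t0 = time.perf_counter()
    S = H @ P @ H.T + R              # innovation covariance (m x m)
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    P_new = (np.eye(n) - K @ H) @ P  # updated covariance
    print(f"m = {m:4d}: measurement update took {time.perf_counter() - t0:.4f} s")
```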
If you’re worried about computational complexity, that’s OK. It’s not something that I mentioned because (surprisingly enough...) this isn’t something that any of the doctors discussed. If you like, let’s call that a “valid cost” just like the medical risks and financial/time costs of doing tests. The central issue is if it’s valid to worry about information causing harmful downstream medical decisions.
I’m sorry, but I just feel like we’ve moved the goalposts, then.
I don’t see a lot of value in trying to disentangle the concept of information from 1.) costs to acquire that information, and 2.) costs to use that information, just to make some type of argument that a certain class of actor is behaving irrationally.
It starts to feel like “assume a spherical cow”, but we’re applying that simplification to the definition of what it means to be rational. First, it isn’t free to acquire information. But second, even if I assume for the sake of argument that the information is free, it still isn’t free to use it, because computation has costs.
If a theory of rational decision making doesn’t include that fact, it’ll come to conclusions that I think are absurd, like the idea that the most rational thing someone can do is acquire literally all available information before making any decision.
[1] https://en.wikipedia.org/wiki/Computational_complexity