There’s still a value of the information itself – or at least it seems to me like there is, if only in principle – even after it’s been parsed/processed and is ready to ‘use’, e.g. for reasoning, updating belief networks, etc.
I gave the example of a Kalman filter in my other post. A Kalman filter is a special case of recursive Bayesian estimation. It’s computationally intensive to run for an arbitrary number of state variables because of how it scales: each update involves matrix operations whose cost grows superlinearly with the dimension. If you had a faster algorithm for doing this, you could revolutionize the field of autonomous systems + self-driving vehicles + robotics + etc.
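To make the Kalman filter point concrete, here’s a minimal one-dimensional sketch (all parameter values invented for illustration). In the multivariate case every product below becomes a matrix product and the division becomes a matrix inversion, which is where the superlinear per-update cost comes from.

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.5):
    """One predict+update cycle of a scalar Kalman filter.

    x: state estimate, P: estimate variance, z: new measurement.
    F, Q, H, R: toy model parameters (transition, process noise,
    observation, measurement noise) -- made up for illustration.
    """
    # Predict: propagate the estimate and its uncertainty forward.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement via the Kalman gain.
    S = H * P_pred * H + R          # innovation variance
    K = P_pred * H / S              # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Start uncertain (P = 1.0) and fold in a measurement of 1.2:
# the estimate moves toward the measurement and the variance shrinks.
x, P = kalman_step(x=0.0, P=1.0, z=1.2)
```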
The fact that “in principle” information provides value doesn’t matter, because the very example you gave of “updating belief networks” is exactly what a Kalman filter captures, and that’s what I’m saying is limiting how much information you can realistically handle. At some point I have to say, look, I can reasonably calculate a new world state based on 20 pieces of data. But I can’t do it if you ask me to look at 2000 pieces of data, at least not using the same optimal algorithm that I could run for 20 pieces of data. The time-complexity of the algorithm for updating my world state makes it prohibitively expensive to do that.
This really matters. If we pretend that agents can update their world state without incurring a cost of computation, and that it’s the same computational cost to update a world state based on 20 measurements as it would take for 2000 measurements, or if we pretend it’s only a linear cost and not something like N^2, then yes, you’re right, more information is always good.
But if there are computational costs, and they do not scale linearly (as with a Kalman filter), then there can be negative net value in trying to include low-quality information in the update of your world state.
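To put rough numbers on the 20-vs-2000 comparison (a toy cost model, purely illustrative): if the update cost grows quadratically in the number of measurements, then 100x more data costs 10,000x more compute, not 100x more.

```python
def update_cost(n_measurements, exponent=2):
    """Toy cost model: cost grows as n**exponent (illustrative only)."""
    return n_measurements ** exponent

# Going from 20 to 2000 measurements is 100x the data...
quadratic_ratio = update_cost(2000) / update_cost(20)   # ...but 10,000x the cost
linear_ratio = update_cost(2000, 1) / update_cost(20, 1)  # vs. 100x if cost were linear
```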
It is possible that the doctors are behaving irrationally, but I don’t think any of the arguments here prove it. Similar to what mu says in their post here.
You’re not wrong, but you’re like deliberately missing the point!
You even admit the point:
The fact that “in principle” information provides value doesn’t matter
Yes, the point was just that ‘in principle’, any information provides value.
I think maybe what’s missing is that the ‘in principle’ point deliberately ignores costs – to make the point ‘sharper’ – and those costs are, by the time you have used some information, also ‘sunk costs’.
The point is not that there are no costs or that the total value of benefits always exceeds the corresponding total anti-value of costs. The ‘info profit’ is not always positive!
The point is that the benefits are always (strictly) positive – in principle.
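That ‘in principle’ claim is the standard nonnegative-value-of-information result: if the information itself is free, deciding after observing it can never do worse in expectation than deciding without it. A toy sketch (all probabilities and utilities invented for illustration):

```python
# Two states (good/bad), two actions; all numbers are made up.
p_good = 0.3
U = {("act", "good"): 10.0, ("act", "bad"): -5.0,
     ("wait", "good"): 0.0,  ("wait", "bad"): 0.0}

def eu(action, p):
    """Expected utility of an action given P(state = good) = p."""
    return p * U[(action, "good")] + (1 - p) * U[(action, "bad")]

# Best expected utility deciding with no extra information:
eu_prior = max(eu(a, p_good) for a in ("act", "wait"))

# A noisy but free signal: P(positive | good) = 0.8, P(positive | bad) = 0.1.
p_pos_g, p_pos_b = 0.8, 0.1
p_pos = p_good * p_pos_g + (1 - p_good) * p_pos_b
p_good_given_pos = p_good * p_pos_g / p_pos
p_good_given_neg = p_good * (1 - p_pos_g) / (1 - p_pos)

# Decide optimally after each signal outcome, then average over outcomes:
eu_with_info = (p_pos * max(eu(a, p_good_given_pos) for a in ("act", "wait"))
                + (1 - p_pos) * max(eu(a, p_good_given_neg) for a in ("act", "wait")))

value_of_info = eu_with_info - eu_prior  # never negative when the signal is free
```

Of course, this ignores exactly the processing costs discussed above – which is the whole point of the ‘in principle’ qualifier.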
Nope!
That’s a cost to use or process the information.
There’s still a value of the information itself – or at least it seems to me like there is, if only in principle – even after it’s been parsed/processed and is ready to ‘use’, e.g. for reasoning, updating belief networks, etc.
Then I’m not sure what our disagreement is.