How would you define loss of knowledge?
Basically, information that can be handled in “value of information” style calculations. So, if I learn information such that my accuracy of understanding the world increases, my knowledge is increased. For instance, if I learn the names of everyone in my extended family.
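For concreteness, here is a minimal sketch of the kind of calculation I have in mind, in standard expected-value-of-information form; the symbols U, a, and e are just placeholders for utility, the available actions, and the evidence learned:

\[
% Value of learning evidence E: expected utility of the best action
% chosen after seeing the outcome e, minus the best action chosen without it.
\mathrm{VoI}(E) \;=\; \mathbb{E}_{e \sim E}\Big[\, \max_{a}\; \mathbb{E}[\,U \mid a, e\,] \Big] \;-\; \max_{a}\; \mathbb{E}[\,U \mid a\,]
\]

If that quantity is positive, learning the information improves your decisions, which is the sense in which I’d say knowledge has increased; losing the information forfeits that expected gain.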
Ok, but in this case do you mean “loss of knowledge” as in “loss of knowledge harbored within the brain”, or “loss of knowledge no matter where it’s stored, be it a book, a brain, a text file, etc.”?
Furthermore, does losing copies of a certain piece of knowledge count as loss of knowledge? What about translations of said knowledge (into another language or another philosophical/mathematical framework) that don’t add any new information, just make it accessible to a larger demographic?
I was thinking the former, but I guess the latter could also be relevant/count. It seems like there’s no strict cut-off. I’d expect a utilitarian to accept trade-offs against all these kinds of knowledge, conditional on the total expected value being positive.
Well, the problem with the former (knowledge harbored within the brain) is that it’s very vague and hard to define.
Say I have a method to improve the efficacy of VX (an easily weaponizable nerve agent). As a utilitarian I conclude this information is going to be harmful, so I can purge it from my hard drive, burn the papers I used to come up with it, etc.
But I can’t wipe my head clean of the information; at best I can resolve never to talk about it to anyone and not to accord it much import, such that I may eventually forget it. But that’s not destruction per se, it’s closer to lying, to not sharing the information with anyone (even if asked specifically), or to biasing your brain towards transmitting and remembering certain pieces of information (which we do all the time).
However, I don’t see anything contentious about this case, nor about any other case of information destruction, as long as it is for the greater utility.
I think in general people don’t advocate for destroying/forgetting information because:
a) It’s hard to do
b) As a general rule of thumb the accumulation of information seems to be a good thing, even if the utility of a specific piece of information is not obvious
But this is more of a heuristic than an exact principle.
I’d agree that the first one is generally pretty far removed from common reality, but I think it’s a useful thought experiment.
I was originally thinking of this more in terms of “removing useful information” than “removing expected-harmful information”, but good point; the latter could be interesting too.
Well, I think the “removing useful information” bit contradicts utility to begin with.
As in, if you are a utilitarian, useful information == helps maximize utility. Thus the trade-off is not possible.
I can think of some contrived examples where the trade-off is possible (e.g. where the information is harmful now but will be useful later), but in that case it’s so easy to “hide” information in the modern age, instead of destroying it entirely, that the problem seems too theoretical to me.
But at the end of the day, assuming you reached a contrived enough situation where the information must be destroyed (or where merely hiding it would deprive other people of the ability to discover further useful information), I think the utilitarian perspective has nothing fundamental against destroying it. However, no matter how hard I try, I can’t really think of a very relevant example where this could be the case.
One extreme case would be committing suicide because your secret is that important.
A less extreme case may be being OK with forgetting information; you’re losing value, but the cost to maintain it wouldn’t be worth it. (In this case the information is positive, though.)