Would anyone here disagree with the statement: "Utilitarians should generally be willing to accept losses of knowledge / epistemics for other resources, conditional on the expected value of the trade being positive"?
[ not a utilitarian; discount my opinion appropriately ]
This hits one of the thorniest problems with utilitarianism: expectations about value over time differ greatly depending on the timescales and assumptions involved.
If one is thinking truly long-term, it’s hard to imagine what resource is more valuable than knowledge and epistemics. I guess tradeoffs in WHICH knowledge to gain/lose have to be made, but that’s an in-category comparison, not a cross-category one. Oh, and trading it away to prevent total annihilation of all thinking/feeling beings is probably right.
My thinking is that for utilitarians, these are generally instrumental, not terminal, values. Often they're pretty important instrumental values, but that still means they could be traded off against the terminal values. Of course, if they are "highly important" instrumental values, then something very large would have to be offered for a trade to be worth it (preventing total annihilation being one example).
I think we’re agreed that resources, including knowledge, are instrumental (though as a human, I don’t always distinguish very closely). My point was that for very-long-term terminal values, knowledge and accuracy of evaluation (epistemics) are far more important than almost anything else.
It may be that there’s a declining marginal value for knowledge, as there is for most resources, and once you know enough to confidently make the tradeoffs, you should do so. But if you’re uncertain, go for the knowledge.
Non-Bayesian utilitarians who are ambiguity-averse sometimes need to sacrifice "expected utility" to gain more certainty (in quotes because expected utility need not be well defined for them).
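A minimal sketch of that distinction, assuming made-up payoffs and a Gilboa–Schmeidler-style maxmin rule for the ambiguity-averse agent:

```python
# Toy illustration: an ambiguity-averse agent using maxmin expected utility
# can turn down a gamble that a single-prior expected-utility maximizer accepts.
# All payoffs and probabilities are invented for illustration.

def expected_utility(payoffs, probs):
    return sum(p * u for p, u in zip(probs, payoffs))

def maxmin_expected_utility(payoffs, candidate_priors):
    # Evaluate the worst case over all priors the agent considers plausible.
    return min(expected_utility(payoffs, probs) for probs in candidate_priors)

# Option A: a "trade away some knowledge" gamble with ambiguous probabilities.
ambiguous_payoffs = [10.0, -8.0]             # if the trade works out / backfires
candidate_priors = [[0.7, 0.3], [0.4, 0.6]]  # the agent can't pin down one prior

# Option B: keep the knowledge, a known modest payoff.
certain_payoff = 1.0

best_guess_prior = [0.55, 0.45]  # midpoint "Bayesian" prior
print(expected_utility(ambiguous_payoffs, best_guess_prior))        # ~1.9 > 1.0: accept
print(maxmin_expected_utility(ambiguous_payoffs, candidate_priors))  # -0.8 < 1.0: refuse
```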
Doesn’t being willing to accept a trade *directly follow* from the expected value of the trade being positive? Isn’t that like, the *definition* of when you should be willing to accept a trade? The only disagreement would be how likely it is that losses of knowledge / epistemics are involved in positive value trades. (My guess is it does happen rarely.)
I'd generally say that, but wouldn't be surprised if some disagreed; their argument would be something that, to me, would sound like a modification of utilitarianism: [utilitarianism + epistemic terminal values].
If you have epistemic terminal values then it would not be a positive expected value trade, would it? Unless “expected value” is referring to the expected value of something other than your utility function, in which case it should’ve been specified.
Yep, I would generally think so.
I was doing what may be a poor steelman of my assumptions of how others would disagree; I don’t have a great sense of what people who would disagree would say at this point.
Happiness + Knowledge. (A related question is, do people with these values drink?)
Only if the trade is voluntary. If the trade is forced (e.g. in healthcare) then you may have two bad options, and the option you do want is not on the table.
In general, I would agree with the above statement (and technically speaking, I have made such trade-offs). But I do want to point out that it's important to consider what the loss of knowledge/epistemics entails. Certain epistemic sacrifices have minimal costs (I'm very confident that giving up FDT for CDT for the next 24 hours won't affect me at all), while some have unbounded costs (if giving up materialism caused me to abandon cryonics, it would be hard to quantify how large a blunder that was). This is especially true of epistemic sacrifices that allow you to be unboundedly exploited by an adversarial agent.
As a result, even when the expected value looks positive to me, I'll still try to avoid these kinds of trade-offs, because certain black swans (i.e., bumping into an adversarial agent that exploits your lack of knowledge about something) make such bets very high-risk.
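To make the black-swan worry concrete, here is a toy calculation with hypothetical numbers (not the commenter's):

```python
# A knowledge-for-resources trade that looks positive on a naive expected-value
# calculation, but flips negative once a small-probability "adversarial
# exploitation" tail is priced in. All numbers are made up.

naive_gain = 5.0            # expected benefit from the resources gained
naive_loss = 3.0            # expected direct cost of the knowledge given up
p_exploited = 0.01          # chance an adversary exploits the resulting blind spot
loss_if_exploited = 500.0   # hard-to-bound cost of being exploited

ev_naive = naive_gain - naive_loss
ev_with_tail = ev_naive - p_exploited * loss_if_exploited

print(ev_naive)      # +2.0: looks like a good trade
print(ev_with_tail)  # -3.0: not worth it once the tail risk is included
```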
This sounds pretty reasonable to me; it seems like you're basically trying to maximize expected value, but you don't always trust your initial intuitions.
[What “utilitarian” means could use some resolving, so I just treated this as “people”.]
I would disagree. I tried to find the relevant post in the sequences and found this along with it:
Would I accept that processes that take into account resource constraints might be more effective? Certainly, though I think of that as 'starting the journey in a reasonable fashion' rather than 'going backwards', as your statement brings to mind.
How would you define loss of knowledge?
Basically, information that can be handled in "value of information" style calculations. So if I learn information such that the accuracy of my understanding of the world increases, my knowledge has increased. For instance, if I learn the names of everyone in my extended family.
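For concreteness, a minimal value-of-information calculation in the standard decision-theoretic sense; the payoffs and probabilities below are hypothetical:

```python
# "Knowledge" here is anything that improves the accuracy of your decisions:
# its value is how much better you expect to decide with it than without it.

p_state = 0.5                     # prior probability the world is in state "good"
payoff = {                        # utility of (action, state)
    ("act", "good"): 10.0, ("act", "bad"): -12.0,
    ("pass", "good"): 0.0, ("pass", "bad"): 0.0,
}

def best_ev(p_good):
    """Expected utility of the best action given P(state = good)."""
    ev_act = p_good * payoff[("act", "good")] + (1 - p_good) * payoff[("act", "bad")]
    ev_pass = p_good * payoff[("pass", "good")] + (1 - p_good) * payoff[("pass", "bad")]
    return max(ev_act, ev_pass)

ev_without_info = best_ev(p_state)
# With perfect information you learn the state first, then choose:
ev_with_info = p_state * best_ev(1.0) + (1 - p_state) * best_ev(0.0)
value_of_information = ev_with_info - ev_without_info

print(value_of_information)  # 5.0: the most you'd trade away for this knowledge
```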
Ok, but in this case do you mean "loss of knowledge" as in "loss of knowledge harbored within the brain", or "loss of knowledge no matter where it's stored, be it a book, a brain, a text file, etc."?
Furthermore, does losing copies of a certain piece of knowledge count as loss of knowledge? What about translations of said knowledge (into another language or another philosophical/mathematical framework) that don't add any new information, just make it accessible to a larger demographic?
I was thinking the former, but I guess the latter could also be relevant/count. It seems like there’s no strict cut-off. I’d expect a utilitarian to accept trade-offs against all these kinds of knowledge, conditional on the total expected value being positive.
Well, the problem with the former (knowledge harbored within the brain) is that it’s very vague and hard to define.
Say I have a method to improve the efficacy of VX (an easily weaponizable nerve agent). As a utilitarian I conclude this information is going to be harmful, so I can purge it from my hard drive, burn the papers I used to come up with it, etc.
But I can't wipe my head clean of the information; at best I can resolve never to talk about it to anyone and not to accord it much importance, such that I may eventually forget it. That's not destruction per se; it's closer to lying, to not sharing the information with anyone (even if asked specifically), or to biasing your brain towards transmitting and remembering certain pieces of information (which we do all the time).
However, I don't see anything contentious about this case, nor about any other case of information destruction, as long as it is for the greater utility.
I think in general people don’t advocate for destroying/forgetting information because:
a) It’s hard to do
b) As a general rule of thumb the accumulation of information seems to be a good thing, even if the utility of a specific piece of information is not obvious
But this is more of a heuristic than an exact principle.
I’d agree that the first one is generally pretty separated from common reality, but think it’s a useful thought experiment.
I was originally thinking of this more in terms of “removing useful information” than “removing expected-harmful information”, but good point; the latter could be interesting too.
Well, I think the "removing useful information" bit contradicts utility to begin with.
As in, if you are a utilitarian, useful information == information that helps maximize utility. Thus the trade-off is not possible.
I can think of some contrived examples where the trade-off is possible (e.g. where the information is harmful now but will be useful later), but in that case it's so easy to "hide" information in the modern age, instead of destroying it entirely, that the problem seems too theoretical to me.
But at the end of the day, assuming you reached a contrived enough situation where the information must be destroyed (or where merely hiding it deprives other people of the ability to discover further useful information), I think the utilitarian perspective has nothing fundamental against destroying it. However, no matter how hard I try, I can't think of a very relevant example where this could be the case.
One extreme case would be committing suicide because your secret is that important.
A less extreme case may be accepting that you'll forget information: you're losing value, but the cost of maintaining it wouldn't be worth it. (In this case the information is positive, though.)
There’s some related academic work around this here:
https://www.princeton.edu/~tkelly/papers/epistemicasinstrumental.pdf https://core.ac.uk/download/pdf/33752524.pdf
They don’t specifically focus on utilitarians, but the arguments are still relevant.
Also, this post is relevant: https://www.lesswrong.com/posts/dMzALgLJk4JiPjSBg/epistemic-vs-instrumental-rationality-approximations