Once you introduce any meaningful uncertainty into a non-Archimedean utility framework, it collapses into an Archimedean one. This is because even a very small difference in the probabilities of some highly positive or negative outcome outweighs a certainty of a lesser outcome that is not Archimedean-comparable. And if the probabilities are exactly aligned, it is more worth your time to do more research so that they will be less aligned, than to act on the basis of a hierarchically less important outcome.
For example, if we cared infinitely more about not dying in a car crash than about reaching our destination, we would never drive, because there is a small but positive probability of crashing (and the same goes for any degree of horribleness you want to add to the crash, up to and including torture—it seems reasonable to suppose that leaving your house at all very slightly increases your probability of being tortured for 50 years).
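Here is a minimal sketch of how that collapse plays out, with made-up numbers (the two-tier representation, the 1e-8 crash probability, and the helper function are my own assumptions, not anything from the post): represent a two-tier non-Archimedean utility as a (catastrophe tier, mundane tier) pair and compare expected values lexicographically; any nonzero difference in the top tier decides the choice.

```python
# Hypothetical illustration: two-tier lexicographic expected utility.
# Tier 0 = "infinitely important" outcomes (e.g. dying in a crash),
# tier 1 = ordinary outcomes (e.g. reaching the destination).

def expected_tiers(outcomes):
    """outcomes: list of (probability, (tier0_utility, tier1_utility))."""
    e0 = sum(p * u[0] for p, u in outcomes)
    e1 = sum(p * u[1] for p, u in outcomes)
    return (e0, e1)

# Assumed toy numbers: driving has a 1e-8 chance of a fatal crash.
drive = expected_tiers([(1e-8, (-1.0, 0.0)), (1 - 1e-8, (0.0, 1.0))])
stay  = expected_tiers([(1.0, (0.0, 0.0))])

# Lexicographic comparison: Python compares tuples left to right, so the
# -1e-8 in the top tier decides the choice no matter how much tier-1
# utility driving offers.
print(max([("drive", drive), ("stay", stay)], key=lambda x: x[1]))
# -> ('stay', (0.0, 0.0))
```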
For the record, EY’s position (and mine) is that torture is obviously preferable. It’s true that there will be a boundary of uncertainty regardless of which answer you give, but the two types of boundaries differ radically in how plausible they are:
if SPECKS is preferable to TORTURE, then for some N and some level of torture X, you must prefer that 10N people be tortured at level X rather than N people at a slightly higher level X’. This is unreasonable, since X’ is only slightly higher than X, while you are forcing ten times as many people to suffer the torture.
On the other hand, if TORTURE is preferable to SPECKS, then there must exist some number of specks N such that N-1 specks is preferable to torture, but torture is preferable to N+1 specks. But this is not very counterintuitive, since the fact that torture costs around N specks means that N-1 specks is not much better than torture, and torture is not much better than N+1 specks. So knowing exactly where the boundary is isn’t necessary to get approximately correct answers.
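To make the boundary concrete with purely illustrative numbers (the per-speck and torture disutilities below are invented for this sketch, not taken from the thread):

```python
# Toy Archimedean numbers: one speck = 1 unit of disutility, torture = 1e9.
speck, torture = 1.0, 1e9
N = int(torture / speck)          # crossover at ~1e9 specks

print((N - 1) * speck < torture)  # True: N-1 specks preferable to torture
print(torture < (N + 1) * speck)  # True: torture preferable to N+1 specks
# Either way the margin is a single speck's worth of disutility, so an
# imprecise estimate of N still gives approximately correct answers.
```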
Well, small positive probabilities need not be finite if we have a non-Archimedean utility framework; they could be infinitesimal.
An infinitesimal times an infinite number might yield a finite number, which would be on equal footing with familiar expected values and would trade sensibly.
And it might help that the infinitesimals would mostly compare against each other. You compare the danger of driving against the danger of being in a kitchen. If you find that driving is twice as dangerous, that means driving needs to accomplish the task in half the time the kitchen would to be worth choosing, rather than you categorically always doing things in the kitchen.
I guess the relevance of waste might be important. If you could choose a zero chance of death, you would take that. But given that you are unable to choose that, you choose among the options that minimize the chance of death. Sometimes further research is not possible.
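A sketch of the arithmetic being gestured at here, under my own formalization (the `EpsNumber` representation and the specific coefficients are assumptions): write each quantity as a coefficient times a power of an unspecified infinitesimal eps, so an infinitesimal probability times an infinitely bad outcome lands back on the finite level, and two infinitesimal risks compare by their coefficients.

```python
# order > 0 means infinitesimal, order < 0 means infinite, order == 0 finite.
from dataclasses import dataclass

@dataclass
class EpsNumber:
    coeff: float
    order: int  # exponent of eps

    def __mul__(self, other):
        # multiplying powers of eps adds the exponents
        return EpsNumber(self.coeff * other.coeff, self.order + other.order)

# An infinitesimal crash probability times an infinitely bad crash:
p_crash = EpsNumber(2.0, 1)     # 2*eps, infinitesimal
u_crash = EpsNumber(-5.0, -1)   # -5/eps, infinitely bad

print(p_crash * u_crash)        # EpsNumber(coeff=-10.0, order=0): a finite
                                # expected disutility that can trade against
                                # ordinary considerations.

# Two infinitesimal risks compare against each other by their coefficients:
p_kitchen = EpsNumber(1.0, 1)
print(p_crash.coeff / p_kitchen.coeff)  # 2.0: driving "twice as dangerous"
```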
Regarding your comments on SPECKS being preferable to TORTURE, I think that misses the argument they made. The reason you have to prefer 10N at X to N at X’ at some point is that a speck counts as a level of torture. That’s exactly what the OP was arguing against.
The OP didn’t give any argument for SPECKS>TORTURE, they said it was “not the point of the post”. I agree my argument is phrased loosely, and that it’s reasonable to say that a speck isn’t a form of torture. So replace “torture” with “pain or annoyance of some kind”. It’s not the case that people will prefer arbitrary non-torture pain (e.g. getting in a car crash every day for 50 years) to a small amount of torture (e.g. 10 seconds), so the argument still holds.
Once you introduce any meaningful uncertainty into a non-Archimedean utility framework, it collapses into an Archimedean one. This is because even a very small difference in the probabilities of some highly positive or negative outcome outweighs a certainty of a lesser outcome that is not Archimedean-comparable. And if the probabilities are exactly aligned, it is more worth your time to do more research so that they will be less aligned, than to act on the basis of a hierarchically less important outcome.
I don’t think this is true. As an example, when I wake up in the morning I make the decision between granola and cereal for breakfast. Universe Destruction is undoubtedly high up on the severity scale (certainly higher than crunch-satisfaction utility), so your argument suggests that I should spend time researching which choice is more likely to impact it. However, the difference in expected impact between these options is so resistant to detection that, despite the fact that I literally act on this choice every single day of my life, it would never be worth the time to research breakfast foods instead of other choices which have stronger (i.e. measurable by the human mind) impacts on Universe Destruction.
This is not a bug, but an incredible feature of the non-Archimedean framework. It allows you to evaluate choices only on the highest severity level at which they actually occur, which is in fact how humans seem to make their decisions already, to some approximation.
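A rough sketch of that decision rule under my own framing (the tier ordering, the `tolerance` parameter, and the breakfast numbers are all assumptions): walk down the severity classes and decide at the first one where the options are distinguishable at all.

```python
def pick(options, tolerance=0.0):
    """options: dict name -> tuple of expected utilities, ordered from the
    most severe class to the least severe."""
    n_tiers = len(next(iter(options.values())))
    for tier in range(n_tiers):
        vals = {name: u[tier] for name, u in options.items()}
        spread = max(vals.values()) - min(vals.values())
        if spread > tolerance:                 # a detectable difference
            return max(vals, key=vals.get), tier
    return next(iter(options)), None           # indistinguishable everywhere

# Hypothetical breakfast numbers: no detectable difference at the
# universe-destruction tier, a real difference at the crunch tier.
options = {"granola": (0.0, 0.7), "cereal": (0.0, 0.4)}
print(pick(options))  # ('granola', 1): decided at the low-severity tier
```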
As for the car example, your analysis seems sound (assuming there’s no positive expected utility at or above the severity level of car crash injuries to counterbalance it, which is not necessarily the case: e.g. driving somewhere increases the chance that you meet more people and consequently find the love(s) of your life, which may well be worth a broken limb or two. Alternatively, if you are driving to a workshop on AI risk, then you may believe yourself to be reducing the expected disutility from unaligned AI, which appears to be incomparable with a car crash). But, forgiving my digression into arguing the hypothetical: the claim that not driving is (often) preferable to driving feels much more reasonable to me than the claim that some number of dust specks is worse than torture.
if SPECKS is preferable to TORTURE, then for some N and some level of torture X, you must prefer that 10N people be tortured at level X rather than N people at a slightly higher level X’. This is unreasonable, since X’ is only slightly higher than X, while you are forcing ten times as many people to suffer the torture
I’m not sure I understand this properly. To clarify, I don’t believe that any non-torture suffering is incomparable with torture, merely that dust specks are. I think “slightly higher level” is potentially misleading here—if it’s in a different severity class, then by definition there is nothing slight about it. Depending on the order type, there may not even be a level immediately above torture X, and it may be that there are infinitely many severity classes sitting between X and any distinct X’ (think: R or Q).
If you’re choosing between granola and cereal, then it might be that the granola manufacturer is owned by a multinational conglomerate that also has vested interests in the oil industry, and therefore, buying granola contributes to climate change that has consequences of much larger severity than your usual considerations for granola vs. cereal. Of course there might also be arguments in favor of cereal. But, if you take your non-Archimedean preferences seriously, it is absolutely worth the time to write down all of the arguments in favor of both options, and then choose the side which has the advantage in the largest severity class, however microscopic that advantage is.
I agree that small decisions have high-severity impacts, my point was that it is only worth the time to evaluate that impact if there aren’t other decisions I could spend that time making which have much greater impact and for which my decision time will be more effectively allocated. This is a comment about how using the non-Archimedean framework goes in practice. Certainly, if we had infinite compute and time to make every decision, we should focus on the most severe impacts of those decisions, but that is not the world we are in (and if we were, it would change quite a lot of other things too, effectively eliminating empirical uncertainty and the need to take expectations at all).
Alright, so if you already know any fact which connects cereal/granola to high severity consequences (however slim the connection is) then you should choose based on those facts only.
My intuition is that you will never have a coherent theory of agency with non-Archimedean utility functions. I will change my mind when I see something that looks like the beginning of such a theory (e.g. some reasonable analysis of reinforcement learning for non-Archimedean utility).
If we had infinite compute, that would not eliminate empirical uncertainty. There are many things you cannot compute because you just don’t have enough information. This is why in learning theory sample complexity is distinct from computational complexity, and applies to algorithms with unlimited computational resources. So, you would definitely still need to take expectations.
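A small illustration of the sample-complexity point, using nothing beyond standard statistics (the coin-flip setup and the numbers are my own example): the uncertainty about a coin's bias depends on how many flips you have seen, not on how much compute you have.

```python
import math, random

random.seed(0)
true_p = 0.3
for n in (10, 1_000, 100_000):
    flips = [random.random() < true_p for _ in range(n)]
    estimate = sum(flips) / n
    std_err = math.sqrt(estimate * (1 - estimate) / n)
    print(f"n={n:>6}  estimate={estimate:.3f}  +/- {std_err:.3f}")
# Unlimited compute could not shrink these error bars; only more samples can,
# so expectations over the remaining uncertainty are still required.
```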
Why does this pose an issue for reinforcement learning? Forgive my ignorance, I do not have a background in the subject. I don’t believe that I have information which distinguishes cereal/granola in terms of which has stronger highest-severity consequences (given the smallness of those numbers and my inability to conceive of them, I strongly suspect anything I could come up with would exclusively represent epistemic and not aleatoric uncertainty). But even if I accept that I do, the theory would then tell me, correctly, that I should act based on that level. If that seems wrong, it’s evidence that we’ve incorrectly identified an implicit severity class in our imagination of the hypothetical, not that severity classes are incoherent (i.e. if I really have reason to believe that eating cereal even slightly increases the chance of Universe Destruction compared to eating granola, shouldn’t that make my decision for me?)
I would argue that many actions are sufficiently isolated that, while they’ll certainly have high-severity ripple effects, we have no reason to believe that in expectation the high-severity consequences are worse than they would have been for a different action.
If the non-Archimedean framework really does “collapse” to an Archimedean one in practice, that’s fine with me. It exists to respond to a theoretical question about qualitatively different forms of utility, without biting a terribly strange bullet. Collapsing the utility function would mean assigning weight 0 to all but the maximal severity level, which seems very bad in that we certainly prefer no dust specks in our eyes to dust specks (ceteris paribus), and this should be accurately reflected in our evaluation of world states, even if the ramped function does lead to the same action preferences in many/most real-life scenarios for a sufficiently discerning agent (which maybe AI will be, but I know I am not).
If we had infinite compute, that would not eliminate empirical uncertainty. There are many things you cannot compute because you just don’t have enough information. This is why in learning theory sample complexity is distinct from computational complexity, and applies to algorithms with unlimited computational resources. So, you would definitely still need to take expectations.
Thanks for letting me know about this! Another thing I haven’t studied.
Reinforcement learning with rewards or punishments that can have an infinite magnitude would seem to make intuitive sense to me. The buck is then passed to deciding whether it’s ever reasonable to give a sample a post-finite reward. Say that there are pictures labeled as either “woman”, “girl”, “boy” or “man”, and labeling a boy a man or a man a boy gets you a Small reward, while labeling a man a man gets you a Large reward, where Large is infinite with respect to Small. With a finite version, some “boy” vs “girl” weight could overcome a “man” vs “girl” weight, which might be undesirable behaviour (if you strictly care about gender discrimination with no tradeoff for age discrimination).
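A sketch of that labeling example under an encoding I am assuming (not anything specified above): score each prediction as a (gender correct, age correct) indicator pair and compare totals lexicographically, instead of collapsing them into one finite weighted sum.

```python
def lexi_score(preds):
    """Sum each component separately; the tuple orders gender above age."""
    return tuple(sum(col) for col in zip(*preds))

# Policy A: one gender mistake, everything else right on both counts.
policy_a = [(0, 1)] + [(1, 1)] * 99
# Policy B: gender always right, age always wrong.
policy_b = [(1, 0)] * 100

print(lexi_score(policy_a))                          # (99, 100)
print(lexi_score(policy_b))                          # (100, 0)
print(lexi_score(policy_b) > lexi_score(policy_a))   # True: the single gender
# error dominates, as desired when gender matters "infinitely" more than age.

# With a single finite weight instead (say Large = 1000 for gender, Small = 1
# for age), more than 1000 age-level Small rewards could overcome one missed
# Large reward, which is the behaviour being objected to.
```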