The advice of this post seems to be advice on the margin (i.e., assuming everything else is held constant), which seems reasonable given that this one post won’t change collective behavior by much.
So the question isn’t “what happens if everyone stockpiles food?” but rather, “do we expect enough people to stockpile food that stockpiling more food will lead to bad consequences?”. I don’t know the answer to that one.
FYI, ‘advise’ is the verb, and ‘advice’ is the noun.
(Also, advice is a mass noun, so you’d say “piece of advice” or just “advice” rather than “an advice”.)
Thanks; fixed & will try to remember.
The first question for me is: are people starving in Wuhan due to the outbreak?
If not, then stockpiling food seems a poor choice at this time.
Some of the other advice might make sense, meds for instance, but again that hangs on the real risk to production, delivery, and retail options.
I think the other thing to consider is where one lives. This group seems to have a fairly wide geographic distribution, so local conditions should inform the decision.
The answer is no, as of now, though the food situation is uncomfortable (my wife has relatives there she’s in contact with). Trucks come to apartment complexes and people pick up their deliveries.
I’m not sure the analogy translates well to the US, though. For better or worse, US people are less organized. Also, a large percentage of the population lives in suburbs where such deliveries are not feasible.
OTOH, we have an excellent general delivery system in Amazon, UPS, etc.
I’m slightly worried.
I’m suburban, and at least one of the local grocers offers delivery; some of the others offer online ordering with pickup from a locker, or they will bring the order out to your car.
I think the other thing is that in a suburban setting you already mitigate some of the risk, because you simply don’t get as close to other people as you would with urban living; I don’t get on the same elevator as everyone else on the floor or in the building generally. (Though the condo residential-retail-commercial model is starting to appear.)
I think if you live in any of the big US cities and this starts spreading, you need to think a bit more about preparing for quarantine and dealing with things generally. Standard single-family-home suburban USA and rural USA are going to see much less impact.
Only if we use causal decision theory. If we use some variant of UDT, the same line of reasoning is experienced by many other minds, and we should reason as if we have causal power over all these minds. If we decline to use UDT here, we fail the practical test of UDT. In other words, we don’t cooperate in a real-world prisoner’s dilemma, and this would undermine any future hope for the usefulness of alternative decision theories.
I keep saying that I don’t know how to apply UDT to humans, especially to human cooperation. The “hope” for UDT was originally to solve anthropic reasoning and then later as a theoretical foundation for a safe AI decision procedure. Despite my repeated disclaimers, people seem really tempted to use it in real life, in very hand-wavy ways, which I feel obligated to disendorse.
How would UDT solve anthropic reasoning? Any links?
You might find Stuart Armstrong’s paper “Anthropic Decision Theory for Self-Locating Beliefs” helpful.
Thanks for the reference.
I feel about partial correlation the way I used to feel about the categorical imperative in general; I don’t think our formalisations discuss it well at all. However, I know that the CDT way is wrong, and I need a name for whatever the better way is supposed to be. What would you recommend? “Newcomblike reasoning”?
As I understand UDT, this isn’t right. UDT 1.1 chooses an input-output mapping that maximizes expected utility. Even assuming that all people who read LW run UDT 1.1, this choice still only determines the input-output behavior of a couple of programs (humans). The outputs of programs that don’t depend on our outputs because those programs aren’t running UDT are held constant. Therefore, if you formalized this problem, UDT’s output could be “stockpile food” even if [every human doing that] would lead to a disaster.
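To make that concrete, here is a minimal sketch in Python of the selection step being described, with entirely made-up numbers (the group sizes, background stockpiling rate, and shortage threshold are all hypothetical): the policy is chosen jointly only for the agents whose outputs actually depend on this decision procedure, everyone else’s behavior enters as a fixed fact about the world, and the jointly optimal choice can still come out as “stockpile” when the correlated group is small.

```python
# Toy sketch, not UDT proper: pick the policy for the correlated agents that
# maximizes expected utility, holding all non-UDT agents' behavior constant.

N_UDT = 100                  # agents whose output this choice actually determines (hypothetical)
N_OTHER = 1_000_000          # agents whose behavior is held fixed (hypothetical)
OTHER_STOCKPILE_RATE = 0.01  # assumed background rate, not an estimate of anything real
SHORTAGE_THRESHOLD = 0.20    # assumed fraction of stockpilers at which shortages begin

def utility(policy: str) -> float:
    """Utility of one correlated agent if *all* correlated agents follow `policy`."""
    stockpilers = N_OTHER * OTHER_STOCKPILE_RATE + (N_UDT if policy == "stockpile" else 0)
    shortage = stockpilers / (N_UDT + N_OTHER) > SHORTAGE_THRESHOLD
    if policy == "stockpile":
        return -10.0 if shortage else 1.0   # prepared, unless the group itself caused a shortage
    return -5.0 if shortage else 0.0        # unprepared either way

best_policy = max(["stockpile", "do nothing"], key=utility)
print(best_policy)  # "stockpile": the correlated group is too small to move the market
```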
I think “pretend as if everyone runs UDT” was neither intended by Wei Dai nor is it a good idea.
Put differently, UDT agents don’t cooperate in a one-shot prisoner’s dilemma if they play against CDT agents.
Also: if a couple of people stockpile food, but most people don’t, that seems like a preferable outcome to everyone doing nothing (provided stockpiling food is worth doing). It means some get to prepare, and the food market isn’t significantly affected. So this particular situation actually doesn’t seem to be isomorphic to the prisoner’s dilemma (if modeled via game theory).
I agree with avturchin; it’s an appropriate thought to be having. UDT-like reasoning is actually fairly common in populations that have not been tainted with CDT rationality (i.e., normal people), though it is usually written off by CDT rationalists as moralising or collectivism. This line of thinking doesn’t require exact equivalence; the fact that there are many other people telling many other communities to prep is enough that all of those communities should consider the aggregate effects of that reasoning process. They are all capable of saying, “What if everyone else did this as well? Wouldn’t it be bad? Should we really do it?”
This doesn’t seem very similar to actual UDT reasoning though. It seems like a perfectly consistent outcome if “normal people” reason like this and conclude that they should refrain from hoarding food, and UDT agents do hoard food because they calculate a low logical correlation between themselves and “normal people”.
How do you calculate logical correlation? Do we know anything about how this would work under UDT? Does UDT not really discuss it, or is it bad at it?
I think that cooperating only with those who are provably UDT agents would make the whole UDT idea weaker. However, in our case people don’t need to know the word “UDT” to understand that by buying food they are limiting others’ chances to buy it.
I don’t think there is a UDT idea that prescribes cooperating with non-UDT agents. UDT is sufficiently formalized that we know what happens if a UDT agent plays a prisoner’s dilemma with a CDT agent and both parties know each other’s algorithm/code: they both defect.
If you want to cooperate out of altruism, I think the solution is to model the game differently. The payoffs that go into the game-theory model should be whatever your utility function says, not just your own well-being. So if you value the other person’s well-being as much as yours, then you don’t have a prisoner’s dilemma, because cooperate/defect is a better outcome for you than defect/defect.
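A rough worked example of that reframing, using the usual textbook prisoner’s-dilemma payoffs (T=5, R=3, P=1, S=0; the numbers are illustrative, not from this thread): with selfish utilities defection dominates, but with a fully altruistic utility function even “I cooperate, they defect” beats mutual defection, so the game stops being a prisoner’s dilemma.

```python
# One-shot prisoner's dilemma payoffs as (my payoff, their payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def selfish(mine: int, theirs: int) -> int:
    return mine                # my well-being only

def altruistic(mine: int, theirs: int) -> int:
    return mine + theirs       # value the other person's well-being as much as my own

for name, value in [("selfish", selfish), ("altruistic", altruistic)]:
    print(name)
    for (me, them), (mine, theirs) in PAYOFFS.items():
        print(f"  I play {me}, they play {them}: utility {value(mine, theirs)}")

# Selfish:    D strictly dominates C (5 > 3 and 1 > 0), so two selfish players end at D/D.
# Altruistic: C/C -> 6, C/D -> 5, D/C -> 5, D/D -> 2; cooperate/defect beats defect/defect.
```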
But they’re only doing that if there will, in fact, be a supply shortage. That was my initial point – it depends on how many other people will stockpile food.
What worries me here is that while playing, say, a prisoner’s dilemma, an agent needs to perform an act of communication with the other prisoner to learn her decision theory, which dissolves the whole problem: if we can communicate, we can have some coordination strategy. In a one-shot prisoner’s dilemma we don’t know whether the other side is a UDT or CDT agent, and the other side also doesn’t know this about us. So both of us are using similar lines of reasoning, trying to guess whether the other agent is CDT or UDT. This shared reasoning could itself be subject to UDT at the meta-level, since we would both win more if we assume the other agent is a UDT agent.
Luckily, the world we live in is not the least convenient possible one: the relevant mind-similarity is not the planning around hoarding food; it is planning based on UDT-type concerns. E.g., you should reason as if you have causal power over all minds that think “I’ll use a mixed strategy, and hoard food IFF my RNG comes up below .05” (substituting whatever fraction would not cause a significant market disruption).
Since these minds comprise an insignificant portion of consumers, UDT shrugs and says “go ahead and hoard, I guess.”
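A minimal sketch of that mixed strategy, using the 0.05 from the comment (the threshold is just a stand-in for whatever fraction would not disrupt the market): each correlated mind hoards iff its own RNG comes up below the threshold, so even if the whole correlated group adopts the policy, only about 5% of it actually hoards.

```python
import random

HOARD_PROBABILITY = 0.05  # the fraction from the comment; a stand-in, not a real estimate

def should_hoard(rng: random.Random) -> bool:
    """The policy every correlated mind follows: hoard iff its RNG comes up below the threshold."""
    return rng.random() < HOARD_PROBABILITY

# If many correlated minds all adopt this policy, roughly 5% of them end up hoarding,
# so the aggregate demand shock stays bounded even though the whole group "agreed" to it.
correlated_minds = 10_000
hoarders = sum(should_hoard(random.Random(seed)) for seed in range(correlated_minds))
print(f"{hoarders} of {correlated_minds} hoard (~{hoarders / correlated_minds:.1%})")
```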
That may be true, but it is not a product of the general public not knowing UDT. A large number of people don’t think or act in a CDT way either, and a lot of people who don’t care for decision theory follow the categorical imperative.