Similarly, the purported reductios of consequentialism rely on one of two tricks: they implicitly assume that consequentialists must care only about the immediate consequences of an action, or they implicitly assume that consequentialists must be causal decision theorists.
“TDT + consequentialism” seems like it isn’t a consequentialist theory any more—it’s taking into account things that are not consequences. (“Acausal consequence” seems like an oxymoron, and if not, I would like to know what sort of ‘acausal consequences’ a TDT-consequentialist should consider.) This feels much more like the Kantian categorical imperative, all dressed up with decision theory.
As a sidenote to my previous comment: I do wonder to what extent deontological concepts of “Acting With Honor” and “Just Vengeance” evolved in human societies as an effective approximation of TDT, encouraging both initial cooperation and punishment of defections by making societal members into the sort of beings that would instinctively cooperate with, and punish, those who had accurate enough models of them.
On the other hand, attitudes of vengeance towards non-intelligent beings (beings that can’t model you) are seen as much more… insane. Captain Ahab is perceived as a more insane figure than the Count of Monte Cristo: though both are driven by vengeance, the former seeks it against a whale, not against people.
Though mind you, even against animals vengeance is rather useful, because even animals can model humans to some extent. The wolves in The Jungle Book learned to “seven times never kill Man” after learning that to hurt one man means many other men with guns coming to kill wolves in return.
Using this to support your statement lowered my credence therein.
Upvoted for reminding me that some evidence is so weak that to offer it actually counts as evidence against. :-)
Beware fictional evidence. I suspect that wolves might be smart enough in individual cases to recognize humans are a big nasty threat they don’t want to mess with. But that makes sense in a context without any understanding of vengeance.
Uh, yes, I was being tongue-in-cheek about what poetry-reciting wolves in a fiction book “learned” from human vengeance.
It still qualifies slightly as evidence, in that it tells us how humans model animals modeling humans.
Well, they could EVOLVE that reticence for perfectly good reasons. I’ll dare in this context to suggest that evolution IS intelligence. Have you heard of thought as an act of simulating action and forecasting the results? Is that not what evolution does, only the simulations are real, and the best chess moves “selected”?
A species thereby exhibits meta-intelligence, no?
That’s a waste of a word. Call evolution an optimisation process (which is only a slight stretch). Then you can use the word ‘intelligence’ to refer to what you refer to as ‘meta-intelligence’. Keeping distinct concepts distinct while also acknowledging the relationship tends to be the best policy.
No, it really isn’t and using that model encourages bad predictions about the evolution of a species. Species don’t ‘forecast and select’. Species evolve to extinction with as much enthusiasm as they evolve to new heights of adaptive performance. Saying that evolution ‘learns from the past’ would be slightly less of an error but I wouldn’t even go there.
Hmm, I agree, except for the last part. Blindly trying (which is what genetic mixing and mutating do) is like poorly guided forecasting. (Good simulation engineers or chess players somehow “see” the space of likely moves; bad ones just try a lot.) And the species doesn’t select, but the environment does.
I need to go read “evolve to extinction.”
Thanks
I’m not completely sure what you are trying to say. I agree they could potentially evolve such an attitude if the selection pressure was high enough.
But evolution doesn’t work like a chess player. Evolution does what works in the short term, blindly having the most successful alleles push forward to the next generation. If there were a chess analogy, evolution would be like a massive chess board with millions of players and each player making whatever move looks best at a quick glance, and then there are a few hundred thousand players who just move randomly.
Good point. Easy to imagine a lot of biologically good designs getting left unexpressed because the first move is less optimal.
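For what it’s worth, the short-term blindness is easy to see in a toy model (a minimal sketch, with all parameters invented): an allele that wins every within-generation contest can still drag the whole population under, which is the “evolve to extinction” failure mode mentioned above.

```python
# Toy model of selection's short-sightedness (all parameters invented).
# A "greedy" allele out-reproduces its rival in every single generation,
# yet it degrades the shared environment, so the population can evolve
# itself to extinction. No forecasting anywhere, just local wins.

def step(greedy_frac: float, pop: float) -> tuple[float, float]:
    w_greedy, w_coop = 1.5, 1.0  # within-generation fitness: greedy always wins
    mean_w = greedy_frac * w_greedy + (1 - greedy_frac) * w_coop
    new_frac = greedy_frac * w_greedy / mean_w  # frequency only ever rises
    # Population-level cost: the more greedy individuals, the smaller
    # the next generation (an overgrazed commons).
    new_pop = pop * (1.05 - 0.30 * new_frac)
    return new_frac, new_pop

frac, pop = 0.01, 10_000.0
for gen in range(200):
    frac, pop = step(frac, pop)
    if pop < 1:
        print(f"extinct by generation {gen} (greedy allele at {frac:.2f})")
        break
```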
For an example of acausal consequences: getting a million dollars as a result of one-boxing in Newcomb’s. Or getting a hundred dollars as a result of two-boxing.
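To put numbers on that, here is a minimal sketch (in Python, using the figures from this example; the predictor’s accuracy is left as a free parameter) of why the one-boxer ends up richer whenever the predictor is even slightly better than chance:

```python
# Expected payoffs in Newcomb's problem, with the figures from this
# example: $1,000,000 in the opaque box, $100 for two-boxing.
# `accuracy` is the probability that the predictor guessed your action.

def expected_payoff(action: str, accuracy: float) -> float:
    MILLION, HUNDRED = 1_000_000, 100
    if action == "one-box":
        # The opaque box is full iff the predictor foresaw one-boxing.
        return accuracy * MILLION
    # Two-boxing always collects the $100; the opaque box is full only
    # when the predictor wrongly expected one-boxing.
    return HUNDRED + (1 - accuracy) * MILLION

for p in (0.5, 0.9, 0.99):
    print(p, expected_payoff("one-box", p), expected_payoff("two-box", p))
# For any accuracy above ~0.50005, one-boxing wins in expectation: the
# million is a consequence of your decision, just not a causal one.
```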
I would argue that TDT (or UDT) is actually a more consequentialist theory than CDT. The qualitative difference between consequentialism and deontology is that for consequentialists the most important thing is a good outcome, whereas deontology means following the correct rules, regardless of the outcome. But it’s causal decision theorists, after all, who continue to adhere to their decision ritual that two-boxes, and loses, in the face of all the empirical evidence (well, hypothetical empirical evidence, anyway :p) that it’s the wrong thing to do!
TDT basically takes into consideration the consequences of itself—not just each particular action it endorses, but the consequences of you following a specific logic towards that action, and the consequences of other people knowing that you would follow such a logic.
It’s a consequentialist theory because it seeks to maximize the utility of consequent states of the world. It doesn’t have deontological instructions like “cooperate because it’s the nice thing to do”; it says things like “cooperate if and only if the other guy would be able to predict and punish your non-cooperation, because that leads to an optimal-utility state for you”.
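A minimal sketch of that conditional rule (the payoff numbers are standard but arbitrary, and the `opponent_can_model_me` flag is a stand-in for the genuinely hard part: determining whether your reasoning is actually transparent to the other player):

```python
# Sketch of "cooperate iff the other guy can predict and punish you"
# in a one-shot Prisoner's Dilemma. Payoffs are illustrative only.

PAYOFF = {  # (my_move, their_move) -> my utility
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tdt_flavored_move(opponent_can_model_me: bool) -> str:
    if opponent_can_model_me:
        # My choice and their prediction of it move together, so the
        # live options are (C, C) vs (D, D), and (C, C) pays more.
        return "C" if PAYOFF[("C", "C")] > PAYOFF[("D", "D")] else "D"
    # If their move is independent of my reasoning, defection dominates
    # row by row, exactly as CDT says.
    return "D"

print(tdt_flavored_move(True))   # C
print(tdt_flavored_move(False))  # D
```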
All that having been said, I think some people are misusing TDT when they say people would know your non-cooperation. Omega would know your non-cooperation, but other people you may be able to trick. And TDT orders cooperation only in the cases of those you wouldn’t be able to trick.
But then people you would (otherwise) be able to trick have the incentive to defect, making it harder to trick them, making (D,D) more likely than (C,C), which is bad for you. Having an intention to trick those you can trick can itself be a bad idea (for some categories of trickable opponents that respond to your having this intention).
Yes, it can be a bad idea—I’m just saying TDT doesn’t say it’s always a bad idea.
(DefectBot is sufficient to demonstrate that it’s not always a bad idea to defect. In other cases, it can be much more subtle.)
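A sketch of that point, reusing the same illustrative payoffs: DefectBot ignores whatever it predicts about you, so even perfect mutual modeling doesn’t make cooperation pay; a hypothetical mirror bot that plays whatever it predicts of you is included for contrast.

```python
# Against DefectBot (plays D no matter what it predicts about you), your
# real choice is between (C, D) = 0 and (D, D) = 1, so defecting wins
# even if it models you perfectly. Payoffs are illustrative only.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_move_vs(opponent_policy) -> str:
    """My best move, given how the opponent's move depends on its
    (assumed accurate) prediction of mine."""
    return max("CD", key=lambda me: PAYOFF[(me, opponent_policy(me))])

def defect_bot(predicted_move: str) -> str:
    return "D"  # unconditional defection, prediction ignored

def mirror_bot(predicted_move: str) -> str:
    return predicted_move  # cooperates exactly when it predicts cooperation

print(best_move_vs(defect_bot))  # D: cooperating buys you nothing here
print(best_move_vs(mirror_bot))  # C
```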
TDT can’t reason about such things; it gets its causal graphs by magic, and this reasoning involves details of the construction of those causal graphs (it can still make the right decisions, provided the magic comes through). UDT is closer to the mark, but we don’t have a good picture of how that works. See in particular this thought experiment.