…are limited to using a decision theory that survived past social/biological Parfitian filters.
What really frustrates me about your article is that you never specify a decision theory, list of decision theories, or category of decision theories that would be likely to survive Parfitian filters.
I agree with User:Perplexed that one obvious candidate for such a decision theory is the one we seem to actually have: a decision theory that incorporates values like honor, reciprocity, and filial care into its basic utility function. Yet you repeatedly insist that this is not what is actually happening… why? I do not understand.
What really frustrates me about your article is that you never specify a decision theory, list of decision theories, or category of decision theories that would be likely to survive Parfitian filters.
I thought I did: decision theories that give weight to SAMELs (subjunctive acausal means-end links).
I agree with User:Perplexed that one obvious candidate for such a decision theory is the one we seem to actually have: a decision theory that incorporates values like honor, reciprocity, and filial care into its basic utility function. Yet you repeatedly insist that this is not what is actually happening… why? I do not understand.
For the same reason one wouldn’t posit “liking Omega” as a good explanation for why someone would pay Omega in the Parfit’s Hitchhiker problem.
I thought I did: decision theories that give weight to SAMELs.
Sure, but this is borderline tautological—by definition, Parfitian filters will tend to filter out decision theories that assign zero weight to SAMELs, and a SAMEL is the sort of consideration that a decision theory must incorporate in order to survive Parfitian filters. You deserve some credit for pointing out that assigning non-zero weight to SAMELs involves non-consequentialist reasoning, but I would still like to know what kind of reasoning you have in mind. “Non-zero” is a very, very broad category.
For the same reason one wouldn’t posit “liking Omega” as a good explanation for why someone would pay Omega in the Parfit’s Hitchhiker problem.
Right, but as Perplexed pointed out, humans regularly encounter other humans and more or less never encounter penny-demanding, paradox-invoking superpowers. I would predict (and I suspect Perplexed would predict) that if we had evolved alongside Omegas, we would have developed a capacity to like Omegas in the same way that we have developed a capacity to like other humans.
Sure, but this is borderline tautological … You deserve some credit for pointing out that assigning non-zero weight to SAMELs involves non-consequentialist reasoning, but I would still like to know what kind of reasoning you have in mind.
There wouldn’t be much point in further constraining the set, considering that it’s only the subjunctive action that matters, not the reasoning that leads up to it. As I said on my blog, it doesn’t matter whether you would decide to pay because you:
feel honor-bound to do so;
feel so grateful to Omega that you think it deserves what it wanted from you;
believe you would be punished with eternal hellfire if you didn’t, and dislike hellfire;
like to transfer money to Omega-like beings, just for the heck of it;
or have any other reason to do so.
So if I’m going to list all the theories that win on PH-like problems, it’s going to be a long list, as it includes (per the Drescher quote) everyone who behaves as if they recognized the SAMEL, including people who simply feel “grateful”.
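To make this concrete, here is a minimal sketch (the function and agent names are my own illustrative inventions, not anything from the article): the predictor inspects only the action a decision procedure outputs, so agents with entirely different internal reasons for paying are indistinguishable to it.

```python
# Toy model: the predictor cares only about what a decision procedure
# outputs, not which consideration drove it. All names are illustrative.

def pays_from_honor():
    return "pay"        # feels honor-bound to pay

def pays_from_gratitude():
    return "pay"        # thinks Omega deserves what it asked for

def pays_from_hellfire_fear():
    return "pay"        # dislikes eternal hellfire

def refuses():
    return "don't pay"  # looks only at causal consequences in town

def omega_gives_ride(decision_procedure):
    # Omega's prediction tracks only the subjunctive action.
    return decision_procedure() == "pay"

for agent in (pays_from_honor, pays_from_gratitude,
              pays_from_hellfire_fear, refuses):
    outcome = "rescued" if omega_gives_ride(agent) else "left in the desert"
    print(agent.__name__, "->", outcome)
```

The first three agents differ in everything except their output, and that output is the only thing the filter ever sees.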
To answer the question of “What did I say that’s non-tautological?”, it’s that a decision theory that is optimal in a self-interested sense will not merely look at the future consequences (“future” is not necessarily a redundant qualifier there), but will weight the acausal consequences on par with them, bypassing the task of having to single out each intuition and elevate it to a terminal value.
Edit: And, to show how this acausal weighting coincides with what we call morality, which explains why we have the category in the first place.
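As a toy worked example of that weighting (the payoffs are arbitrary assumptions, not from the article), compare an evaluation that counts only the causal consequences at the moment of payment with one that weights the acausal consequence, being the sort of agent who gets rescued at all, on par with them:

```python
# Arbitrary illustrative payoffs for Parfit's Hitchhiker.
VALUE_OF_RESCUE = 1000   # being driven out of the desert
COST_OF_PAYMENT = 100    # what the driver asks for in town

def causal_only_utility(pay: bool) -> int:
    # Evaluated in town, after the rescue: paying is a pure causal loss.
    return -COST_OF_PAYMENT if pay else 0

def acausal_weighted_utility(pay: bool) -> int:
    # The acausal consequence, weighted on par with the causal cost:
    # the predictor's decision to rescue tracks your disposition to pay.
    rescued = pay
    rescue_value = VALUE_OF_RESCUE if rescued else 0
    payment_cost = COST_OF_PAYMENT if pay else 0
    return rescue_value - payment_cost

for pay in (True, False):
    print(pay, causal_only_utility(pay), acausal_weighted_utility(pay))
```

The causal-only evaluation prefers refusing (0 beats -100); once the acausal link is weighted alongside the causal cost, paying comes out ahead (900 beats 0) without any particular intuition having to be promoted to a terminal value.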
Right, but as Perplexed pointed out, humans regularly encounter other humans and more or less never encounter penny-demanding, paradox-invoking superpowers. I would predict (and I suspect Perplexed would predict) that if we had evolved alongside Omegas, we would have developed a capacity to like Omegas in the same way that we have developed a capacity to like other humans.
And as I said to Perplexed, natural selection was our Omega.
And as I said to Perplexed, natural selection was our Omega.
Did you? I’m sorry I missed it. Could you explain it?
I can see how NS might be thought of as a powerful psychic capable of discerning our true natures. And I can see, maybe, how NS cannot itself easily be modeled as a rational decision maker making decisions to maximize its own utility. Hence we must treat it as a fairly arbitrary agent with a known decision algorithm. Modeling NS as a variant of Omega is something I had never thought of doing before. Is there anything already written down justifying this viewpoint?
This was the point I made in the second section of the article.
I read the article again, but didn’t see the point being made clearly at all.
Nevertheless, the point has been made right here, and I think it is an important point. I would urge anyone promoting decision theories of the UDT/TDT family to research the theory of kin selection in biological evolution—particularly the justification of “Hamilton’s rule”. Also, the difference between the biological ESS version of game theory and the usual “rational agent” approach.
I think that it should be possible to cleanly merge these Omega-inspired ideas into standard utility maximization theory by using a theoretical construct something like Hamilton’s “inclusive fitness”. “Inclusive utility”. I like the sound of that.
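For what it’s worth, a first pass at that construct might simply transplant Hamilton’s rule (an altruistic act is favored when rB > C, with r the coefficient of relatedness); the inclusive_utility function below is my own speculative sketch, not an established definition:

```python
# Hamilton's rule: an altruistic act is favored by selection when
# r * B > C, where r is relatedness, B the benefit to the recipient,
# and C the cost to the actor.

def hamilton_favored(r: float, benefit: float, cost: float) -> bool:
    return r * benefit > cost

# "Inclusive utility" by analogy with inclusive fitness: the agent's
# own payoff plus other agents' payoffs, each discounted by a weight
# playing the role of r.
def inclusive_utility(own_payoff: float, others: list) -> float:
    # others: (weight, payoff) pairs
    return own_payoff + sum(w * p for w, p in others)

# Full sibling (r = 0.5): giving up 1 unit so a sibling gains 3
# is favored, and shows up as a positive inclusive utility.
print(hamilton_favored(0.5, 3.0, 1.0))        # True
print(inclusive_utility(-1.0, [(0.5, 3.0)]))  # 0.5
```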
I read the article again, but didn’t see the point being made clearly at all.
I’m referring to the point I made here:
Sustainable self-replication as a Parfitian filter
Though evolutionary psychology has its share of pitfalls, one question should have an uncontroversial solution: “Why do parents care for their children, usually at great cost to themselves?” The answer is that their desires are largely set by evolutionary processes, in which a “blueprint” is slightly modified over time, and the more effective self-replicating blueprint-pieces dominate the construction of living things. Parents that did not have sufficient “built-in desire” to care for their children would be weeded out; what’s left is (genes that construct) minds that do have such a desire.
This process can be viewed as a Parfitian filter: regardless of how much parents might favor their own survival and satisfaction, they could not get to that point unless they were “attached” to a decision theory that outputs actions sufficiently more favorable toward one’s children than one’s self.
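A toy simulation of this filter might look as follows (the threshold, mutation size, and population cap are all arbitrary illustrative choices):

```python
import random

# Each lineage's decision theory is summarized here by one number: the
# weight it gives to caring for its children. A lineage persists only
# if that weight clears a survival threshold; children inherit the
# weight with a little mutation.

THRESHOLD = 0.3

def next_generation(population):
    survivors = [w for w in population if w > THRESHOLD]
    children = [min(1.0, max(0.0, w + random.gauss(0, 0.05)))
                for w in survivors for _ in range(2)]
    return children[:1000]

population = [random.random() for _ in range(1000)]
for _ in range(10):
    population = next_generation(population)

# Whatever decision theories remain are, by construction, ones that
# weighted their children's welfare heavily enough; how much each
# lineage favored its own survival never entered the filter.
print(sum(population) / len(population))
```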
Do you think that did not make clear the similarity between Omega and natural selection?
Do you think that did not make clear the similarity between Omega and natural selection?
No, it did not. I see it now, but I did not see it at first. I think I understand why it was initially obvious to you but not to me. It all goes back to a famous 1964 paper in evolutionary theory by William Hamilton: his theory of kin selection.
Since Darwin, it has been taken as axiomatic that parents will care for children. Of course they do, says the Darwinian: children are the only thing that matters. All organisms are mortal; their only hope for genetic immortality is by way of descendants.
The only reason the rabbit runs away from the fox is so it can have more children sometime in the near future. So, as a Darwinian, I saw your attempt to justify parental care using Omega as just weird. We don’t need to explain that. It is just axiomatic.
Then along came Hamilton with the idea that taking care of descendants (children and grandchildren) is not the whole story. Organisms are also selected to take care of siblings, and cousins and nephews and nieces. That insight definitely was not part of standard received Darwinism. But Hamilton had the math to prove it. And, as Trivers and others pointed out, even the traditional activities of taking care of direct descendants should probably be treated as just one simple case of Hamilton’s more general theory.
Ok, that is the background. I hope it is now clear what I mean when I say that the reason I did not see parental care as an example of a “Parfitian filter” is exactly like the reason traditional Darwinists did not at first see parental care as just one more example supporting Hamilton’s theory: they didn’t get the point because they already understood parental care without having to consider the new idea.
Okay, thanks for explaining that. I didn’t intend for that explanation of parental behavior to be novel (I even said it was uncontroversial), but rather, to show it as a realistic example of a Parfitian filter, which motivates the application to morality. In any case, I added a note explicitly showing the parallel between Omega and natural selection.
For the same reason one wouldn’t posit “liking Omega” as a good explanation for why someone would pay Omega in the Parfit’s Hitchhiker problem.
Could you expand on this? I’m pretty sure that “liking the driver” was not part of my “solution”.
I suppose my “honor module” could be called “irrational”… but it is something that the hitchhiker is endowed with and cannot control, any more than he can control his sex drive. And it is evolutionarily a useful thing to have. Or rather, a useful thing to have people believe you have. And people will tend to believe that, even total strangers, if natural selection has made it an observable feature of human nature.