So, if the government is forcing you to pay taxes it is infringing on your agency and is therefore evil?
The way it is mostly done today, probably yes. However, taxing use of natural resources or land ownership / stewardship could still be done morally. This is not an argument for anarchy.
So what? You make decisions in conditions of uncertainty. Use Bayesian expected utility.
It’s just something that bothers me about utilitarianism, not something I consider indefensible.
Pushing the fat man to his death is the second part of the trolley problem, where you need to do it to save the five.
So, is being neutral better, worse or incomparable with being good & bad?
That’s an open question. I just wanted to point out that in this case there would be a cognitive dissonance between one part of the brain telling us to defect while the other tells us to cooperate, and my argument is that we should be aware of this cognitive dissonance to make a grounded moral decision.
OK, so you value your own innocence more than you value the lives of other people. But to what extent? What if it’s 50 instead of 5? 5000? 5 million?
You’re trying to push my concept of ethics back into the utilitarian frame while I was trying to free it from that. Of course there is a point where I value life more than my innocence, and my brain would just act and rationalize it with the ‘At least I feel guilty’ self-delusion. But that is exactly my point: even then it would still be morally wrong to kill one person. My more realistic version of the interaction game does not account for that kind of asymmetric payoff, and it might turn out that defecting against the defector in this situation is no longer the best strategy. I personally would not defect even against someone killing the one to save the five, beyond pointing out the possible immorality and refusing to cooperate, staying neutral.
...taxing use of natural resources or land ownership / stewardship could still be done morally.
Why? How is that not a violation of agency?
Also, what about children? Is forbidding them from eating too much candy evil because it violates their agency?
You’re trying to push my concept of ethics back into the utilitarian frame while I was trying to free it from that.
Well, either your ethics can be formulated as maximizing a utility function, in which case I want to understand that utility function, or your ethics conflicts with the VNM axioms, in which case I want to make the conflict explicit.
Of course there is a point where I value life more than my innocence, and my brain would just act and rationalize it with the ‘At least I feel guilty’ self-delusion. But that is exactly my point: even then it would still be morally wrong to kill one person.
I don’t get it. Are you saying not killing the one person is the right decision even if millions of lives are at stake?
My more realistic version of the interaction game does not account for that kind of asymmetric payoff, and it might turn out that defecting against the defector in this situation is no longer the best strategy.
How is it “more realistic” if it neglects to take asymmetry into account?
I personally would not defect even against someone killing the one to save the five, beyond pointing out the possible immorality and refusing to cooperate, staying neutral.
OK, but are there stakes high enough for you to cooperate?
Land and natural resources are just there; they are not a product of your agency. If many people want to make use of them, none of them can, as they will be at odds, so the natural state is that nobody can act using natural resources. If we prohibit their use we’re not limiting agency, as there is none to begin with; but if all but one person agree not to use these resources, that one person’s agency is increased, as he now has more options. The land tax would be compensation for the claims the other people give up, which is a perfectly fine trade.
I think to make that argument sufficiently detailed would require a new top-level post or at least its own comment thread.
what about children?
Children don’t have full agency which is why we need to raise them. I think the right of the parent to decide for their children diminishes as the child’s agency increases, and that government has a right to take children away from parents that don’t raise them to agency.
either your ethics can be formulated as maximizing a utility function, [...]
I have a utility function because I value morality, but using that utility function to explain the morality that I value would be circular reasoning.
I don’t get it. Are you saying not killing the one person is the right decision even if millions of lives are at stake?
It would be the moral decision, not necessarily the right decision. I’m using morality to inform my utility function, but I can still make a utility tradeoff. The whole point of agency ethics vs. value ethics is to separate the morality consideration from the utility consideration. Killing the one would, as I put it, make me both a bad and a good person, and people could still think that the good in this instance outweighs the bad. My point is that when we mash the two together into a single utility consideration we get wrong results, like killing the organ donor, because we neglect the underlying agency consideration.
How is it “more realistic” if it neglects to take asymmetry into account?
I meant ‘more realistic’ than the simple prisoners’ dilemma, but it’s not realistic enough to show how defecting against a defector might not always be the best strategy with asymmetrical payoffs.
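To make that concrete, here is a minimal sketch of how an asymmetric payoff can flip the best reply to a known defector (the payoff numbers are my own illustrative assumptions, not part of the original game):

```python
# In the standard prisoners' dilemma, defecting against a known defector
# is the best reply.  With an asymmetric payoff it need not be.
# All numbers are illustrative assumptions.

# My payoff as a function of (my_move, their_move); they always defect.
symmetric = {("C", "D"): 0, ("D", "D"): 1}     # retaliating pays: 1 > 0
asymmetric = {("C", "D"): 0, ("D", "D"): -10}  # retaliating is very costly

best_symmetric = max(("C", "D"), key=lambda m: symmetric[(m, "D")])
best_asymmetric = max(("C", "D"), key=lambda m: asymmetric[(m, "D")])

print(best_symmetric)   # D: defect against the defector
print(best_asymmetric)  # C: retaliation no longer pays
```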
OK, but are there stakes high enough for you to cooperate?
I don’t know what you mean.
The land tax would be compensation for the claims the other people give up, which is a perfectly fine trade.
OK, let’s do a thought experiment. On planet K, labor is required to keep the air temperature around 25°C: let’s say, in the form of operating special machines. The process cannot be automated, and if an insufficient number of machines is manned, the temperature starts to rise towards 200°C. The phenomenon is global and it is not possible to use the machines to cool a specific area of the surface. Now, 10% of the population are mutants that can survive the high temperature. The temperature-resistance mutation also triggers highly unusual dreams. This allows the mutants to know themselves as such, but there is no way to determine that a given person is a mutant (except subjecting her to high temperature for a sufficient amount of time, which seems to constitute a violation of agency if done forcibly). Normal people (without the mutation) would die if the cooling machines cease operation.
Is the government within their rights to collect tax from the entire population to keep the machines operating?
Children don’t have full agency which is why we need to raise them. I think the right of the parent to decide for their children diminishes as the child’s agency increases, and that government has a right to take children away from parents that don’t raise them to agency.
Why does the government have this right? If the children don’t have agency, the parents cannot defect against them; therefore the government has no right to defect against the parents.
It would be the moral decision, not necessarily the right decision.
If moral decision =/= right decision, how do you define “moral”? Why is this concept interesting at all? Maybe instead you should just say “my utility function has a component which assigns negative value to violating agency of other people” (btw, this would be something that holds for me too). Regarding the discussion above, it would also mean that e.g. collecting tax can be the right thing to do even if it violates agency.
Maybe instead you should just say “my utility function has a component which assigns negative value to violating agency of other people”
Let’s say I value paper clips, and you are destroying a bunch of paper clips that you created. What I see is that you are destroying value, and I refuse to cooperate in return. But you are not infringing on my agency, so I don’t infringe on yours. That is an entirely separate concern, not related to your destruction of value. So merely saying I value agency hides that important distinction.
I’m concerned that you are mixing two different things. One is that I might hold “not violating other people’s agency” as a terminal value (and I indeed believe I and many other people have such a value); this wouldn’t apply to a paperclip maximizer. The other is a game-theoretic phenomenon in which I (either causally or acausally) agree to cooperate with another agent; this would apply to any agent with a “sufficiently good” decision theory. I wouldn’t call it “moral”, I’d just call it “bargaining”. The last point is just a matter of terminology, but the distinction between the two scenarios is fundamental.
Although I’ve come to expect this result, it still baffles me.
I’m concerned that you are mixing two different things.
Those two things would be value ethics and agency ethics and I’m the one trying to hold them apart while you are conflating them.
I wouldn’t call it “moral”, I’d just call it “bargaining”.
But we’re not bargaining. This works even if we never meet. If agency is just another terminal value, you can trade it for whatever else you value, and in doing so you fail to make the distinction that I’m trying to show. Only because agency is not just a terminal value can I make a game-theoretic consideration outside the mere value comparison.
Agency thus becomes a set of guidelines that we use to judge right from wrong outside of mere value calculations. How is that not what we call ‘morality’?
but the distinction between the two scenarios is fundamental.
And that would be exactly my point.
But we’re not bargaining. This works even if we never meet.
Yeah, which would make it acausal trade. It’s still bargaining in the game-theoretic sense: the agents have a “sufficiently advanced” decision theory that allows them to reach a Pareto-optimal outcome (e.g. the Nash bargaining solution) rather than e.g. a Nash equilibrium, even acausally. It has nothing to do with “respecting agency”.
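For illustration, in a standard prisoners’ dilemma the two solution concepts come apart exactly as described; here is a minimal sketch (the payoff numbers are my own illustrative assumptions):

```python
# The Nash equilibrium of a prisoners' dilemma is mutual defection,
# while the Nash bargaining solution (maximising the product of gains
# over the disagreement point) is mutual cooperation.

payoffs = {  # (row_move, col_move) -> (row_payoff, col_payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
moves = ("C", "D")

# Pure-strategy Nash equilibria: no player gains by deviating alone.
equilibria = [
    (r, c) for r in moves for c in moves
    if payoffs[(r, c)][0] >= max(payoffs[(rr, c)][0] for rr in moves)
    and payoffs[(r, c)][1] >= max(payoffs[(r, cc)][1] for cc in moves)
]

# Nash bargaining: disagreement point = mutual-defection payoffs.
d1, d2 = payoffs[("D", "D")]
bargain = max(payoffs, key=lambda m: (payoffs[m][0] - d1) * (payoffs[m][1] - d2))

print(equilibria)  # [('D', 'D')]
print(bargain)     # ('C', 'C')
```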
Is the government within their rights to collect tax from the entire population to keep the machines operating?
It’s about what you tax, not what for.
If the children don’t have agency, the parents cannot defect against them; therefore the government has no right to defect against the parents.
The children have potential agency. I didn’t account for that in my original post but I consider it relevant.
If moral decision =/= right decision, how do you define “moral”? Why is this concept interesting at all?
It is interesting precisely because it is not already covered by some other concept. In my original phrasing, morality would be about determining that someone is a defector, while the right decision would be about whether or not defecting against the defector is the dominant strategy. Killing one guy to save millions is the right decision because I can safely assume that no one will defect against me in return. Killing one to save five is not so clear cut: in that case people might kill me in order not to be killed by me.
it would also mean that e.g. collecting tax can be the right thing to do even if it violates agency.
That would be the ‘necessary evil’ argument. However, since I believe taxes can be raised morally, I don’t consider the evil of current forms of taxation to be necessary.
But then the tax can exceed the actual value of the land, in which case the net value of the land becomes negative. This is troubling. Imagine, for example, that due to taxes increasing or your income decreasing you no longer have the means to pay for your land. But you can’t sell it either, because its value is negative! So you have to pay someone to take it away, but you might not have enough money. Moreover, if the size of the tax is disconnected from the actual value of the land, your “moral” justification for the tax falls apart.
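A minimal sketch of the arithmetic behind this concern, valuing the land as its net yield over a fixed holding period (the rent, tax, and holding-period figures are my own illustrative assumptions):

```python
# If a land tax is disconnected from the land's actual yield,
# the net value of holding the land can turn negative.

def net_land_value(annual_rent, annual_tax, years=20):
    """Net value of holding the land over a fixed period."""
    return (annual_rent - annual_tax) * years

print(net_land_value(1000, 800))   # 4000: still worth holding
print(net_land_value(1000, 1200))  # -4000: you'd pay someone to take it
```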
The children have potential agency. I didn’t account for that in my original post but I consider it relevant.
OK, so you need to introduce new rules about interaction with “potential agents”.
It is interesting precisely because it is not already covered by some other concept...
I don’t object to the concept of “violating agency is bad”, I’m objecting to equating it with “morality” since this use of terminology is confusing. On the other hand, names are not a matter of great importance.
However, since I believe taxes can be raised morally, I don’t consider the evil of current forms of taxation to be necessary.
Even if taxes can be raised consistently with your agency rule (assuming it receives a more precise formulation), it doesn’t follow that this is the correct way to raise taxes, since there are other considerations that have to be taken into account, and they might be stronger.