The land tax would be compensation to other people for the claim on the land they’d have to give up, which is a perfectly fine trade.
OK, let’s do a thought experiment. On planet K, labor is required to keep the air temperature around 25C: let’s say, in the form of operating special machines. The process cannot be automated and if an insufficient number of machines is manned, the temperature starts to rise towards 200C. The phenomenon is global and it is not possible to use the machines to cool a specific area of the surface. Now, 10% of the population are mutants that can survive the high temperature. The temperature resistance mutation also triggers highly unusual dreams. This allows the mutants to know themselves as such but there is no way to determine that a given person is a mutant (except subjecting her to high temperature for a sufficient amount of time, which seems to constitute a violation of agency if done forcibly). Normal people (without the mutation) would die if the cooling machines cease operation.
Is the government within their rights to collect tax from the entire population to keep the machines operating?
Children don’t have full agency, which is why we need to raise them. I think the right of the parent to decide for their children diminishes as the child’s agency increases, and that government has a right to take children away from parents that don’t raise them to agency.
Why does the government have this right? If the children don’t have agency, the parents cannot defect against them; therefore the government has no right to defect against the parents.
It would be the moral decision, not necessarily the right decision.
If moral decision =/= right decision, how do you define “moral”? Why is this concept interesting at all? Maybe instead you should just say “my utility function has a component which assigns negative value to violating agency of other people” (btw this would be something that holds for me too). Regarding the discussion above, it would also mean that e.g. collecting tax can be the right thing to do even if it violates agency.
Maybe instead you should just say “my utility function has a component which assigns negative value to violating agency of other people”
Let’s say I value paper clips. And you are destroying a bunch of paper clips that you created. What I see is that you are destroying value and I refuse to cooperate in return. But, you are not infringing on my agency, so I don’t infringe on yours. That is an entirely separate concern not related to your destruction of value. So merely saying I value agency hides that important distinction.
I’m concerned that you are mixing two different things. One thing is that I might hold “not violating other people’s agency” as a terminal value (and I indeed believe I and many other people have such a value). This wouldn’t apply to a paperclip maximizer. Another is a game theoretic phenomenon in which I (either causally or acausally) agree to cooperate with another agent. This would apply to any agent with a “sufficiently good” decision theory. I wouldn’t call it “moral”, I’d just call it “bargaining”. The last point is just a matter of terminology, but the distinction between the two scenarios is principal.
Although I’ve come to expect this result, it still baffles me.
I’m concerned that you are mixing two different things.
Those two things would be value ethics and agency ethics and I’m the one trying to hold them apart while you are conflating them.
I wouldn’t call it “moral”, I’d just call it “bargaining”.
But we’re not bargaining. This works even if we never meet. If agency is just another terminal value you can trade it for whatever else you value and by that you are failing to make the distinction that I’m trying to show. Only because agency is not just a terminal value can I make a game theoretic consideration outside the mere value comparison.
Agency thus becomes a set of guidelines that we use to judge right from wrong outside of mere value calculations. How is that not what we call ‘morality’?
but the distinction between the two scenarios is principal.
And that would be exactly my point.
But we’re not bargaining. This works even if we never meet.
Yeah, which would make it acausal trade. It’s still bargaining in the game theoretic sense. The agents have a “sufficiently advanced” decision theory to allow them to reach a Pareto optimal outcome (e.g. Nash bargaining solution) rather than e.g. Nash equilibrium even acausally. It has nothing to do with “respecting agency”.
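To make the game-theoretic point concrete, here is a minimal Python sketch (hypothetical Prisoner’s Dilemma payoffs, not taken from the discussion): the only Nash equilibrium of the one-shot game is mutual defection, while the Nash bargaining solution, computed here over pure outcomes with mutual defection as the disagreement point, selects mutual cooperation. Nothing in the calculation refers to agency as a value.

```python
# Hypothetical Prisoner's Dilemma payoffs (row player, column player).
# Strategies: 0 = cooperate, 1 = defect. All numbers are illustrative.
payoffs = {
    (0, 0): (3, 3),  # both cooperate
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),  # row defects, column cooperates
    (1, 1): (1, 1),  # both defect
}

def best_reply(their_move, player):
    # The move that maximizes this player's payoff given the other player's move.
    if player == 0:
        return max([0, 1], key=lambda m: payoffs[(m, their_move)][0])
    return max([0, 1], key=lambda m: payoffs[(their_move, m)][1])

# Pure-strategy Nash equilibria: each move is a best reply to the other.
equilibria = [
    (a, b) for (a, b) in payoffs
    if a == best_reply(b, 0) and b == best_reply(a, 1)
]

# Nash bargaining solution, restricted to pure outcomes for simplicity,
# with mutual defection as the disagreement point: maximize the product
# of each player's gain over the disagreement payoff.
d = payoffs[(1, 1)]
nbs = max(
    payoffs,
    key=lambda o: (payoffs[o][0] - d[0]) * (payoffs[o][1] - d[1]),
)

print("Nash equilibrium:", equilibria)   # [(1, 1)] - mutual defection
print("Bargaining solution:", nbs)       # (0, 0)  - mutual cooperation
```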
Is the government within their rights to collect tax from the entire population to keep the machines operating?
It’s about what you tax, not what for.
If the children don’t have agency, the parents cannot defect against them; therefore the government has no right to defect against the parents.
The children have potential agency. I didn’t account for that in my original post but I consider it relevant.
If moral decision =/= right decision, how do you define “moral”? Why is this concept interesting at all?
It is interesting precisely because it is not already covered by some other concept. In my original phrasing, morality would be about determining that someone is a defector, while the right decision would be about whether or not defecting against the defector is the dominant strategy. Killing one guy to save millions is the right decision because I can safely assume that no one will defect against me in return. Killing one to save five is not so clear cut. In that case people might kill me in order to not be killed by me.
it would also mean that e.g. collecting tax can be the right thing to do even if it violates agency.
That would be the ‘necessary evil’ argument. However since I believe taxes can be raised morally I don’t consider the evil that is current forms of taxation to be necessary.
But then the tax can exceed the actual value of the land, in which case the net value of the land becomes negative. This is troubling. Imagine, for example, that due to taxes increasing or your income decreasing you no longer have the means to pay the tax on your land. But you can’t sell it either, because its value is negative! So you have to pay someone to take it away, but you might not have enough money. Moreover, if the size of the tax is disconnected from the actual value of the land, your “moral” justification for the tax falls apart.
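A toy perpetuity calculation (made-up numbers, purely to illustrate the worry) shows how a tax assessed independently of what the land can yield can push the net value of holding the title below zero:

```python
# Illustrative numbers only: what the land can yield per year vs. the annual tax on it.
annual_rental_value = 1_000   # best income the land itself can produce
annual_land_tax     = 1_500   # tax assessed independently of that income
discount_rate       = 0.05

# Net value of holding the title forever (perpetuity: net cash flow / discount rate).
net_value = (annual_rental_value - annual_land_tax) / discount_rate
print(net_value)  # -10000.0: you would have to pay someone that much to take the land
```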
The children have potential agency. I didn’t account for that in my original post but I consider it relevant.
OK, so you need to introduce new rules about interaction with “potential agents”.
It is interesting precisely because it is not already covered by some other concept...
I don’t object to the concept of “violating agency is bad”, I’m objecting to equating it with “morality” since this use of terminology is confusing. On the other hand, names are not a matter of great importance.
However since I believe taxes can be raised morally I don’t consider the evil that is current forms of taxation to be necessary.
Even if taxes can be raised consistently with your agency rule (assuming it receives a more precise formulation), it doesn’t follow it is the correct way to raise taxes since there are other considerations that have to be taken into account, which might be stronger.