Insult is when you do both punch and dust, i.e. make a negative impact on an infinite amount of people and an additional negative impact on a single person. If degree of torture matters, then dusting and punching the same person would be relevant. I guess the theory per se would treat it differently if the punched person was not one of the dusted ones.
“doesn’t aggregate anything”—“aggregates the expected value of satisfaction in these situations”
When we form the expectation of what is going to happen in the described situation, I imagine breaking it down into sad stories and good stories. The expectation sways upwards if there are more good stories and downwards if there are more bad stories. My life will turn out somehow, which can differ from my “storymates’” outcomes. I didn’t try to hit any special term but just refer to the cases the probabilities of the stories refer to.
Thanks for clearing some things up. There are still some things I don’t follow, though.
You said my system would be ambivalent between sand and insult. I just wanted to make sure I understand what you’re saying here. Is insult specifically throwing sand at the same people that get it thrown at them in dust, with the same amount of sand thrown at them at the same throwing speed? If so, then it seems to me that my system would clearly prefer sand to insult. This is because there is some non-zero chance of an agent, conditioning only on being in this universe, being punched due to people like me choosing insult. This would make their satisfaction lower than it otherwise would be, thus decreasing the moral value of the universe if I chose insult over sand.
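To put that comparison in symbols (the letters here are my own illustration, not part of the system’s formal statement): let p > 0 be the probability, conditioning only on being in this universe, of being an agent who gets punched because agents like me choose insult, and let Δ > 0 be the resulting loss of expected satisfaction for such an agent. Then

$$EV(\text{insult}) = EV(\text{sand}) - p\,\Delta < EV(\text{sand}),$$

so the system strictly prefers sand.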
On the other hand, perhaps the number of people harmed by sand in “insult” would be lower than the number harmed by sand in “dust”. In this situation, my ethical system could potentially prefer insult over dust. This doesn’t seem like a bad thing to me, though, if it means you save some agents in certain agent-situation-descriptions from getting sand thrown at them.
Also, I’m wondering about your paragraph starting with, “The basic situation is that I have intuitions which I can’t formulate that well. I will try another route.” If I’m understanding it correctly, I think I more or less agree with what you said in that paragraph. But I’m having a hard time understanding the significance of it. Are you intending to show a potential problem with my ethical system using it? The paragraph after it makes it seem like you were, but I’m not really sure.
Yes, insult is supposed to add to the injury.

Under my error model you run into trouble when you treat any transfinite amount the same. From that perspective, recognising two transfinite amounts that could be different is progress.
Another attempt to throw a situation you might not be able to handle. Instead of having 2 infinite groups of unknown relative size all receiving the same bad thing, as compensation for the abuse one group gets 1 slice of cake and the second group gets 2 slices of cake. Could there be a difference in the group size that perfectly balances the cake slice difference in order to keep the cake expectation constant?
An additional challenging situation. Instead of giving 1 or 2 slices of cake, say that each slice is 3 cm wide, so the original choices are between 3 cm of cake and 6 cm of cake. Now take some custom amount of cake slice (say 2.7 cm), then determine what the group size would need to be to keep the world’s cake expectation the same. Then add 1 person to that group. Then convert that back to a cake slice width that keeps the cake expectation the same. How wide is the slice? Another formulation of the same challenge: define a real number r for which converting it to a group size would get you a group of 5 people.
Did you get on board about the difference between “help all the stars” and “all the stars as they could have been”?
Under my error model you run into trouble when you treat any transfinite amount the same. From that perspective, recognising two transfinite amounts that could be different is progress.
I guess this is the part I don’t really understand. My infinite ethical system doesn’t even think about transfinite quantities. It only considers the prior probability over ending up in situations, which is always real-valued. I’m not saying you’re wrong, of course, but I still can’t see any clear problem.
Another attempt to throw a situation you might not be able to handle. Instead of having 2 infinite groups of unknown relative size all receiving the same bad thing, as compensation for the abuse one group gets 1 slice of cake and the second group gets 2 slices of cake. Could there be a difference in the group size that perfectly balances the cake slice difference in order to keep the cake expectation constant?
Are you asking if there is a way to simultaneously change the group size as well as change the relative amount of cake for each group so the expected number of cakes received is constant?
If this is what you mean, then my system can deal with this. First off, remember that my system doesn’t worry about the number of agents in a group, but instead merely cares about the probability of an agent ending up in that group, conditioning only on being in this universe.
By changing the group size, however you define it, you can affect the probability of you ending up in that group. To see why, suppose you can do something to add the agents in a certain situation-description into the group. As long as this situation has a finite description length, the probability of ending up in that situation is non-zero, so stopping those agents from being in that situation can decrease the probability of you ending up in that group.
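One way to make the non-zero step explicit (this is my gloss, assuming a Solomonoff-style simplicity prior, which may not be exactly the prior the system uses): if a situation s has a finite description of length ℓ(s) bits, then a prior of the rough form

$$P(s) \;\ge\; c\,2^{-\ell(s)} \;>\; 0 \quad \text{for some constant } c > 0$$

assigns s a strictly positive, real-valued probability, so adding or removing the agents in s changes P(ending up in the group) by a real, non-infinitesimal amount.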
So, currently, the expected value of cake received from these situations is P(in first group) * 1 + P(in second group) * 2. (For simplicity, I’m assuming no one else in the universe gets cake.) So, if you do something that increases P(in the second group) by u, for example by adding more agents to the two-slice group, you just need to decrease P(in the first group) by 2u to keep the expectation constant.
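As a worked check (the symbols p₁ and p₂ are mine): write p₁ = P(in first group) and p₂ = P(in second group), so

$$E[\text{cake}] = p_1 \cdot 1 + p_2 \cdot 2.$$

If p₂ rises to p₂ + u while p₁ falls to p₁ − 2u (and the agents who leave the first group get no cake), the new expectation is

$$(p_1 - 2u)\cdot 1 + (p_2 + u)\cdot 2 = p_1 + 2p_2,$$

which equals the old expectation, as claimed.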
An additional challenging situation. Instead of giving 1 or 2 slices of cake, say that each slice is 3 cm wide, so the original choices are between 3 cm of cake and 6 cm of cake. Now take some custom amount of cake slice (say 2.7 cm), then determine what the group size would need to be to keep the world’s cake expectation the same. Then add 1 person to that group. Then convert that back to a cake slice width that keeps the cake expectation the same. How wide is the slice?
If literally only one more person gets cake, even considering acausal effects, then this would in general not affect the expected value of cake. So the slice would still be 2.7 cm.
Now, perhaps you meant that you directly cause one more person to get cake, resulting acausally in infinitely-many others getting cake. If so, then here’s my reasoning:
Previously, the expected value of cake received from these situations was P(in first group) * 1 + P(in second group) * 2. Since cake size is non-constant, let’s add a variable to this. So let’s use P(in first group) * u + P(in second group) * 2. I’m assuming only the 1-slice group gets its cake amount adjusted; you can generalize beyond this. u represents the amount of cake the first group gets, with one 3 cm slice being represented as 1.
Suppose adding the extra person acausally results in an increase in the probability of ending up in the first group by ϵ. So then, to avoid changing the expected value of cake, we need P(old probability of being in first group) * 1 = (P(old probability of being in first group) + ϵ) * u.
Solve that, and you get u = P(old probability of being in first group) / (P(old probability of being in first group) + ϵ). Just plug in the exact numbers of how much adding the person changes the probability of ending up in the group, and you can get an exact slice width.
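For example, with purely made-up illustrative numbers: if the old probability of being in the first group were 0.01 and adding the person raised it by ϵ = 0.0001, then

$$u = \frac{0.01}{0.01 + 0.0001} \approx 0.9901,$$

i.e. a slice of roughly 0.9901 × 3 cm ≈ 2.97 cm. (If the starting slice is the 2.7 cm one, replace the 1 on the left-hand side of the equation above with 0.9, and u scales down accordingly.)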
Another formulation of the same challenge: define a real number r for which converting it to a group size would get you a group of 5 people.
I’m not sure what you mean here. What does it mean to convert a real number to a group size? One trivial way to interpret this is that the answer is 5: if you convert 5 to a group size, I guess(?) that means a group of five people. So, there you go, the answer would be 5. I take it this isn’t what you meant, though.
Did you get on board about the difference between “help all the stars” and “all the stars as they could have been”?

No, I’m still not sure what you mean by this.
In P(old probability of being in first group) * 1 = (P(old probability of being in first group) + ϵ) * u, the epsilon is smaller than any real number, and there is no real small enough that it could characterise the difference between 1 and u.
If you have some odds or expectations that deal with groups, and you have other considerations that deal with a finite amount of individuals, then either the finite people do not impact the probabilities at all, or the probabilities stay infinitesimally close (for which I see a ~ b being used, as I am reading up on infinities), which will conflict with the desideratum of
Avoiding the fanaticism problem. Remedies that assign lexical priority to infinite goods may have strongly counterintuitive consequences.
In the usual way, lexical priorities enter the picture because of something large, but in your system there is a lexical priority because of something small: distinctions so faint that they become separable from the “big league” issues.
In P(old probability of being in first group) * 1 = (P(old probability of being in first group) + ϵ) * u, the epsilon is smaller than any real number, and there is no real small enough that it could characterise the difference between 1 and u.
Could you explain why you think so? I had already explained why ϵ would be real, so I’m wondering if you had an issue with my reasoning. To quote my past self:
Remember that if you decide to take a certain action, that implies that other agents who are sufficiently similar to you and in sufficiently similar circumstances also take that action. Thus, you can acausally have non-infinitesimal impact on the satisfaction of agents in situations of the form, “An agent in a world with someone just like Slider who is also in very similar circumstances to Slider’s.” The above scenario is of finite complexity and isn’t ruled out by evidence. Thus, the probability of an agent ending up in such a situation, conditioning only on being some agent in this universe, is nonzero [and non-infinitesimal].
If you have some odds or expectations that deal with groups, and you have other considerations that deal with a finite amount of individuals, then either the finite people do not impact the probabilities at all, or the probabilities stay infinitesimally close (for which I see a ~ b being used, as I am reading up on infinities), which will conflict with the desideratum...
Just to remind you, my ethical system basically never needs to worry about finite impacts. My ethical system doesn’t worry about causal impacts, except to the extent that they inform you about the total acausal impact of your actions on the moral value of the universe. All things you do have infinite acausal impact, and these are all my system needs to consider. To use my ethical system, you don’t even need a notion of causal impact at all.