By one logic, since we prefer B to A, if we “acausalize” this we should still preserve the preference (because “the amount of copies granted” would seem to be even-handed), so we would expect to prefer D to C. However, in a system where all infinities are of equal size, C = D and we become ambivalent between the options.
We shouldn’t necessarily prefer D to C. Remember that one of the main things you can do to increase the moral value of the universe is to causally help other creatures, so that other people who are in sufficiently similar circumstances to you will also help, and you thereby acausally make them help others. Suppose you instead have the option to causally help all of the agents that would have been acausally helped if you had just causally helped one agent. Then the AI shouldn’t prefer D to C, because the results are identical.
Here an ad hoc analysis might choose a clear winner. But then consider the combined problem where you have all the options: peace, punch, dust and insult. Having one analysis that gets applied to all options equally will run into trouble. If the analysis is somehow “turn options into real probabilities”, then problems with infinitesimals are likely to crop up.
Could you explain how this would cause problems? If those are the options, it seems like a clear-cut case of my ethical system recommending peace, unless there is some benefit to punching, insulting, or throwing sand you haven’t mentioned.
To see why, if you decide to throw sand, you’re decreasing the satisfaction of agents in situations of the form “Can get sand thrown at them from someone just like Slider”. This would in general decrease the moral value of the world, so my system wouldn’t recommend it. The same reasoning can show that the system wouldn’t recommend punching or insulting.
There could be the “modulo infinity” failure mode where peace and dust get the same number. “One class only” would fail to give numbers for one of the subproblems.
Interesting. Could you elaborate?
I’m not really clear on why you are worried about these different classes. Remember that any action you take will, at least acausally, help a countably infinite number of agents. Similarly, I think all your actions will have some real-valued effect on the moral value of the universe. To see why, just note that as long as you help one agent, the expected satisfaction of agents in situations of the form “<description of the circumstances of the above agent> who can be helped by someone just like Slider” goes up. This situation has finite complexity, and thus real and non-zero probability. And the moral value of the universe is capped at the bounds of the life satisfaction measure, so you can’t have infinite increases to the moral value of the universe, either.
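To illustrate the “finite complexity, therefore real and non-zero probability” step, here is a minimal sketch. It assumes, purely for illustration, an unnormalized description-length weight of the form 2^(−L); the system’s actual prior over situation-descriptions isn’t specified here, so this is just a stand-in for “every finite description gets a strictly positive, real weight”.

```python
# Hedged illustration only: weight a finite situation-description by 2**(-length).
# The exact prior doesn't matter for the point; any finite description gets a
# strictly positive, real (non-infinitesimal) weight.
def toy_prior_weight(description: str) -> float:
    return 2.0 ** (-len(description))

w = toy_prior_weight("circumstances X, who can be helped by someone just like Slider")
assert w > 0.0  # positive and real, however long the (finite) description is
```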
You can’t causally help people without also acausally helping in the same go. Your acausal “influence” forces people matching your description to act the same. Even if it is possible to consider the directly helped and the indirectly helped to be the same, they could also be different. In order to be fair we should also extend this to C. What if the people helped by all the acausal copies are in fact the same person? (If there is a proof that this can’t be, why doesn’t that apply when the patient group is large?)
The interactions are all supposed to be negative in peace, punch, dust and insult. The surprising thing to me would be if the system were ambivalent between sand and insult being a bad idea. If we don’t necessarily prefer D to C when helping, does it matter whether we torture our people a lot or a little, as it’s going to get saturated by infinity anyway?
The basic situation is that I have intuitions which I can’t formulate that well. I will try another route. Suppose I help one person, and there is either a finite or an infinite number of people in my world. Finite impact over finite people leads to a real and finite kick. Finite impact over infinite people leads to an infinitesimal kick. Ah, but acausal copies of the finites! Yeah, but what about the acausal copies of the infinites? When I say “the world has finite or infinite people”, that is “within description”: say that there are infinite people because I believe there are infinitely many stars. Then all the acausal copies of Sol are going to have their own “out there” stars. Acts that “help all the stars” and “all the stars as they could have been” are different. At least until we consider that any agent that decides to “help all the stars” will have acausal shadows “that could have been”. But still, this consideration increases the impact on the multiverse (or keeps it the same if moving from a monoverse to a multiverse in the same step).
One way to slither out of this is to claim that world-predescription-expansion needs to be finite, that there are only finitely many configurations of stars before they start to repeat. Then we can drop “directly infinite” worlds, and all infinity is because of acausality. So there is no such thing as directly helping an infinite number of people.
If I have real, non-zero impacts for an infinite number of people, naively that would add up to a more-than-finite aggregate. Fine, we can renormalise the aggregate to be 1 with a division, but that will mean that a single agent’s weight in that average is going to be infinitesimal (and thus not real). If we acausalise, then we should do so both for the numerator and the denominator. If we don’t acausalise the denominator, then we should still acausalise the numerator even if we have finite patients (but then we end up with a more-than-finite kick). It is inconsistent if the nudges happen based on the bad luck of whether we are “in the wrong weight class”.
The interactions are all supposed to be negative in peace, punch, dust and insult. The surprising thing to me would be if the system were ambivalent between sand and insult being a bad idea. If we don’t necessarily prefer D to C when helping, does it matter whether we torture our people a lot or a little, as it’s going to get saturated by infinity anyway?
Could you explain what insult is supposed to do? You didn’t say what it does in the previous comment. Does it causally hurt infinitely many people?
Anyways, it seems to me that my system would not be ambivalent about whether you torture people a little or a lot. Let C be the class of finite descriptions of circumstances of agents in the universe that would get hurt a little or a lot if you decide to hurt them. The probability of an agent ending up in class C is non-zero. But if you decide to torture them a lot their expected life-satisfaction would be much lower than if you decide to torture them a little. Thus, the total moral value of the universe would be lower if you decide to torture a lot rather than a little.
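To make the comparison concrete, here is a minimal sketch with made-up numbers; the class probability and the satisfaction values below are purely illustrative assumptions, not outputs of the system.

```python
# Toy comparison; every number here is a hypothetical assumption.
# C is a finitely-describable class of agent-circumstances that get hurt
# if I decide to hurt them. Its probability is small but non-zero.
p_C = 1e-6                        # P(an agent's situation is in C | this universe)
sat_if_torture_little = -1.0      # expected life-satisfaction in C if tortured a little
sat_if_torture_lot = -10.0        # expected life-satisfaction in C if tortured a lot
value_from_everything_else = 0.4  # contribution of all other situation-descriptions

value_little = value_from_everything_else + p_C * sat_if_torture_little
value_lot = value_from_everything_else + p_C * sat_if_torture_lot

# Torturing a lot strictly lowers the moral value of the universe:
assert value_lot < value_little
```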
When I say “the world has finite or infinite people”, that is “within description”: say that there are infinite people because I believe there are infinitely many stars. Then all the acausal copies of Sol are going to have their own “out there” stars. Acts that “help all the stars” and “all the stars as they could have been” are different. At least until we consider that any agent that decides to “help all the stars” will have acausal shadows “that could have been”. But still, this consideration increases the impact on the multiverse (or keeps it the same if moving from a monoverse to a multiverse in the same step).
I can’t say I’m following you here. Specifically, how do you consider “help all the stars” and “all the stars as they could have been” to be different? I thought “help” meant “make it better than it otherwise could have been”. I’m also not sure what counts as acausal shadows. I, alas, couldn’t find this phrase used anywhere else online.
If I have real, non-zero impacts for an infinite number of people, naively that would add up to a more-than-finite aggregate.
Remember that my ethical system doesn’t aggregate anything across all agents in the universe. Instead, it merely considers finite descriptions of situations an agent could be in within the universe, and then aggregates the expected value of satisfaction in these situations, weighted by their probability, conditioning only on being in this universe.
There’s no way for this to be infinite. The probabilities of all the situations sum to 1 (they are assumed to be disjoint), and the measure of life satisfaction was said to be bounded.
And remember, my system doesn’t first find your causal impact on the moral value of the universe and then somehow use this to find the acausal impact. Because in our universe, I think the causal impact will always be zero. Instead, just worry directly about acausal impacts. And your acausal impact on the moral value of the universe will always be finite and non-infinitesimal.
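As a rough illustration of the kind of bookkeeping this involves, here is a minimal sketch. The situation descriptions, probabilities, and satisfaction values are invented for the example; the satisfaction measure is assumed to be bounded, and infinitely many remaining descriptions are lumped into one entry purely to keep the toy sum finite on the page.

```python
# Moral value = sum over finite situation-descriptions of
#   P(situation | being in this universe) * E[life-satisfaction | situation].
# All entries below are hypothetical. Descriptions are assumed disjoint,
# probabilities sum to 1, and satisfaction is bounded, so the total is finite.
situations = {
    "agent who can be helped by someone just like Slider": (0.02, 0.6),
    "agent who can get sand thrown at them by someone just like Slider": (0.01, -0.3),
    "every other finitely-described situation (lumped together here)": (0.97, 0.1),
}

def moral_value(situations):
    total_p = sum(p for p, _ in situations.values())
    assert abs(total_p - 1.0) < 1e-9  # disjoint situations, probabilities sum to 1
    return sum(p * sat for p, sat in situations.values())

print(moral_value(situations))  # a finite, real number
```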
Insult is when you do both punch and dust, i.e. make a negative impact on an infinite number of people and an additional negative impact on a single person. If the degree of torture matters, then dusting and punching the same person would be relevant. I guess the theory per se would treat it differently if the punched person was not one of the dusted ones.
“doesn’t aggregate anything”—“aggregates the expected value of satisfaction in these situations”
When we form the expectation of what is going to happen in the described situation, I imagine breaking it down into sad stories and good stories. The expectation sways upwards if there are more good stories and downwards if there are more bad stories. My life will turn out somehow, which can differ from my “storymates’” outcomes. I didn’t try to hit any special term but just refer to the cases the probabilities of the stories refer to.
Thanks for clearing some things up. There are still some things I don’t follow, though.
You said my system would be ambivalent between sand and insult. I just wanted to make sure I understand what you’re saying here. Is insult specifically throwing sand at the same people that get it thrown at in dust, with the same amount of sand thrown at them at the same throwing speed? If so, then it seems to me that my system would clearly prefer sand to insult. This is because there is some non-zero chance of an agent, conditioning only on being in this universe, being punched due to people like me choosing insult. This would make their satisfaction lower than it otherwise would be, thus decreasing the moral value of the universe if I chose insult over sand.
On the other hand, perhaps the number of people harmed by sand in “insult” would be lower than the number harmed by sand in “dust”. In this situation, my ethical system could potentially prefer insult over dust. This doesn’t seem like a bad thing to me, though, if it means you save some agents in certain agent-situation-descriptions from getting sand thrown at them.
Also, I’m wondering about your paragraph starting with, “The basic situation is that I have intuitions which I can’t formulate that well. I will try another route.” If I’m understanding it correctly, I think I more or less agree with what you said in that paragraph. But I’m having a hard time understanding the significance of it. Are you intending to show a potential problem with my ethical system using it? The paragraph after it makes it seem like you were, but I’m not really sure.
Yes, insult is supposed to add to the injury.
Under my error model you run into trouble when you treat all transfinite amounts the same. From that perspective, recognising two transfinite amounts that could be different is progress.
Another attempt to throw a situation you might not be able to handle. Instead of having 2 infinite groups of unknown relative size all receiving the same bad thing, as compensation for the abuse give 1 slice of cake to one group and 2 slices of cake to the second group. Could there be a difference in the group sizes that perfectly balances the cake slice difference, in order to keep the cake expectation constant?
Additional challenging situation. Instead of giving 1 or 2 slices of cake, say that each slice is 3 cm wide, so the original choices are between 3 cm of cake and 6 cm of cake. Now take some custom amount of cake (say a 2.7 cm slice), then determine what the group size would need to be to keep the world cake expectation the same. Then add 1 person to that group. Then convert that back to a cake slice width that keeps the cake expectation the same. How wide is the slice? Another formulation of the same challenge: Define a real number r for which converting that to a group size would get you a group of 5 people.
Did you get on board about the difference between “help all the stars” and “all the stars as they could have been”?
Under my error model you run into trouble when you treat all transfinite amounts the same. From that perspective, recognising two transfinite amounts that could be different is progress.
I guess this is the part I don’t really understand. My infinite ethical system doesn’t even think about transfinite quantities. It only considers the prior probability over ending up in situations, which is always real-valued. I’m not saying you’re wrong, of course, but I still can’t see any clear problem.
Another attempt to throw a situation you might not be able to handle. Instead of having 2 infinite groups of unknown relative size all receiving the same bad thing, as compensation for the abuse give 1 slice of cake to one group and 2 slices of cake to the second group. Could there be a difference in the group sizes that perfectly balances the cake slice difference, in order to keep the cake expectation constant?
Are you asking if there is a way to simultaneously change the group size as well as change the relative amount of cake for each group so the expected number of cakes received is constant?
If this is what you mean, then my system can deal with this. First off, remember that my system doesn’t worry about the number of agents in a group, but instead merely cares about the probability of an agent ending up in that group, conditioning only on being in this universe.
By changing the group size, however you define it, you can affect the probability of you ending up in that group. To see why, suppose you can do something to add the agents in a certain situation-description into the group. Well, as long as this situation has a finite description length, the probability of ending up in that situation is non-zero, and thus stopping those agents from being in that situation can decrease the probability of you ending up in that group by a non-zero amount.
So, currently, the expected value of cake received from these situations is P(in first group) * 1 + P(in second group) * 2. (For simplicity, I’m assuming no one else in the universe gets cake.) So, if you increase the amount of cake received by the second group by u, you just need to decrease P(in first group) by P(in second group) * u to keep the expectation constant; similarly, if you instead increase P(in second group) by u, you need to decrease P(in first group) by 2u.
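As a quick sanity check of that bookkeeping, here is a sketch with invented probabilities; none of the numbers below are part of the system itself.

```python
# Check the two compensations described above with hypothetical numbers.
p1, p2 = 0.30, 0.20            # P(in first group), P(in second group): made up
expected_cake = p1 * 1 + p2 * 2

# (a) Give the second group u more cake per person, shrink P(in first group) by p2 * u.
u = 0.10
after_a = (p1 - p2 * u) * 1 + p2 * (2 + u)

# (b) Grow P(in second group) by u, shrink P(in first group) by 2 * u.
after_b = (p1 - 2 * u) * 1 + (p2 + u) * 2

assert abs(after_a - expected_cake) < 1e-12
assert abs(after_b - expected_cake) < 1e-12
```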
Additional challenging situation. Instead of giving 1 or 2 slices of cake, say that each slice is 3 cm wide, so the original choices are between 3 cm of cake and 6 cm of cake. Now take some custom amount of cake (say a 2.7 cm slice), then determine what the group size would need to be to keep the world cake expectation the same. Then add 1 person to that group. Then convert that back to a cake slice width that keeps the cake expectation the same. How wide is the slice?
If literally only one more person gets cake, even considering acausal effects, then this would in general not affect the expected value of cake. So the slice would still be 2.7 cm.
Now, perhaps you meant that you directly cause one more person to get cake, resulting acausally in infinitely many others getting cake. If so, then here’s my reasoning:
Previously, the expected value of cake received from these situations was P(in first group) * 1 + P(in second group) * 2. Since the cake size is non-constant, let’s add a variable to this. So let’s use P(in first group) * u + P(in second group) * 2. I’m assuming only the 1-slice group gets its cake amount adjusted; you can generalize beyond this. u represents the amount of cake the first group gets, with one 3 cm slice being represented as 1.
Suppose adding the extra person acausally results in an increase in the probability of ending up in the first group by ϵ. So then, to avoid changing the expected value of cake, we need P_old(in first group) * 1 = (P_old(in first group) + ϵ) * u, where P_old(in first group) is the old probability of being in the first group.
Solve that, and you get u = P_old(in first group) / (P_old(in first group) + ϵ). Just plug in the exact numbers for how much adding the person changes the probability of ending up in the group, and you can get an exact slice width.
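For concreteness, here is a quick numeric sketch of that rearrangement; the old probability and the value of ϵ are made up, and the 2.7 cm variant at the end is just the same formula rescaled for illustration.

```python
# u = P_old(in first group) / (P_old(in first group) + eps)
p_old = 0.30   # old probability of being in the first group (hypothetical)
eps = 0.002    # acausal increase in that probability from adding the person (hypothetical)

u = p_old / (p_old + eps)  # new cake amount, in units of one 3 cm slice
print(u * 3, "cm")         # ~2.98 cm instead of 3 cm

# Starting from a 2.7 cm slice (0.9 of a slice) instead, the same rearrangement gives:
print(0.9 * u * 3, "cm")   # ~2.68 cm
```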
Another formulation of the same challenge: Define a real number r for which converting that to a group size would get you a group of 5 people.
I’m not sure what you mean here. What does it mean to convert a real number to a group size? One trivial way to interpret this is that the answer is 5: if you convert 5 to a group size, I guess(?) that means a group of five people. So, there you go, the answer would be 5. I take it this isn’t what you meant, though.
Did you get on board about the difference between “help all the stars” and “all the stars as they could have been”?
No, I’m still not sure what you mean by this.
In P_old(in first group) * 1 = (P_old(in first group) + ϵ) * u, the epsilon is smaller than any real number, and there is no real small enough that it could characterise the difference between 1 and u.
If you have some odds or expectations that deal with groups, and you have other considerations that deal with a finite number of individuals, then either the finite people don’t impact the probabilities at all, or the probabilities stay infinitesimally close (for which I see a ~ b being used, as I am reading up on infinities), which will conflict with the desideratum of
Avoiding the fanaticism problem. Remedies that assign lexical priority to infinite goods may have strongly counterintuitive consequences.
In the usual way, lexical priorities enter the picture because of something large, but in your system there is a lexical priority because of something small: distinctions so faint that they become separable from the “big league” issues.
In P_old(in first group) * 1 = (P_old(in first group) + ϵ) * u, the epsilon is smaller than any real number, and there is no real small enough that it could characterise the difference between 1 and u.
Could you explain why you think so? I had already explained why ϵ would be real, so I’m wondering if you had an issue with my reasoning. To quote my past self:
Remember that if you decide to take a certain action, that implies that other agents who are sufficiently similar to you and in sufficiently similar circumstances also take that action. Thus, you can acausally have non-infinitesimal impact on the satisfaction of agents in situations of the form, “An agent in a world with someone just like Slider who is also in very similar circumstances to Slider’s.” The above scenario is of finite complexity and isn’t ruled out by evidence. Thus, the probability of an agent ending up in such a situation, conditioning only on being some agent in this universe, is nonzero [and non-infinitesimal].
If you have some odds or expectations that deal with groups, and you have other considerations that deal with a finite number of individuals, then either the finite people don’t impact the probabilities at all, or the probabilities stay infinitesimally close (for which I see a ~ b being used, as I am reading up on infinities), which will conflict with the desideratum...
Just to remind you, my ethical system basically never needs to worry about finite impacts. My ethical system doesn’t worry about causal impacts, except to the extent that they inform you about the total acausal impact of your actions on the moral value of the universe. All things you do have infinite acausal impact, and these are all my system needs to consider. To use my ethical system, you don’t even need a notion of causal impact at all.