Post is pretty long-winded, a bit wall-of-text-y: a lot of text for what seems like a fixed amount of content, while being very claimy and less showy about the properties.
My suspicion is that the acausal impact ends up being infinitesimal anyway. Even if one would get a finite probability impact for probabilities concerning an infinite universe for claims like “should I help this one person”, claims like “should I help these infinite persons” would still have an infinity-class jump between the statements (even if both need to deliver an infinite kick into the universe to make a dent, there is an additional level to one of these statements, and not all infinities are equal).
I am going to anticipate that your scheme will try to rule out statements like “should I help these infinite persons” for a reason like “it’s not of finite complexity”. I am not convinced that finite-complexity descriptions are good guarantees that the described condition makes up a finite proportion of possibility space. I think “getting a perfect bullseye” is a description of finite complexity, but it describes an outcome of (real) 0 probability. Being positive is no guarantee of finitude either; infinitesimal chances would spell trouble for the theory. And if statements like “Slider (or a near equivalent) gets a perfect bullseye” are disallowed for not being finitely groundable, then most references to infinite objects are ruled out anyway. It’s not exactly an infinite ethic if it is not allowed to refer to infinite things.
I am also slightly worried that “description cuts” will allow “doubling the ball” kinds of events where total probability doesn’t get preserved. That phenomenon gets around the theoretical problems by designating some sets non-measurable; being a set doesn’t mean it’s measurable. I am worried that “descriptions always have a usable probability” is too lax and will bleed from the edges, like a naive assumption that all sets are measurable would.
I feel at a loss for being able to spell out my worries. It is mainly that murkiness makes it possible to hide undefinedness. As an analogue, one could think of people trying to formulate calculus. There is a way of thinking about it where you make tiny infinitesimal triangles and measure their properties. In order for the side lengths to be “sane”, both sides of the triangle need to be “small”. If you had a triangle that was finite in length on one side and infinitesimal on another, then the angle of the remaining side is likely to be something “wild”. If you “properly take the limits”, then you can essentially forget that you are in an infinitesimal realm (or one that can be made analogous to one), but forgetfulness doesn’t help with checking for that “properness”.
Post is pretty long-winded, a bit wall-of-text-y: a lot of text for what seems like a fixed amount of content, while being very claimy and less showy about the properties.
Yeah, I see what you mean. I have a hard time balancing between being succinct and providing sufficient support and detail. It actually used to be shorter, but I lengthened it to address concerns brought up in a review.
My suspicion is that the acausal impact ends up being infinitesimal anyway. Even if one would get a finite probability impact for probabilities concerning an infinite universe for claims like “should I help this one person”, claims like “should I help these infinite persons” would still have an infinity-class jump between the statements (even if both need to deliver an infinite kick into the universe to make a dent, there is an additional level to one of these statements, and not all infinities are equal).
Could you elaborate on what you mean by a class jump?
Remember that if you ask, “should I help this one person”, that is another way of saying, “should I (acausally) help this infinite class of people in similar circumstances”. And I think in general the cardinality of this infinity would be the same as the cardinality of the people helped by considering “should I help these infinitely-many persons”.
Most likely the number of people in this universe is countably infinite, and all situations are repeated infinitely-many times. Thus, deciding “should I help this one person” in the affirmative would acausally help ℵ0 people, and so would causally helping the infinitely-many people.
I am going to anticipate that your scheme will try to rule out statements like “should I help these infinite persons” for a reason like “it’s not of finite complexity”. I am not convinced that finite-complexity descriptions are good guarantees that the described condition makes up a finite proportion of possibility space. I think “getting a perfect bullseye” is a description of finite complexity, but it describes an outcome of (real) 0 probability. Being positive is no guarantee of finitude either; infinitesimal chances would spell trouble for the theory. And if statements like “Slider (or a near equivalent) gets a perfect bullseye” are disallowed for not being finitely groundable, then most references to infinite objects are ruled out anyway. It’s not exactly an infinite ethic if it is not allowed to refer to infinite things.
No, my system doesn’t rule out statements of the form, “should I help these infinitely-many persons”. This can have finite complexity, after all, provided there is sufficient regularity in who will be helped. Also, don’t forget, even if you’re just causally helping a single person, you’re still acausally helping infinitely-many people. So, in a sense, ruling out helping infinitely-many people would rule out helping anyone.
I am also slightly worried that “description cuts” will allow “doubling the ball” kinds of events where total probability doesn’t get preserved. That phenomenon gets around the theoretical problems by designating some sets non-measurable; being a set doesn’t mean it’s measurable. I am worried that “descriptions always have a usable probability” is too lax and will bleed from the edges, like a naive assumption that all sets are measurable would.
I’m not sure what specifically you have in mind with respect to doubling the sphere-esque issues. But if your system of probabilistic reasoning doesn’t preserve the total probability when partitioning an event into multiple events, that sounds like a serious problem with your probabilistic reasoning system. I mean, if your reasoning system does this, then it’s not even a probability measure.
If you can prove U⟺V∨W, but the system still says P(U)≠P(V∨W), then you aren’t satisfying one of the basic desiderata that motivated Bayesian probability theory: asking the same question in two different ways should result in the same probability. And V∨W is just another way of asking U.
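To make that desideratum concrete (a minimal sketch; treating V and W as disjoint is an extra assumption on my part):

If U ⟺ V ∨ W with V and W mutually exclusive, then any probability measure must give
P(U) = P(V ∨ W) = P(V) + P(W),
so cutting an event into disjoint described pieces can redistribute its probability but never change the total.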
The “nearby” acausal relatedness gives a certain multiplier (that is transfinite). That multiplier should be the same for all options in that scenario. Then if you have one option with a finite direct impact and another with an infinite direct impact, the “simple” option is “only” infinite overall, but the “large” option is “doubly” infinite, because each of your likenesses already has an infinite impact on its own (plus, as an aggregate, it would gain an infinite quality that way too).
Now, cardinalities don’t really support “doubly infinite”: ℵ0 + ℵ0 is just ℵ0. However, for transfinite values cardinality and ordinality diverge; for example, with surreal numbers one can have ω + ω > ω and, more relevantly here, ω < ω². As I understand it, there are four kinds of impact: A = “direct impact of helping one”, B = “direct impact of helping an infinite amount”, C = “acausal impact of choosing to help 1” and D = “acausal impact of choosing to help an infinite amount”. You claim that B and C are either equivalent or roughly equivalent and that A and B are not. But there is a lurking paralysis if D and C are (roughly) equivalent. By one logic, because we prefer B to A, if we “acausalize” this we should still preserve this preference (because “the amount of copies granted” would seem to be even-handed), so we would expect to prefer D to C. However, in a system where all infinities are of equal size, C = D and we become ambivalent between the options. To me it would seem natural, and the boundary conditions come close to forcing, that D has just as vast a gap to C as B has to A.
In the above, “roughly” can be translated into more precise language as “within a finite multiple of each other”, i.e. they are not relatively infinite, i.e. they belong to the same Archimedean field (helping 1 person or 2 people is not the same, but both represent the case of “help a fixed finite amount of people”). Within the example it seems we need to identify at least 3 such fields. Moving within a field is “easy”, understood, real math. But moving between them, “jumping levels”, is less understood. A question like “are two finite numbers equal?” can’t be answered in the abstract; we need to specify the finite numbers (and the result could go either way). Knowing that an amount is transfinite only tells us about its quality, and we still need (or find it useful) to ask how big it is.
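To spell out the arithmetic behind the “class jump” (standard identities; the pairing with A–D is my own gloss):

ℵ0 + ℵ0 = ℵ0 and ℵ0 · ℵ0 = ℵ0 (cardinal arithmetic: every countable total collapses to the same size),
ω + ω = ω·2 > ω and ω·ω = ω² > n·ω for every finite n (surreal/ordinal arithmetic: the totals keep track of how much bigger they are).
So if C behaves like ω·1 = ω (each of ω copies helps one person) and D behaves like ω·ω = ω² (each of ω copies helps infinitely many), then D/C is itself infinite and the two sit in different Archimedean classes.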
One way one can avoid the weaknesses of the system is by not pinning it down, and another place for the infinities to hide is in the infinitesimals. I have the feeling that the normalization is done slightly differently at different turns. Consider peace (don’t do anything) and punch (punch 1 person). As a separate problem this is no biggie. Then consider dust (throw sand at infinitely many people) and injury (throw sand at infinitely many people and punch 1 person). Here an ad hoc analysis might choose a clear winner. Then consider the combined problem where you have all the options: peace, punch, dust and insult. Having one analysis that gets applied to all options equally will run into trouble. If the analysis is somehow “turn options into real probabilities”, then problems with infinitesimals are likely to crop up.
The structural reason is that 2 Archimedean fields can’t be compressed into 1. The problems would be that methods that differentiate between throwing sand or not would gloss over punching or not, and methods that differentiate between punching or not would blow up when considering sanding or not. Now my preferred answer would be “use infinitesimal probabilities as real entities”, but then I am using something more powerful than real probabilities. That the probabilities are in the range of 0 to 1 doesn’t make it “easy” for reals to cope with them. The problems would manifest in being systematically able to assign different numbers. There could be the “highest impact only” problem of neglecting any smaller-scale impact, which would assign dust and insult the same number. There could be the “modulo infinity” failure mode where peace and dust get the same number. “One class only” would fail to give numbers for one of the subproblems.
By one logic, because we prefer B to A, if we “acausalize” this we should still preserve this preference (because “the amount of copies granted” would seem to be even-handed), so we would expect to prefer D to C. However, in a system where all infinities are of equal size, C = D and we become ambivalent between the options.
We shouldn’t necessarily prefer D to C. Remember that one of the main things you can do to increase the moral value of the universe is to try to causally help other creatures, so that other people who are in sufficiently similar circumstances to you will also help, so you acausally make them help others. Suppose you instead have the option to causally help all of the agents that would have been acausally helped if you just causally helped one agent. Then the AI shouldn’t prefer D to C, because the results are identical.
Here an ad hoc analysis might choose a clear winner. Then consider the combined problem where you have all the options: peace, punch, dust and insult. Having one analysis that gets applied to all options equally will run into trouble. If the analysis is somehow “turn options into real probabilities”, then problems with infinitesimals are likely to crop up.
Could you explain how this would cause problems? If those are the options, it seems like a clear-cut case of my ethical system recommending peace, unless there is some benefit to punching, insulting, or throwing sand you haven’t mentioned.
To see why, if you decide to throw sand, you’re decreasing the satisfaction of agents in situations of the form “Can get sand thrown at them from someone just like Slider”. This would in general decrease the moral value of the world, so my system wouldn’t recommend it. The same reasoning can show that the system wouldn’t recommend punching or insulting.
There could be the “modulo infinity” failure mode where peace and dust get the same number. “One class only” would fail to give numbers for one of the subproblems.
Interesting. Could you elaborate?
I’m not really clear on why you are worried about these different classes. Remember that any action you take will, at least acausally, help a countably infinite number of agents. Similarly, I think all your actions will have some real-valued effect on the moral value of the universe. To see why, just note that as long as you help one agent, you increase the expected satisfaction of agents in situations of the form, “<description of the circumstances of the above agent> who can be helped by someone just like Slider”. This situation has finite complexity, and thus real and non-zero probability. And the moral value of the universe is capped at whatever bound the life-satisfaction measure has, so you can’t have infinite increases to the moral value of the universe, either.
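As a rough sketch of why the effect is real-valued (the notation here is just my shorthand, and I’m holding the expected satisfaction under every other description fixed):

Let d = “<description of the circumstances of the above agent> who can be helped by someone just like Slider”. Then the change in the moral value of the universe is
ΔV = P(d | being in this universe) * ΔE[life satisfaction | d],
where P(d | being in this universe) is real and non-zero (finite description, not ruled out by evidence) and ΔE[life satisfaction | d] is a real change in a bounded quantity, so ΔV is real, non-zero and non-infinitesimal.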
You can’t causally help people without also acausally helping in the same go. Your acausal “influence” forces people matching your description to act the same. Even if it is possible to consider the directly helped and the indirectly helped to be the same, they could also be different. In order to be fair we should also extend this to C. What if the person helped by all the acausal copies is in fact the same person? (If there is a proof that it can’t be, why doesn’t that apply when the patient group is large?)
The interactions are all supposed to be negative in peace, punch, dust and insult. The surprising thing to me would be if the system were ambivalent between sand and insult as bad ideas. If we don’t necessarily prefer D to C when helping, does it matter whether we torture our people a lot or a little, as it’s going to get infinity-saturated anyway?
The basic situation is that I have intuitions which I can’t formulate that well. I will try another route. Suppose I help one person, and there is either a finite or an infinite amount of people in my world. A finite impact over finite people leads to a real and finite kick. A finite impact over infinite people leads to an infinitesimal kick. Ah, but acausal copies of the finites! Yeah, but what about the acausal copies of the infinites? When I say “the world has finite or infinite people”, that is “within the description”: say that there are infinite people because I believe there are infinitely many stars. Then all the acausal copies of Sol are going to have their own “out there” stars. Acts that “help all the stars” and “all the stars as they could have been” are different. At least until we consider that any agent that decides to “help all the stars” will have acausal shadows “that could have been”. But still this consideration increases the impact on the multiverse (or keeps it the same if moving from a monoverse to a multiverse in the same step).
One way to slither out of this is to claim that world-predescription-expansion needs to be finite, that there are only finitely many configurations of stars before they start to repeat. Then we can drop “directly infinite” worlds, and all infinity comes from acausality. So there is no such thing as directly helping an infinite amount of people.
If I have real, non-zero impacts for an infinite amount of people, naively that would add up to a more-than-finite aggregate. Fine, we can renormalise the aggregate to be 1 with a division, but that will mean that a single agent’s weight in that average is going to be infinitesimal (and thus not real). If we acausalise then we should do so both for the numerator and the denominator. If we don’t acausalise the denominator then we should still acausalise the numerator even if we have finitely many patients (but then we end up with a more-than-finite kick). It is inconsistent if the nudges happen based on the bad luck of being “in the wrong weight class”.
The interactions are all supposed to be negative in peace, punch, dust and insult. The surprising thing to me would be if the system were ambivalent between sand and insult as bad ideas. If we don’t necessarily prefer D to C when helping, does it matter whether we torture our people a lot or a little, as it’s going to get infinity-saturated anyway?
Could you explain what insult is supposed to do? You didn’t say what in the previous comment. Does it causally hurt infinitely-many people?
Anyways, it seems to me that my system would not be ambivalent about whether you torture people a little or a lot. Let C be the class of finite descriptions of circumstances of agents in the universe that would get hurt a little or a lot if you decide to hurt them. The probability of an agent ending up in class C is non-zero. But if you decide to torture them a lot their expected life-satisfaction would be much lower than if you decide to torture them a little. Thus, the total moral value of the universe would be lower if you decide to torture a lot rather than a little.
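In sketch form (again just my shorthand):

E[satisfaction | C, tortured a lot] < E[satisfaction | C, tortured a little], and P(C) > 0,
so the contribution P(C) * E[satisfaction | C] to the moral value of the universe is strictly lower under “a lot”, while the other terms are unchanged. The system therefore strictly prefers the lesser torture, rather than being ambivalent.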
When I say “the world has finite or infinite people”, that is “within the description”: say that there are infinite people because I believe there are infinitely many stars. Then all the acausal copies of Sol are going to have their own “out there” stars. Acts that “help all the stars” and “all the stars as they could have been” are different. At least until we consider that any agent that decides to “help all the stars” will have acausal shadows “that could have been”. But still this consideration increases the impact on the multiverse (or keeps it the same if moving from a monoverse to a multiverse in the same step).
I can’t say I’m following you here. Specifically, how do you consider, “help all the stars” and “all the stars as they could have been” to be different? I thought, “help” meant, “make it better than it otherwise could have been”. I’m also not sure what counts as acausal shadows. I, alas, couldn’t find this phrase used anywhere else online.
If I have real, non-zero impacts for an infinite amount of people, naively that would add up to a more-than-finite aggregate.
Remember that my ethical system doesn’t aggregate anything across all agents in the universe. Instead, it merely considers finite descriptions of situations an agent could be in within the universe, and then aggregates the expected value of satisfaction in these situations, weighted by the probability of each, conditioning only on being in this universe.
There’s no way for this to be infinite. The probabilities of all the situations sum to 1 (they are assumed to be disjoint), and the measure of life satisfaction was said to be bounded.
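In symbols (a minimal restatement; S_max is just my name for that bound):

V = Σ_d P(d) * E[satisfaction | d], with Σ_d P(d) = 1 and |E[satisfaction | d]| ≤ S_max,
hence |V| ≤ Σ_d P(d) * S_max = S_max, which is finite.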
And remember, my system doesn’t first find your causal impact on the moral value of the universe and then somehow use this to find the acausal impact. Because in our universe, I think the causal impact will always be zero. Instead, it just directly worries about acausal impacts. And your acausal impact on the moral value of the universe will always be finite and non-infinitesimal.
Insult is when you do both punch and dust, i.e. make a negative impact on an infinite amount of people and an additional negative impact on a single person. If degree of torture matters, then dusting and punching the same person would be relevant. I guess the theory per se would treat it differently if the punched person was not one of the dusted ones.
“doesn’t aggregate anything”—“aggregates the expected value of satisfaction in these situations”
When we form the expectation of what is going to happen in the described situation, I imagine breaking it down into sad stories and good stories. The expectation sways upwards if there are more good stories and downwards if there are more bad stories. My life will turn out somehow, which can differ from my “storymates’” outcomes. I wasn’t trying to hit any special term, just referring to the cases the probabilities of the stories refer to.
Thanks for clearing some things up. There are still some things I don’t follow, though.
You said my system would be ambivalent between sand and insult. I just wanted to make sure I understand what you’re saying here. Is insult specifically throwing sand at the same people that get it thrown at in dust, with the same amount of sand thrown at them at the same throwing speed? If so, then it seems to me that my system would clearly prefer sand to insult. This is because there is some non-zero chance of an agent, conditioning only on being in this universe, being punched due to people like me choosing insult. This would make their satisfaction lower than it otherwise would be, thus decreasing the moral value of the universe if I chose insult over sand.
On the other hand, perhaps the number of people harmed by sand in “insult” would be lower than the number harmed by sand in “dust”. In this situation, my ethical system could potentially prefer insult over dust. This doesn’t seem like a bad thing to me, though, if it means you save some agents in certain agent-situation-descriptions from getting sand thrown at them.
Also, I’m wondering about your paragraph starting with, “The basic situation is that I have intuitions which I can’t formulate that well. I will try another route.” If I’m understanding it correctly, I think I more or less agree with what you said in that paragraph. But I’m having a hard time understanding the significance of it. Are you intending to show a potential problem with my ethical system using it? The paragraph after it makes it seem like you were, but I’m not really sure.
Yes, insult is supposed to add to the injury.

Under my error model you run into trouble when you treat any transfinite amount the same. From that perspective, recognising two transfinite amounts that could be different is progress.
Another attempt to throw a situation you might not be able to handle: instead of 2 infinite groups of unknown relative size all receiving the same bad thing, as compensation for the abuse one group gets 1 slice of cake and the second group gets 2 slices of cake. Could there be a difference in the group sizes that perfectly balances the cake slice difference in order to keep the cake expectation constant?
An additional challenging situation: instead of giving 1 or 2 slices of cake, say that each slice is 3 cm wide, so the original choices are between 3 cm of cake and 6 cm of cake. Now take some custom amount of cake slice (say 2.7 cm), then determine what group size would keep the world cake expectation the same. Then add 1 person to that group. Then convert that back to a cake slice width that keeps the cake expectation the same. How wide is the slice? Another formulation of the same challenge: Define a real number r for which converting that to a group size would get you a group of 5 people.
Did you get on board about the difference between “help all the stars” and “all the stars as they could have been”?
Under my error model you run into trouble when you treat any transfinite amount the same. From that perspective, recognising two transfinite amounts that could be different is progress.
I guess this is the part I don’t really understand. My infinite ethical system doesn’t even think about transfinite quantities. It only considers the prior probability over ending up in situations, which is always real-valued. I’m not saying you’re wrong, of course, but I still can’t see any clear problem.
Another attempt to throw a situation you might not be able to handle: instead of 2 infinite groups of unknown relative size all receiving the same bad thing, as compensation for the abuse one group gets 1 slice of cake and the second group gets 2 slices of cake. Could there be a difference in the group sizes that perfectly balances the cake slice difference in order to keep the cake expectation constant?
Are you asking if there is a way to simultaneously change the group size as well as change the relative amount of cake for each group so the expected number of cakes received is constant?
If this is what you mean, then my system can deal with this. First off, remember that my system doesn’t worry about the number of agents in a group, but instead merely cares about the probability of an agent ending up in that group, conditioning only on being in this universe.
By changing the group size, however you define it, you can affect the probability of you ending up in that group. To see why, suppose you can do something to add any agents in a certain situation-description into the group. Well, as long as this situation has a finite description length, the probability of ending up in that situation is non-zero, and thus adding or removing them changes the probability of you ending up in that group by a non-zero amount.
So, currently, the expected value of cake received from these situations is P(in first group) * 1 + P(in second group) * 2. (For simplicity, I’m assuming no one else in the universe gets cake.) So, if you increase the probability of being in the second group by u, you just need to decrease P(in first group) by 2u to keep the expectation constant (with the displaced probability going to situations in which no cake is received).
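A worked toy example (numbers made up purely for illustration):

Say P(in first group) = 0.10 and P(in second group) = 0.05, so the expected cake is 0.10 * 1 + 0.05 * 2 = 0.20.
If the second group grows so that its probability rises by u = 0.01, the expectation would rise by 2u = 0.02, unless P(in first group) drops by 2u from 0.10 to 0.08; then 0.08 * 1 + 0.06 * 2 = 0.20 again.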
An additional challenging situation: instead of giving 1 or 2 slices of cake, say that each slice is 3 cm wide, so the original choices are between 3 cm of cake and 6 cm of cake. Now take some custom amount of cake slice (say 2.7 cm), then determine what group size would keep the world cake expectation the same. Then add 1 person to that group. Then convert that back to a cake slice width that keeps the cake expectation the same. How wide is the slice?
If literally only one more person gets cake, even considering acausal effects, then this would in general not affect the expected value of cake. So the slice would still be 2.7 cm.
Now, perhaps you meant that you directly cause one more person to get cake, resulting acausally in infinitely-many others getting cake. If so, then here’s my reasoning:
Previously, the expected value of cake received from these situations was P(in first group) * 1 + P(in second group) * 2. Since the cake size is non-constant, let’s add a variable to this. So let’s use P(in first group) * u + P(in second group) * 2. I’m assuming only the 1-slice group gets its cake amount adjusted; you can generalize beyond this. u represents the amount of cake the first group gets, with one 3 cm slice being represented as 1.
Suppose adding the extra person acausally results in an increase in the probability of ending up in the first group by ϵ. So then, letting P_old denote the old probability of being in the first group, to avoid changing the expected value of cake we need P_old * 1 = (P_old + ϵ) * u.
Solve that, and you get u = P_old / (P_old + ϵ). Just plug in the exact numbers for how much adding the person changes the probability of ending up in the group, and you can get an exact slice width.
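For concreteness (made-up numbers again):

If P_old = 0.10 and adding the person raises the probability by ϵ = 0.002, then u = 0.10 / 0.102 ≈ 0.98, i.e. a slice of about 0.98 * 3 cm ≈ 2.94 cm instead of the full 3 cm.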
Another formulation of the same challenge: Define a real number r for which converting that to a group size would get you a group of 5 people.
I’m not sure what you mean here. What does it mean to convert a real number to a group size? One trivial way to interpret this is that the answer is 5: if you convert 5 to a group size, I guess(?) that means a group of five people. So, there you go, the answer would be 5. I take it this isn’t what you meant, though.
Did you get on board about the difference between “help all the stars” and “all the stars as they could have been”?

No, I’m still not sure what you mean by this.
In P_old * 1 = (P_old + ϵ) * u, the epsilon is smaller than any real number, and there is no real small enough to characterise the difference between 1 and u.
If you have some odds or expectations that deal with groups, and you have other considerations that deal with a finite amount of individuals, then either the finite people don’t impact the probabilities at all, or the probabilities stay infinitesimally close (for which I see a ~ b being used as I read up on infinities), which will conflict with the desideratum of
Avoiding the fanaticism problem. Remedies that assign lexical priority to infinite goods may have strongly counterintuitive consequences.
Usually, lexical priorities enter the picture because of something large, but in your system there is a lexical priority because of something small: distinctions so faint that they become separable from the “big league” issues.
In P_old * 1 = (P_old + ϵ) * u, the epsilon is smaller than any real number, and there is no real small enough to characterise the difference between 1 and u.
Could you explain why you think so? I had already explained why ϵ would be real, so I’m wondering if you had an issue with my reasoning. To quote my past self:
Remember that if you decide to take a certain action, that implies that other agents who are sufficiently similar to you and in sufficiently similar circumstances also take that action. Thus, you can acausally have non-infinitesimal impact on the satisfaction of agents in situations of the form, “An agent in a world with someone just like Slider who is also in very similar circumstances to Slider’s.” The above scenario is of finite complexity and isn’t ruled out by evidence. Thus, the probability of an agent ending up in such a situation, conditioning only on being some agent in this universe, is nonzero [and non-infinitesimal].
If you have some odds or expectations that deal with groups, and you have other considerations that deal with a finite amount of individuals, then either the finite people don’t impact the probabilities at all, or the probabilities stay infinitesimally close (for which I see a ~ b being used as I read up on infinities), which will conflict with the desideratum...
Just to remind you, my ethical system basically never needs to worry about finite impacts. My ethical system doesn’t worry about causal impacts, except to the extent that they inform you about the total acausal impact of your actions on the moral value of the universe. Everything you do has infinite acausal impact (it affects infinitely many agents), and that is all my system needs to consider. To use my ethical system, you don’t even need a notion of causal impact at all.