It’s not (necessarily) about dust specks accidentally leading to major accidents. But if you think that having a dust speck in your eye may be even slightly annoying (whether you consciously know that or not), then the cost to you of having it fly into your eye is not zero.
And anything not zero, multiplied by a sufficiently large number, will necessarily be larger than the cost of one human being’s life in torture.
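A minimal sketch of the additive model assumed in that claim; the specific numbers below are placeholders, not estimates of real disutilities:

```python
# Hypothetical numbers only: a linear (additive) cost model, under which any
# nonzero per-person cost eventually dominates any fixed finite cost.

def total_cost(per_person_cost: float, num_people: float) -> float:
    """Harms simply add across people."""
    return per_person_cost * num_people

dust_speck_cost = 1e-9   # placeholder: tiny annoyance per person
torture_cost = 1e7       # placeholder: fixed cost of one person tortured

# 3^^^3 is unimaginably larger than this, but even 10^17 people already suffices.
print(total_cost(dust_speck_cost, 1e17) > torture_cost)  # True
```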
Now you are getting it completely wrong. You can’t add up harm from dust specks if it is happening to different people. Every individual has the capability to recover from it. Think about it. With that logic it is worse to rip a hair from every living being in the universe than to nuke New York. If people in charge reasoned that way we might have Armageddon in no time.
If each human death has only finite cost (we certainly act this way in our everyday lives, exchanging human lives for the convenience of driving around in cars, etc.),
and if by “our universe” you do not mean only the observable universe, but include the Level I multiverse,
then yes, that is the whole point. A tiny amount of suffering multiplied by a sufficiently large number is obviously, eventually, larger than the fixed cost of nuking New York.
Unless you can tell me why my model for the costs of suffering distributed over multiple people is wrong, I don’t see why I should change it. “I don’t like the conclusions!!!” is not a valid objection.
If people in charge reasoned that way we might have Armageddon in no time.
If they ever justifiably start to reason that way, i.e. if they actually have the power to rip a hair from every living human being, I think we’ll have larger problems than the potential nuking of New York.
Okay, I was trying to learn from this post, but now I see that I have to try to explain stuff myself in order for this communication to become useful. When it comes to pain it is hard to explain why one person’s great suffering is worse than many people suffering very, very little if you don’t understand it by yourself. So let us change the currency from pain to money.
Let’s say that you and I need to fund a large algae plantation in order to let the Earth’s population escape starvation due to lack of food. This project is of great importance for the whole world, so we can force anyone to become a sponsor, and this is good because we need the money FAST. We work for the whole world (read: Earth) and we want to minimize the damage from our actions. This project is really expensive, however… Should we:
a) Take one dollar from every person around the world earning at least a minimum wage, who can still afford housing, food, etc. even if we take that one dollar?
or should we
b) Take all the money (instantly) from Denmark and watch it break down in bankruptcy?
Asking me, it is obvious that we don’t want Denmark to go bankrupt just because it may annoy some people that they have to sacrifice 1 dollar.
The trouble is that there is a continuous sequence from
Take $1 from everyone
Take $1.01 from almost everyone
Take $1.02 from almost almost everyone
...
Take a lot of money from very few people (Denmark)
If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly. You will have to say, for instance, taking $20 each from 1⁄20 the population of the world is good, but taking $20.01 each from slightly less than 1⁄10 the population of the world is bad. Can you say that?
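For concreteness, here is one way to spell out such a sequence, holding the total amount collected fixed while moving from “a little from everyone” toward “a lot from very few”; the population figure is rough and the dollar amounts are purely illustrative:

```python
# Illustrative sketch: every step raises the same total, and neighbouring
# steps differ only slightly, yet the endpoints look morally very different.

TOTAL_NEEDED = 8_000_000_000.0   # e.g. roughly $1 from everyone on Earth

for amount in (1.00, 1.01, 1.02, 20.00, 20.01, 1_000_000.00):
    people = TOTAL_NEEDED / amount
    print(f"take ${amount:,.2f} each from about {people:,.0f} people")
```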
You will have to say, for instance, taking $20 each from 1⁄20 the population of the world is good, but taking $20.01 each from slightly less than 1⁄10 the population of the world is bad. (emphasis mine)
Typo here?
If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.
I think my last response starting with YES got lost somehow, so I will clarify here. I don’t follow the sequence because I don’t know where the critical limit is. Why? Because the critical limit depends on other factors which I can’t foresee. Read up on basic global economy. But YES, in theory I can take a little money from everyone without ruining a single one of them since it balances out, but if I take a lot of money from one person I make him poor. That is how economics works: you can recover from small losses easily, while some losses are too big to ever recover from, hence why some banks go bankrupt sometimes. And pain is similar, since I can recover from a dust speck in my eye, but not from being tortured for 50 years. The dust specks are not permanent sacrifices. If they were, I agree that they could stack up.
I don’t follow the sequence because I don’t know where the critical limit is.
You may not know exactly where the limit is, but the point isn’t that the limit is at some exact number, the point is that there is a limit. There’s some point where your reasoning makes you go from good to bad even though the change is very small. Do you accept that such a limit exists, even though you may not know exactly where it is?
Yes I do.
So you recognize that your original statement about $1 versus bankruptcy also forces you to make the same conclusion about $20.00 versus $20.01 (or whatever the actual number is, since you don’t know it).
But making the conclusion about $20.00 versus $20.01 is much harder to justify. Can you justify it? You have to be able to, since it is implied by your original statement.
No, I don’t have to make the same conclusion about $20.00 versus $20.01. I left a safety margin when I said 1 dollar, since I don’t want to follow the sequence but am very, very sure that 1 dollar is a safe number. I don’t know exactly how much I can risk taking from a random individual before I risk ruining him, but if I take only one dollar from a person who can afford a house and food, I am pretty safe.
No, I don’t have to make the same conclusion about $20.00 versus $20.01
Yes, you do. You just admitted it, although the number might not be 20. And whether you admit it or not it logically follows from what you said up above.
Maybe I didn’t understand you the first time.
You will have to say, for instance, taking $20 each from 1⁄20 the population of the world is good, but taking $20.01 each from slightly less than 1⁄10 the population of the world is bad. Can you say that?
To answer that: well, yes, it MIGHT be the case, I don’t know; therefore I only ask for 1 dollar. Is that making it any clearer?
Your belief about $1 versus bankruptcy logically implies a similar belief about $20.00 versus $20.01 (or whatever the actual numbers are). You can’t just answer that that “might” be the case—if your original belief is as described, that is the case. You have to be willing to defend the logical consequence of what you said, not just defend the exact words that you said.
What do you mean by “whatever the actual numbers are”? Numbers for what? For the amount it takes to ruin someone? As long as the individual donations don’t ruin the donors, I accept a higher donation from a smaller population. Is that what you mean?
I just wrote 20 because I have to write something, but there is a number. This number has a value, even if you don’t know it. Pretend I put the real number there instead of 20.
Yes, but still, what number? IF it is as I already suggested, the number for the amount of money that can be taken without ruining anyone, then I agree that we could take that amount of money instead of 1 dollar.
I don’t think you understand.
Your original statement about $1 versus bankruptcy logically implies that there is a number such that it is okay to take exactly that amount of money from a certain number of people, but wrong to take a very tiny amount more. Even though you don’t know exactly what this number is, you know that it exists. Because this number is a logical consequence of what you said, you must be able to justify having such a number.
Yes, in my last comment I agreed to it. There is such a number. I don’t think you understand my reasons why, which I already explained. It is wrong to take a tiny amount more, since that will ruin them. I can’t know exactly what that amount is, since the global and local economy isn’t that stable. Tapping out.
the number for the amount of money that can be taken without ruining anyone
So you’re saying there exists such a number, such that taking that amount of money from someone wouldn’t ruin them, but taking that amount plus a tiny bit more (say, 1 cent) would?
YES, because that is how economics works! You can’t take a lot of money from ONE person without him getting poor, but you CAN take money from a lot of people without ruining them! Money is a circulating resource, and just like pain, you can recover from small losses after a time.
If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.
If you think that 100C water is hot and 0C water is cold, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.
My opinion would change gradually between 100 degrees and 0 degrees. Either I would use qualifiers so that there is no abrupt transition, or else I would consider something to be hot in a set of situations and the size of that set would decrease gradually.
No, because temperature is (very close to) a continuum, whereas good/bad is a binary. To see this more clearly, you can replace the question, “Is this action good or bad?” with “Would an omniscient, moral person choose to take this action?”, and you can instantly see the answer can only be “yes” (good) or “no” (bad).
(Of course, it’s not always clear which choice the answer is—hence why so many argue over it—but the answer has to be, in principle, either “yes” or “no”.)
No, because temperature is (very close to) a continuum, whereas good/bad is a binary.
First, I’m not talking about temperature, but about categories “hot” and “cold”.
Second, why in the world would good/bad be binary?
“Would an omniscient, moral person choose to take this action?”
I have no idea—I don’t know what an omniscient person (aka God) will do, and in any case the answer is likely to be “depends on which morality we are talking about”.
Oh, and would an omniscient being call that water hot or cold?
First, I’m not talking about temperature, but about categories “hot” and “cold”.
You’ll need to define your terms for that, then. (And for the record, I don’t use the words “hot” and “cold” exclusively; I also use terms like “warm” or “cool” or “this might be a great temperature for a swimming pool, but it’s horrible for tea”.)
Also, if you weren’t talking about temperature, why bother mentioning degrees Celsius when talking about “hotness” and “coldness”? Clearly temperature has something to do with it, or else you wouldn’t have mentioned it, right?
Second, why in the world would good/bad be binary?
Because you can always replace a question of goodness with the question “Would an omniscient, moral person choose to take this action?”.
I have no idea—I don’t know what an omniscient person (aka God) will do,
Just because you have no idea what the answer could be doesn’t mean the true answer can fall outside the possible space of answers. For instance, you can’t answer the question of “Would an omniscient moral reasoner choose to take this action?” with something like “fish”, because that falls outside of the answer space. In fact, there are only two possible answers: “yes” or “no”. It might be one; it might be the other, but my original point was that the answer to the question is guaranteed to be either “yes” or “no”, and that holds true even if you don’t know what the answer is.
the answer is likely to be “depends on which morality we are talking about”
There is only one “morality” as far as this discussion is concerned. There might be other “moralities” held by aliens or whatever, but the human CEV is just that: the human CEV. I don’t care about what the Babyeaters think is “moral”, or the Pebblesorters, or any other alien species you care to substitute—I am human, and so are the other participants in this discussion. The answer to the question “which morality are we talking about?” is presupposed by the context of the discussion. If this thread included, say, Clippy, then your answer would be a valid one (although even then, I’d rather talk game theory with Clippy than morality—it’s far more likely to get me somewhere with him/her/it), but as it is, it just seems like a rather unsubtle attempt to dodge the question.
In fact, there are only two possible answers: “yes” or “no”
I don’t think so.
You’re making a circular argument—good/bad is binary because there are only two possible states. I do not agree that there are only two possible states.
There is only one “morality” for the participants of this discussion.
Really? Either I’m not a participant in this discussion or you’re wrong. See: a binary outcome :-D
but the human CEV is just that: the human CEV
I have no idea what the human CEV is and even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.
You’re making a circular argument—good/bad is binary because there are only two possible states. I do not agree that there are only two possible states.
Name a third alternative that is actually an answer, as opposed to some sort of evasion (“it depends”), and I’ll concede the point.
Also, I’m aware that this isn’t your main point, but… how is the argument circular? I’m not saying something like, “It’s binary, therefore there are two possible states, therefore it’s binary”; I’m just saying “There are two possible states, therefore it’s binary.”
Either I’m not a participant in this discussion or you’re wrong. See: a binary outcome :-D
Are you human? (y/n)
I have no idea what the human CEV is and even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.
Which part do you object to? The “coherent” part, the “extrapolated” part, or the “volition” part?
Name a third alternative that is actually an answer
“Doesn’t matter”.
First of all you’re ignoring the existence of morally neutral questions. Should I scratch my butt? Lessee, would an omniscient perfectly moral being scratch his/her/its butt? Oh dear, I think we’re in trouble now… X-D
Second, you’re assuming atomicity of actions and that’s a bad assumption. In your world actions are very limited—they can be done or not done, but they cannot be done partially, they cannot be slightly modified or just done in a few different ways.
Third, you’re assuming away the uncertainty of the future and that also is a bad assumption. Proper actions for an omniscient being can very well be different from proper actions for someone who has to face uncertainty with respect to consequences.
Fourth, for the great majority of dilemmas in life (e.g. “Should I take this job?”, “Should I marry him/her?”, “Should I buy a new phone?”) the answer “what an omniscient moral being would choose” is perfectly useless.
Which part do you object to?
The concept of CEV seems to me to be the direct equivalent of “God’s will”—handwaveable in any direction you wish while retaining enough vagueness to make specific discussions difficult or pretty much impossible. I think my biggest objection is to the “coherent” part while also having great doubts about the “extrapolated” part as well.
would an omniscient perfectly moral being scratch his/her/its butt?
(Side note: this conversation is taking a rather strange turn, but whatever.)
If its butt feels itchy, and it would prefer for its butt to not feel itchy, and the best way to make its butt not feel itchy is to scratch it, and there are no external moral consequences to its decision (like, say, someone threatening to kill 3^^^3 people iff it scratches its butt)… well, it’s increasing its own utility by scratching its butt, isn’t it? If it increases its own utility by doing so and doesn’t decrease net utility elsewhere, then that’s a net increase in utility. Scratch away, I say.
Second, you’re assuming atomicity of actions and that’s a bad assumption. In your world actions are very limited—they can be done or not done, but they cannot be done partially, they cannot be slightly modified or just done in a few different ways.
Sure. I agree I did just handwave a lot of stuff with respect to what an “action” is… but would you agree that, conditional on having a good definition of “action”, we can evaluate “actions” morally? (Moral by human standards, of course, not Pebblesorter standards.)
Third, you’re assuming away the uncertainty of the future and that also is a bad assumption. Proper actions for an omniscient being can very well be different from proper actions for someone who has to face uncertainty with respect to consequences.
Agreed, but if you come up with a way to make good/moral decisions in the idealized situation of omniscience, you can generalize to uncertain situations simply by applying probability theory.
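A toy sketch of what “applying probability theory” could look like here; the actions, probabilities, and utilities are all invented for illustration:

```python
# Toy expected-utility rule: without omniscience, rank each action by the
# probability-weighted average of its possible utilities.

actions = {
    "action_a": [(0.9, 10.0), (0.1, -100.0)],  # (probability, utility) pairs
    "action_b": [(1.0, 1.0)],                  # a safe, certain outcome
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda name: expected_utility(actions[name]))
print(best)  # "action_b": the gamble has expected utility -1, the sure thing 1
```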
Fourth, for the great majority of dilemmas in life (e.g. “Should I take this job?”, “Should I marry him/her?”, “Should I buy a new phone?”) the answer “what an omniscient moral being would choose” is perfectly useless.
Again, I agree… but then, knowledge of the Banach-Tarski paradox isn’t of much use to most people.
The concept of CEV seems to me to be the direct equivalent of “God’s will”—handwaveable in any direction you wish while retaining enough vagueness to make specific discussions difficult or pretty much impossible. I think my biggest objection is to the “coherent” part while also having great doubts about the “extrapolated” part as well.
Fair enough. I don’t have enough domain expertise to really analyze your position in depth, but at a glance, it seems reasonable.
The assumption that morality boils down to utility is a rather huge assumption :-)
would you agree that, conditional on having a good definition of “action”, we can evaluate “actions” morally?
Conditional on having a good definition of “action” and on having a good definition of “morally”.
you can generalize to uncertain situations simply by applying probability theory
I don’t think so, at least not “simply”. An omniscient being has no risk and no risk aversion, for example.
isn’t of much use to most people
Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries… :-)
The assumption that morality boils down to utility is a rather huge assumption :-)
It’s not an assumption; it’s a normative statement I choose to endorse. If you have some other system, feel free to endorse that… but then we’ll be discussing morality, and not meta-morality or whatever system originally produced your objection to Jiro’s distinction between good and bad.
on having a good definition of “morally”
Agree.
An omniscient being has no risk and no risk aversion, for example.
Well, it could have risk aversion. It’s just that risk aversion never comes into play during its decision-making process due to its omniscience. Strip away that omniscience, and risk aversion very well might rear its head.
Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries… :-)
I disagree. Take the following two statements:
Morality, properly formalized, would be useful for practical purposes.
Morality is not currently properly formalized.
There is no contradiction in these two statements.
But they have a consequence: Morality currently is not useful for practical purposes.
That’s… an interesting position. Are you willing to live with it? X-)
To see this more clearly, you can replace the question, “Is this action good or bad?” with “Would an omniscient, moral person choose to take this action?”, and you can instantly see the answer can only be “yes” (good) or “no” (bad).
You can, of course, define morality in this particular way, but why would you do that? By that definition, almost all actions are bad.
True. I’m not sure why that matters, though. It seems trivially obvious to me that a random action selected out of the set of all possible actions would have an overwhelming probability of being bad. But most agents don’t select actions randomly, so that doesn’t seem to be a problem. After all, the key aspect of intelligence is that it allows you to hit extremely tiny targets in configuration space; the fact that most configurations of particles don’t give you a car doesn’t prevent human engineers from making cars. Why would the fact that most actions are bad prevent you from choosing a good one?
Also, why the heck do you think there exist words for “better” and “worse”?
Those are relative terms, meant to compare one action to another. That doesn’t mean you can’t classify an action as “good” or “bad”; for instance, if I decided to randomly select and kill 10 people today, that would be a unilaterally bad action, even if it would theoretically be “worse” if I decided to kill 11 people instead of 10. The difference between the two is like the difference between asking “Is this number bigger than that number?” and “Is this number positive or negative?”.
In this case I do not disagree with you. The number of people on earth is simply not large enough.
But if you asked me whether to take money from 3^^^3 people compared to throwing Denmark into bankruptcy, I would choose the latter.
Math should override intuition. So unless you give me a model that you can convince me of that is more reasonable than adding up costs/utilities, I don’t think you will change my mind.
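Since this is the crux of the disagreement, here is a small sketch contrasting the additive model defended above with one purely illustrative way of formalizing the opposing “recoverable harms don’t stack” intuition; all numbers are placeholders:

```python
# Two toy aggregation rules for "n people each suffer a harm of size h".

def linear_total(h: float, n: float) -> float:
    return h * n                    # grows without bound as n grows

def bounded_total(h: float, n: float) -> float:
    return h * n / (1.0 + h * n)    # saturates below 1, however large n gets

SPECK = 1e-9      # placeholder cost of one dust speck
TORTURE = 1e6     # placeholder fixed cost of torturing one person

for n in (1e9, 1e18, 1e30):
    print(n, linear_total(SPECK, n) > TORTURE, bounded_total(SPECK, n) > TORTURE)
# Under linear aggregation the specks eventually outweigh the torture;
# under the bounded rule they never do.
```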
Now I see what is fundamentally wrong with the article and your reasoning, from MY perspective. You don’t seem to understand the difference between a permanent sacrifice and a temporary one.
If we substitute index fingers for the dust specks, for example, I agree that it is reasonable to think that killing one person is far better than having 3 billion (we don’t need 3^^^3 for this one) persons lose their index fingers, because that is a permanent sacrifice. At least for now, we can’t make fingers grow back just like that. Getting dust in your eye, on the other hand, is only temporary. You will get over it real quick and forget all about it. But 50 years of torture is something that you will never fully heal from, and it will ruin a person’s life and cause permanent damage.
That’s ridiculous. So mild pains don’t count if they’re done to many different people?
Let’s give a more obvious example. It’s better to kill one person than to amputate the right hands of 5000 people, because the total pain will be less.
Scaling down, we can say that it’s better to amputate the right hands of 50,000 people than to torture one person to death, because the total pain will be less.
Keep repeating this in your head (see how consistent it feels, how it makes sense).
Now just extrapolate to the instance that it’s better to have 3^^^3 people have dust specks in their eyes than to torture one person to death, because the total pain will be less. The hair-ripping argument isn’t good enough, because (number of people on Earth) × (pain from a hair rip) < (number of people in New York) × (pain of being nuked). The math doesn’t add up in your straw-man example, unlike with the actual example given.
As a side note, you are also appealing to consequences.
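Making that inequality explicit, with rough population figures (about 8 × 10^9 people on Earth and about 8 × 10^6 in New York City) and writing h for the per-person harm:

$$N_{\text{Earth}} \cdot h_{\text{hair}} < N_{\text{NY}} \cdot h_{\text{nuke}} \iff \frac{h_{\text{nuke}}}{h_{\text{hair}}} > \frac{N_{\text{Earth}}}{N_{\text{NY}}} \approx 10^{3}.$$

So on the additive model the straw-man example fails only because being nuked is plausibly far more than a thousand times as bad per person as a hair rip, whereas with 3^^^3 dust specks no plausible finite ratio between the harms keeps the specks’ total below the torture’s.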
(number of people on Earth) × (pain from a hair rip) < (number of people in New York) × (pain of being nuked)
I think Okeymaker was actually referring to all the people in the universe. While the number of “people” in the universe (defining a “person” as a conscious mind) isn’t a known number, let’s do as blossom does and assume Okeymaker was referring to the Level I multiverse. In that case, the calculation isn’t nearly as clear-cut. (That being said, if I were considering a hypothetical like that, I would simply modus ponens Okeymaker’s modus tollens and reply that I would prefer to nuke New York.)
Now, do you have any actual argument as to why the ‘badness’ function computed over a box containing two persons with a dust speck is exactly twice the badness of a box containing one person with a dust speck, all the way up to very large numbers (when you may even have exhausted the number of possible distinct people)?
I don’t think you do. This is why this stuff strikes me as pseudomath. You don’t even state your premises let alone justify them.
You’re right, I don’t. And I do not really need it in this case.
What I need is a cost function C(e, n), where e is some event and n is the number of people being subjected to said event (i.e. everyone gets their own instance), such that for some fixed ε > 0 and for every n there is an m with C(e, n+m) > C(e, n) + ε. I guess we can limit e to “torture for 50 years” and “dust specks” so that this makes sense at all.
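Spelled out with explicit quantifiers (using the order specified a few comments further down, where ε is fixed before n and m), the requirement reads:

$$\exists\, \varepsilon > 0 \;\; \forall n \;\; \exists m : \quad C(e, n + m) > C(e, n) + \varepsilon.$$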
The reason why I would want to have such a cost function is because I believe that it should be more than infinitesimally worse for 3^^^^3 people to suffer than for 3^^^3 people to suffer. I don’t think there should ever be a point where you can go “Meh, not much of a big deal, no matter how many more people suffer.”
If however the number of possible distinct people should be finite—even after taking into account level II and level III multiverses—due to discreteness of space and discreteness of permitted physical constants, then yes, this is all null and void. But I currently have no particular reason to believe that there should be such a bound, while I do have reason to believe that permitted physical constants should be from a non-discrete set.
Well, within the 3^^^3 people you have every single possible brain replicated a gazillion times already (there are only so many ways you can arrange the atoms in the volume of a human head so as to be computing something subjectively distinct, after all, and the number of such arrangements is unimaginably smaller than 3^^^3).
I don’t think that e.g. I must massively prioritize the happiness of a brain upload of me running on multiple redundant hardware (which subjectively feels the same as if it was running in one instance; it doesn’t feel any stronger because there’s more ‘copies’ of it running in perfect unison, it can’t even tell the difference. It won’t affect the subjective experience if the CPUs running the same computation are slightly physically different).
edit: also again, pseudomath, because you could have C(dustspeck, n) = 1 - 1/(n+1): your property holds, but the function is bounded, so if C(torture, 1) = 2 then you’ll never exceed it with dust specks.
Seriously, you people (the LW crowd in general) need to take more calculus or something before your mathematical intuitions become in any way relevant to anything whatsoever. It does feel intuitively as if, with your epsilon, it’s going to keep growing without limit, but that’s simply not true.
I consider entities in computationally distinct universes to also be distinct entities, even if the arrangements of their neurons are the same. If I have an infinite (or sufficiently large) set of physical constants such that in those universes human beings could emerge, I will also have enough human beings.
edit: also again, pseudomath, because you could have C(dustspeck, n) = 1 - 1/(n+1): your property holds, but the function is bounded, so if C(torture, 1) = 2 then you’ll never exceed it with dust specks.
No. I will always find a larger number which is at least ε greater. I fixed ε before I talked about n and m. So I find numbers m_1, m_2, … such that C(dustspeck, m_j) > jε.
Besides which, even if I had somehow messed up, you’re not here (I hope) to score easy points because my mathematical formalization is flawed when it is perfectly obvious where I want to go.
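To spell the reply out, under the fixed-ε reading above, here is a short check that the property really does rule out bounded cost functions like the one proposed: fix ε > 0 and start from any n_0; applying the property repeatedly gives numbers n_0 < n_1 < n_2 < … with

$$C(e, n_j) > C(e, n_0) + j\,\varepsilon \longrightarrow \infty \quad (j \to \infty),$$

so any C satisfying it is unbounded. The bounded example C(n) = 1 - 1/(n+1) only satisfies the weaker variant in which ε is allowed to shrink as n grows.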
Well, in my view, some details of implementation of a computation are totally indiscernible ‘from the inside’ and thus make no difference to the subjective experiences, qualia, and the like.
I definitely don’t care whether there is 1 me, 3^^^3 copies of me, or 3^^^^3, or 3^^^^^^3, or an actual infinity (as the physics of our universe would suggest), where the copies think and perceive everything exactly the same over their lifetime. I’m not sure how counting copies as distinct would cope with an infinity of copies anyway. You have torture of infinitely many persons vs. dust specks for ∞·3^^^3 persons; then what?
Albeit it would be quite hilarious to see someone here pick up the idea and start arguing that because they’re ‘important’, there must be a lot of copies of them in the future, and thus they are rightfully a utility monster.