The actual reality does not have high-level objects such as nematodes or humans.
Before one could even consider the utility of a human's (or a nematode's) existence, one would have to have a function that somehow processes a bunch of laws of physics plus the state of a region of space, and tells us how happy or unhappy that region of space feels, what its value is, and so on.
What would be the properties of that function? Well, for one thing, the utility of a region of space would not generally be equal to the sum of the utilities of its parts, for the obvious reason that your head has greater utility when it hasn't been diced into perfect cubic blocks and then rearranged like a Rubik's cube.
This function could then be applied to a larger region of space containing nematodes and humans, and it would process that region in some way which would clearly differ from any variety of arithmetic utilitarianism that adds or averages the utilities of nematodes and humans, because, as established above, the function is not additive over regions of spacetime, and nematodes and humans are just regions of spacetime with specific stuff inside.
What I imagine that function would do is identify the existence of particular computational structures of interest in the region of space. There are many such structures inside a human head that do not exist in any region of space occupied by nematodes, which have a much smaller set of structures; extra nematodes do not add any new structures (unlike extra humans, who, due to distinct memories and the different ways their brains are arranged, do add new structures, roughly linearly up to a fairly large number).
So even a very large region of spacetime full of nematodes plus one human can have its utility decreased a lot more by random rearrangements of the atoms (quarks, or whatever the bottom level is; it does not matter) constituting the human than by random rearrangements of the atoms constituting the nematodes.
edit: that is, as long as there are enough nematodes to cover the entire nematode experience space (which is quite small), increases in their number won't add to the computational structure of the whole region; something that's not true for people, up to a very large number of people.
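To make the shape of that function concrete, here is a minimal toy sketch in Python (one possible reading of the above, not a claim about how such a function would actually work); `extract_structures`, `region_value`, and all the labels are invented purely for illustration:

```python
# Toy illustration only: value a region by the *set* of distinct computational
# structures it contains, rather than by summing a per-organism utility.
# "Structures" here are just string labels; nothing below models physics or minds.

def extract_structures(region: list[str]) -> set[str]:
    """Hypothetical structure-finder: returns the distinct structures in a region."""
    return set(region)

def region_value(region: list[str]) -> int:
    """Value = number of distinct structures present, so duplicates saturate."""
    return len(extract_structures(region))

# A nematode contributes the same few structures every time, so adding more
# nematodes stops mattering once the (small) nematode repertoire is covered,
# whereas a human with distinct memories adds structures nothing else has.
nematodes = ["nematode_reflex", "nematode_chemotaxis"] * 1000
one_human = ["alice_memories", "alice_language", "alice_planning"]

print(region_value(nematodes))              # 2: saturated, despite 2000 nematodes
print(region_value(nematodes + one_human))  # 5: the human adds genuinely new structures
```

Note that in this toy, region_value(a + b) is in general not region_value(a) + region_value(b), which is the non-additivity point above.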
The actual reality does not have high-level objects such as nematodes or humans.
Um… yes, it does. “Reality” doesn’t conceptualize them, but I, the agent analyzing the situation, do. I will have some function that looks at the underlying reality and partitions it into objects, and some other function that computes utility over those objects. These functions could be composed to give one big function from physics to utility. But that would be, epistemologically, backwards.
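As a minimal sketch of that factoring (all names here, `partition_into_objects`, `object_utility`, and the toy weights, are hypothetical and only illustrate the shape of the composition):

```python
# Hedged sketch of the agent-side factoring described above, with invented names.
# `raw_state` stands in for the low-level physical description; the agent supplies
# both the partition into objects and the valuation over those objects.

def partition_into_objects(raw_state: dict) -> list[str]:
    """The agent's ontology: carve the raw state into labelled objects."""
    return raw_state.get("objects", [])

def object_utility(objects: list[str]) -> float:
    """The agent's values, defined over the objects it recognizes."""
    weights = {"human": 10.0, "nematode": 1.0}
    return sum(weights.get(obj, 0.0) for obj in objects)

def utility_from_physics(raw_state: dict) -> float:
    """The composed map from 'physics' to utility: the composition exists,
    but both factors are supplied by the agent, not read off from reality."""
    return object_utility(partition_into_objects(raw_state))

print(utility_from_physics({"objects": ["human", "nematode", "nematode"]}))  # 12.0
```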
Before one could even consider the utility of a human's (or a nematode's) existence, one would have to have a function that somehow processes a bunch of laws of physics plus the state of a region of space, and tells us how happy or unhappy that region of space feels, what its value is, and so on.
No. Utility is a thing agents have. “Utility theory” is a thing you use to compute an agent’s desired action; it is therefore a thing that only intelligent agents have. Space doesn’t have utility. To quote (perhaps unfortunately) Žižek, space is literally the stupidest thing there is.
Before one could even consider the utility of a human's (or a nematode's) existence
No. Utility is a thing agents have.
‘one’ in that case refers to an agent who’s trying to value feelings that physical systems have.
I think there’s some linguistic confusion here. As an agent who values there being no enormous torture camp set up in a region of space, I’d need to have a utility function over space, one which gives the utility of that space.
‘one’ in that case refers to an agent who’s trying to value feelings that physical systems have.
I see what you’re doing, then. I’m thinking of a real-life limited agent like me, who has little idea how the inside of a nematode or human works. I have a model of each, and I make a guess at how to weigh them in my utility function based on observations of them. You’re thinking of an ideal agent that has a universal utility function that applies to arbitrary reality.
Still, though, the function is at least as likely to start its evaluation top-down (partitioning the world into objects) as bottom-up.
I don’t understand your overall point. It sounds to me like you’re taking a long way around to agreeing with me, yet phrasing it as if you disagreed.
I think (and private_messaging should feel free to correct me if I’m wrong) that what private_messaging is saying is, in effect, that before you can assign utilities to objects or worldstates or whatever, you’ve got to be able to recognize those objects/worldstates/whatever. I may value “humans”, but what is a “human”? Since the actual reality doesn’t have a “human” as an ontologically fundamental category—it simply computes the behavior of particles according to the laws of physics—the definition of the “human” which I assign utility to must be given by me. I’m not going to get the definition of a “human” from the universe itself.
Okay. I don’t understand his point, then. That doesn’t seem relevant to what I was saying.
What would be the properties of that function? Well, for one thing, the utility of a region of space would not generally be equal to the sum of the utilities of its parts, for the obvious reason that your head has greater utility when it hasn't been diced into perfect cubic blocks and then rearranged like a Rubik's cube.
I’m not entirely sure what the point of this comment was, but in that case, surely the problem occurs when said chunks die? I mean, if they magically kept working the same way, linking telepathically with the other chunks and processing information perfectly well, I don’t see why they wouldn’t be just as valuable, albeit rather grisly looking.
Finding out that the chunks will die (given the laws of physics as they are) is something that the function in question has to do. Likewise, finding out that with some magic they won't die, but that they would die if they weren't rearranged and that same magic were applied (portal-ing the blood all over the place).
You just keep jumping to making a utility that is computed from the labels you already assign to the world.
edit: one could also subdivide the region into very small pieces of space, and note that you can't compute any kind of utility of the whole by going over every piece in isolation and then summing.
edit2: to be exact, I am giving a counterexample to f(ab) = f(a) + f(b) (where "ab" is a concatenated with b): we can have f(ab) != f(ba) even though f(a) + f(b) = f(b) + f(a).
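Spelling that counterexample out (my reconstruction of the argument, using the same f and concatenation notation as above):

```latex
% Reconstruction of the edit2 argument; ab denotes region a concatenated with region b.
\[
  \text{If } f \text{ were additive, then } f(ab) = f(a) + f(b) = f(b) + f(a) = f(ba),
\]
\[
  \text{but rearranging the contents of a region can change its utility, so } f(ab) \neq f(ba)
  \text{ in general; hence } f \text{ cannot be additive over regions.}
\]
```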
More broadly, mathematics1 has been very useful in science, and so ethicists try to use mathematics2, where mathematics1 is a serious discipline in which one states assumptions and progresses formally, and mathematics2 is "there must be arithmetical operations involved" or even "it is some kind of Elvish". (Meanwhile, mathematics1 doesn't get you very far here, because we can't make many assumptions.)