I’m certain that ants do in fact have preferences, even if they can’t comprehend the concept of preferences in the abstract or apply them to counterfactual worlds. They have revealed preferences to quite an extent, as does pretty much everything I think of as an agent.
They might not be communicable, numerically expressible, or even consistent, which is part of the problem. When you’re doing the extrapolated satisfaction, how much of what you get reflects the actual agent and how much the choice of extrapolation procedure?
> I’m certain that ants do in fact have preferences, even if they can’t comprehend the concept of preferences in the abstract or apply them to counterfactual worlds. They have revealed preferences to quite an extent, as does pretty much everything I think of as an agent.
I think the question of whether insects have preferences is morally pretty important, so I’m interested in hearing what made you think they do have them.
I looked online for “do insects have preferences?”, and I saw articles saying they did. I couldn’t really figure out why the authors thought so, though.
For example, I read that insects have a preference for eating green leaves over red ones. But I’m not really sure how people could have known this. If you see ants go to green leaves instead of red leaves when they’re hungry, this doesn’t seem like it would necessarily be due to any actual preferences. For example, maybe the ant just executed something like this code:
```python
# Hard-coded stimulus-response rules, with no ordering over outcomes:
if near_green_leaf() and is_hungry():
    go_to_green_leaf()
elif near_red_leaf() and is_hungry():
    go_to_red_leaf()
else:
    ...
```
That doesn’t really look like actual preferences to me. But I suppose this to some extent comes down to how you want to define what counts as a preference. I took preferences to be orderings over possible worlds indicating which ones are more desirable. Did you have some other idea of what counts as preferences?
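To make concrete what I mean by an ordering, here’s a toy sketch (the particular worlds and the ranking are just made up for illustration):

```python
# A toy sketch of a preference as an ordering over possible worlds.
# The worlds and the ranking are invented purely for illustration.

# Earlier in the list = more desirable (a total ordering, for simplicity).
ranked_worlds = [
    "ant eats a green leaf",
    "ant eats a red leaf",
    "ant eats nothing",
]
rank = {world: i for i, world in enumerate(ranked_worlds)}

def prefers(world_a, world_b):
    """Return True if world_a is more desirable than world_b under this ordering."""
    return rank[world_a] < rank[world_b]

# The ordering applies even to counterfactual worlds the ant never encounters.
print(prefers("ant eats a green leaf", "ant eats a red leaf"))  # True
```

The hard-coded rules above don’t obviously encode anything like this, especially not over worlds the ant never actually encounters.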
> They might not be communicable, numerically expressible, or even consistent, which is part of the problem. When you’re doing the extrapolated satisfaction, how much of what you get reflects the actual agent and how much the choice of extrapolation procedure?
I agree that to some extent their extrapolated satisfactions will come down to the specifics of the extrapolation procedure.
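For instance (a deliberately artificial illustration; the choice data and both extrapolation procedures are made up), the same revealed choices can be extrapolated into different orderings depending on which procedure you pick:

```python
# Toy illustration: the same (somewhat inconsistent) revealed choices,
# extrapolated by two different made-up procedures, give different orderings.
from collections import Counter

# Observed pairwise choices: (chosen, rejected).
choices = [
    ("green leaf", "red leaf"),
    ("green leaf", "red leaf"),
    ("green leaf", "red leaf"),
    ("red leaf", "green leaf"),
    ("red leaf", "green leaf"),
    ("sugar", "green leaf"),
]

def extrapolate_by_wins(choices):
    """Rank options by how often they were chosen."""
    wins = Counter(chosen for chosen, _ in choices)
    return sorted(wins, key=wins.get, reverse=True)

def extrapolate_by_net_score(choices):
    """Rank options by times chosen minus times rejected."""
    score = Counter()
    for chosen, rejected in choices:
        score[chosen] += 1
        score[rejected] -= 1
    return sorted(score, key=score.get, reverse=True)

print(extrapolate_by_wins(choices))       # ['green leaf', 'red leaf', 'sugar']
print(extrapolate_by_net_score(choices))  # ['sugar', 'green leaf', 'red leaf']
```

Both procedures look reasonable on their face, but they disagree about what the agent “really” prefers, which I take to be your worry.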
I don’t want us to get too distracted here, though. I don’t have a rigorous, non-arbitrary specification of what an agent’s extrapolated preferences are. However, that isn’t the problem I was trying to solve, nor is it a problem specific to my ethical system. My system is intended to provide a method of coming to reasonable moral conclusions in an infinite universe, and it seems to me that it does so. But I’m very interested in any other thoughts you have on whether it correctly handles moral recommendations in infinite worlds. Does it seem reasonable to you? I’d like to make an actual post about this, with the clarifications we made included.