It’s not a model. It’s a moral question about a simplified agent.
Um, the first sentence says:
In this post, I’ll describe a simple agent, a toy model,
I’m trying to point out that because of the model’s anthropomorphic (man, I hate trying to spell that word) tendencies, it would be a bad idea to try to draw moral conclusions from it.
It’d be an argument from confusion, because it just substitutes two homunculi (yay, a word I hate spelling even worse) for a human being, instead of actually reducing anything.
A correctly reductive model of human behavior needs to take into account that there is very little besides language in human behavior that is unique to humans… and that means that most of what we’re doing can be done by animals lacking in sentience. It would therefore be a grave error to conceive of the “slave” as an individual, rather than as a relatively minor set of add-on modules.
The question of whose preferences are “real” in that case is a confusion akin to asking how we can have free will if the universe is deterministic. That is, it’s yet another projection of our native/naive anthropomorphism—the inclination to label things as agents.
You can see this in the other part of the thread where you’re talking about what master and slave aliens would “want”—modeling these things as “wanting” is where the anthropomorphic injection is occurring. (E.g., in a human-accurate model, abstract wanting is not something the “master” would be capable of, as symbolic abstraction is the near-exclusive domain of the “slave”.)
I agree that substituting two homunculi for one without reducing anything is something to avoid, that the model in this post does it, and that pjeby’s model does not.