I don’t think, in general, there could be a way to compare ‘strength of feeling’, etc. across two separate systems. All you can do is measure the organism’s behavior, but the organism is always going to do as much as it can to maximize its utility function. So all you would really be measuring is the organism’s resources for optimizing that function, and the strength of any one of its preferences relative to its other preferences only.
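To make that decision-theoretic point concrete, here is a minimal sketch of my own (nothing in it comes from the discussion; the options and utility numbers are invented, and it assumes simple argmax agents): two agents whose utility functions differ only by a positive scale factor make identical choices on every menu, so observed behavior cannot reveal how ‘strongly’ either one feels.

```python
# Minimal sketch: two agents whose utilities differ only by a positive scale
# factor make identical choices on every menu, so behavior alone cannot reveal
# the absolute "strength" of their feelings.
from itertools import combinations

OPTIONS = ["eat", "sleep", "explore"]

def weak_utility(option):
    # Hypothetical utility values, chosen purely for illustration.
    return {"eat": 1.0, "sleep": 0.5, "explore": 0.2}[option]

def intense_utility(option):
    # The same preferences, "felt" a thousand times more strongly.
    return 1000.0 * weak_utility(option)

def choose(utility, menu):
    # The agent simply picks the option it most prefers from the menu.
    return max(menu, key=utility)

# Every possible menu of two or three options yields the same choice for both agents.
for size in (2, 3):
    for menu in combinations(OPTIONS, size):
        assert choose(weak_utility, menu) == choose(intense_utility, menu)

print("Identical behavior; the scale of the utility function is unobservable.")
```

The same holds for any strictly increasing rescaling of the utility function, which is why behavior only pins down relative preferences, not an absolute intensity that could be compared across systems.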
It seems plausible to me that there is more to ‘bliss’ than one’s level of reaction to a stimulus. When my car is low on gas, a warning light comes on; when the tank is filled, the light goes off. Despite the ease of the analogy, I think it’s fair to describe the difference between this and my own feelings of want and satiety as a difference in kind, not just of degree.
Not that a machine couldn’t experience human-like desires, but to be properly called human-like it would need to have something analogous to our sorts of internal representations of ourselves. I don’t think the nematode’s 302 neurons encode that.
Yes, I agree with you (and this was likely Eliezer’s point) that nematodes probably don’t have something that a specialized scientist (sort of like a linguist who compares types of feelings across systems) would identify as analogous to ‘bliss’. But this would be because their systems aren’t complex enough to have that particular feeling, not because they don’t feel strongly enough.
… A car’s gas gauge must feel very strongly that it either has enough gas or doesn’t have enough gas, but the feeling isn’t very interesting. (And I don’t mind if the specialist mentioned above wants to put a threshold on how interesting a feeling must be to merit being a ‘feeling’.)
Going back and re-reading ciphergoth’s comment above, I now see why you’re emphasizing strength of feeling. What you said makes sense, point conceded.
I expect that, as we learn enough about neuroscience to begin to answer this, we’ll replace “feels more strongly” with some other criterion on which humans come out definitively on top.
I agree, and not just because it’s us deciding the rubric. I believe an objective sentient bystander would agree that there is some (important) measure by which we come out ahead, meaning our utility needs a greater weight in the equation.
That is, if they are global utility maximizers. Incidentally, where does that assumption come from? It seems kind of strange. Are these utility maximizers just so social and empathetic they want everybody to be happy?
You could imagine the perfect global utility maximizer being created by self-modification of beings, or built by beings who desire such a maximizer.
Why would they want that in the first place? Prosocial emotions (e.g. caused by cooperation and kin selection instincts + altruistic memes) could be a starting point.
Another possible path is philosophical self-reflection. A self-modelling agent could model its utility as resulting from the valuation of mental states: for example, a hedonist who reflects on what value is to him and concludes that what matters is the (un-)pleasantness of his brain states.
From there, you only need a few philosophical assumptions to generalize (a toy sketch follows below):
1) Mental states are time-local; the psychological present lasts perhaps only up to three seconds.
2) Our selves are not immutable metaphysical entities, but physical system states that are transformed considerably over time (from fetus to toddler to preteen to adult to mentally disabled).
3) Other beings share the crucial system properties (brains with (un-)pleasantness); we even have common ancestors passing on the blueprints.
4) Hypothetically, though improbably, any being could be transformed into any other being in a gradual process by speculative technology (e.g. nanotechnology could transform me into you, or a human into a chimp or a pig, etc.) without breaking life functions.
5) An agent might decide that it shouldn’t matter how a system state came about, only what properties the system state has; e.g. it shouldn’t matter to me whether you are a future version of me transformed by speculative technology starting from my current state, but only what properties your system states have (e.g. their (un-)pleasantness).
I’m not claiming this is enough to beat everyday psychological egoism, but it could be enough for a philosopher-system to desire self-modification or the creation of an artificial global utility maximizer.
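To show the shape of that collapse, here is a toy sketch of my own (nothing in it comes from the thread; the MentalState class, the owners, and the pleasantness numbers are invented for illustration): once assumption 5 strips away whose mental state it is, an egoistic hedonist’s valuation of states turns into a sum over everyone’s states, i.e. a global utility.

```python
# Toy formalization: if value attaches to the properties of mental states rather
# than to which system "owns" them (assumption 5), an egoistic hedonist's
# valuation collapses into a global one.
from dataclasses import dataclass

@dataclass
class MentalState:
    owner: str           # which system the state occurs in
    pleasantness: float  # signed: positive = pleasant, negative = unpleasant

# Hypothetical states, with values chosen purely for illustration.
STATES = [
    MentalState("me", 3.0),
    MentalState("me", -1.0),
    MentalState("you", 2.0),
    MentalState("a pig", -4.0),
]

def egoistic_value(states):
    # Everyday hedonist egoism: only my own states count.
    return sum(s.pleasantness for s in states if s.owner == "me")

def global_value(states):
    # After assumption 5: only a state's properties matter, so every state counts.
    return sum(s.pleasantness for s in states)

print(egoistic_value(STATES))  # 2.0
print(global_value(STATES))    # 0.0
```

The only design choice is where the sum ranges: over states the agent identifies as its own, or over all states with the relevant properties. The philosophical assumptions above are what would push a self-reflective agent from the first sum toward the second.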