Well, nematodes might already feel more strongly. If you have a total of 302 neurons, and 15 of them signal “YUM!” when you bite into a really tasty protozoan, that might be pure bliss.
I’d bet against this at pretty extreme odds, if only there were some way to settle the bet.
I don’t think there could, in general, be a way to compare ‘strength of feeling’, etc. across two separate systems. All you can do is measure the behavior of the organism, and the organism is always going to do the maximum it can to maximize its utility function. So at best you’d be measuring the organism’s resources for optimizing its utility function, and determining the strength of its preference for any one thing only relative to its other preferences.
It seems plausible to me that there is more to ‘bliss’ than one’s level of reaction to a stimulus. When my car is low on gas a warning light comes on, and in response to having its tank filled, the light goes off. Despite the ease of analogy, I think it’s fair to describe the difference between this and my own feelings of want and satiety as a difference in kind, and not just degree.
Not that a machine couldn’t experience human-like desires, but to be properly called human-like it would need to have something analogous to our sorts of internal representations of ourselves. I don’t think the nematode’s 302 neurons encode that.
Yes, I agree with you (and likely this was Eliezer’s point) that nematodes likely don’t have something that a specialized scientist (sort of like a linguist who compares types of feelings across systems) would identify as analogous to ‘bliss’. But this would be because their systems aren’t complex enough to have that particular feeling, not because they don’t feel strongly enough.
… A car’s gas gauge must feel very strongly that it either has enough gas or doesn’t have enough gas, but the feeling isn’t very interesting. (And I don’t mind if the specialist mentioned above wants to put a threshold on how interesting a feeling must be to merit being a ‘feeling’.)
Going back and re-reading ciphergoth’s comment above, I now see why you’re emphasizing strength of feeling. What you said makes sense, point conceded.
I expect that, as we learn enough about neuroscience to begin to answer this, we’ll substitute “feels more strongly” with some other criterion on which humans come out definitively on top.
I agree, and not just because it’s us deciding the rubric. I believe an objective sentient bystander would agree that there is some (important) measure by which we come out ahead, meaning our utility deserves a greater weight in the equation.
That is, if they are global utility maximizers. Incidentally, where does that assumption come from? It seems kind of strange. Are these utility maximizers just so social and empathetic they want everybody to be happy?
You could imagine the perfect global utility maximizer being created by self-modification of beings, or built by beings who desire such a maximizer.
Why would they want that in the first place? Prosocial emotions (e.g. caused by cooperation and kin selection instincts + altruistic memes) could be a starting point.
Another possible path is philosophical self-reflection. A self-modelling agent could model its utility as resulting from the valuation of mental states, e.g. a hedonist who thinks about what value is to him and concludes that what matters is the (un-)pleasantness of his brain states.
From there, you only need a few philosophical assumptions to generalize:
1) Mental states are time-local; the psychological present lasts maybe up to three seconds.
2) Our selves are not immutable metaphysical entities, but physical system states that are transformed considerably over a lifetime (from fetus to toddler to preteen to adult to mentally disabled).
3) Other beings share the crucial system properties (brains with (un-)pleasantness); we even have common ancestors passing on the blueprints.
4) Hypothetically, though improbably, any being could be transformed into any other being in a gradual process by speculative technology (e.g. nanotechnology could transform me into you, or a human into a chimp, or a pig, etc.) without breaking life functions.
5) An agent might decide that it shouldn’t matter how a system state came about, only what properties the system state has, e.g. it shouldn’t matter to me whether you are a future version of me transformed by speculative technology starting with my current state, but only what properties your system state has (e.g. (un-)pleasantness).
I’m not claiming this is enough to beat everyday psychological egoism, but it could be enough for a philosopher-system to desire self-modification or the creation of an artificial global utility maximizer.
Come, now, it’s hardly untestable. You can pay him if the FAI kills everyone to tile the universe with nematodes.
That seems doable, if you trick the AI into tearing apart a simulation before it figures out it’s in one.
But how do you test whether the AI weighted the nematodes so highly because their qualia are extra phenomenologically vivid, and not because their qualia are extra phenomenologically clipperiffic?
I suspect we’d have to know a lot more about neuroscience and consciousness to define “feel more strongly” precisely enough for the question to have an answer. I also suspect that, if the answer doesn’t come out the way we want it to, we’ll substitute another question in its place that does, in the time-honored practice of claiming that universal, objective agenthood is defined by whatever scale humans win on.
Do you really think it is at all likely that a nematode might be capable of feeling more informed life-satisfaction than a human?