Thanks, that’s interesting, though mostly I’m not buying it (still unclear whether there’s a good case to be made; fairly clear that he’s not making a good case).
Thoughts:
Most of it seems to say “Being a subroutine doesn’t imply something doesn’t suffer”. That’s fine, but few positive arguments are made. Starting with the letter ‘h’ doesn’t imply something doesn’t suffer either—but it’d be strange to say “Humans obviously suffer, so why not houses, hills and hiccups?”.
We infer preference from experience of suffering/joy...:
[Joe Xs when he might not X] & [Joe experiences suffering and joy] → [Joe prefers Xing]
[this rock is Xing] → [this rock prefers Xing]
Methinks someone is committing a petitio principii.
(Joe is mechanistic too—but the suffering/joy being part of that mechanism is what gets us to call it “preference”)
Too much is conflated:
[Happening to x] ≢ [Aiming to x] ≢ [Preferring to x]
In particular, I can aim to x and not care whether I succeed. Not achieving an aim doesn’t imply frustration or suffering in general—we just happen to be wired that way (but it’s not universal, even for humans: we can try something whimsical-yet-goal-directed, and experience no suffering/frustration when it doesn’t work). [taboo/disambiguate ‘aim’ if necessary]
There’s no argument made for frustration/satisfaction. It’s just assumed that not achieving a goal is frustrating, and that achieving one is satisfying. A case can be made to ascribe intentionality to many systems—e.g. Dennett’s intentional stance. Ascribing welfare is a further step, and requires further arguments.
Non-achievement of an aim isn’t inherently frustrating (cf. Buddhists—and indeed current robots).
The only argument I saw on this was “we can sum over possible interpretations”—sure, but I can do that for hiccups too.