In a sense you should be confused about qualia/TWAAFFTI, because we know next to nothing about the subject. It might be the case that the term "qualia" adds an extra level of confusion, although it might alternatively be the case that TWAAFFTI is something that sounds like an explanation without actually being an explanation. In particular, TWAAFFTI sets no constraints on what kind of algorithm would have morally relevant feelings, which reinforces my original point: if you think an embedded simulation of a human is morally relevant, how can you deny relevance to the host, even at times when it isn't simulating a human?
Maybe it would be clearer if we looked at some already existing maximization processes. Take evolution, for instance: it maximizes inclusive genetic fitness. You can "punish" it by not donating sperm or eggs. I don't care, because evolution is not a person-like thing.
I dislike the concept of qualia because it seems to me that it’s just a confusing name for “how inputs feel from the inside of an algorithm”.