Most people I’ve talked to have one or two world-changing schemes they want to implement. This might be selection bias, though.
It is not at all obvious to me that any optimizer would be personlike. Sure, it would be possible (maybe even easy!) to build a personlike AI, but I’m not sure it would “necessarily” happen. So I don’t know whether those problems would arise for an arbitrary AI, but I do know that they would arise for its models of humans.
It is not at all obvious to me that being personlike is necessary for having qualia at all, for all that it might be necessary for having personlike qualia.
I dislike the concept of qualia because it seems to me that it’s just a confusing name for “how inputs feel from the inside of an algorithm”.
In a sense you should be confused about qualia/TWAAFFTI, because we know next to nothing about the subject. It might be the case that “qualia” adds an extra level of confusion, although it might alternatively be the case that TWAAFFTI is something that sounds like an explanation without actually being one. In particular, TWAAFFTI sets no constraints on what kind of algorithm would have morally relevant feelings, which reinforces my original point: if you think an embedded simulation of a human is morally relevant, how can you deny relevance to the host, even at times when it isn’t simulating a human?
Maybe it would be clearer if we looked at some already existing maximization processes. Take, for instance, evolution. Evolution maximizes inclusive genetic fitness. You punish it by not donating sperm/eggs. I don’t care, because evolution is not a personlike thing.