Unless the conscious algorithm in question will experience states that are not valence-neutral, I see no issue with creating or destroying instances of it. The same applies to any other type of consciousness. It seems implausible to me that any of our known AI architectures could instantiate such non-neutral valences, even if they do seem plausibly able to instantiate other kinds of experiences (e.g. geometric impressions).
I’m not particularly worried that we may harm AIs that do not have valenced states, at least in the near term. The issue is more about precedent and expectations going forward. I would worry about a future in which we create and destroy conscious systems willy-nilly, because of how that might shape our understanding of our relationship to them, and ultimately how we act toward AIs that do have morally relevant states. These worries are nebulous, and I very well might be wrong to be so concerned, but it feels risky to rush into things.