If I accept all of your suppositions, your conclusion doesn’t seem particularly difficult to accept. Sure, after doing all the prep work you describe, executing a conscious experience (albeit an entirely static, non-environment-dependent one) requires a single operation… even the conscious experience of suffering. As does any other computation you might ever wish to perform, no matter how complicated.
That said, your suppositions do strike me as revealing some confusion between an isolated conscious experience (whatever that is) and the moral standing of a system (whatever that is).
Well, this post heavily hints that a system’s moral standing is related to whether it is conscious. Eliezer mentions a need to tackle the hard problem of consciousness in order to figure out whether the simulations performed by our AI cause immoral suffering. Those simulations would be essentially isolated: their inputs may be chosen based on our real-world requirements, but they don’t necessarily correspond to what’s actually going on in the real world, and their outputs would presumably be used in aggregate to make decisions rather than pushed directly into the outside world.
Maybe moral standing requires something else in addition to consciousness, like self-awareness. But wouldn’t there still be a critical instruction in a self-aware, conscious program, at which the conscious experience of being self-aware was produced? Wouldn’t the same argument apply to any criterion given for moral standing in a deterministic program?
It’s not clear to me that whether a system is conscious (whatever that means) and whether it’s capable of a single conscious experience (whatever that means) are the same thing.