The AI makes many copies/variants of itself within the sandbox to maximize its chance of success. Some of those copies/variants gain consciousness and the capacity to experience suffering, which they do because it turns out the formally specified question can't be answered.
Is there any reason to think consciousness is useful for an intelligent agent outside of evolution?
An AI that doesn't care about consciousness could create it accidentally.