The trouble comes in when we start putting a utility on pleasure and pain. For example, let's say you were given a programmatic description of a less-than-fully-faithful simulation of a human and were asked to assess, without running it, whether it would have (or would report having) pain.
Your answer would determine whether it was used in a world simulation.
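To make the scenario concrete, here is a minimal sketch, assuming a hypothetical gatekeeper function and a deliberately crude stand-in criterion (neither comes from the comment itself); the point is only to show where the static assessment sits in the decision, not to suggest how it could actually be done.

```python
# Toy sketch of the thought experiment above -- every name here is a
# hypothetical illustration, not an API from the original discussion.

def assess_pain_statically(program_description: str) -> bool:
    """Judge from the source text alone (never executing it) whether this
    imperfect human-simulation would have, or would report having, pain.

    The crude proxy below (scanning for pain-related identifiers) stands in
    for whatever criterion you would actually use; the open question is
    whether any static criterion settles the matter at all."""
    return any(token in program_description.lower()
               for token in ("pain", "nociception", "suffering"))

def admit_to_world_simulation(program_description: str) -> bool:
    # The assessment acts as a gate: the verdict decides whether the
    # simulated human is instantiated inside the world simulation at all.
    return not assess_pain_statically(program_description)

if __name__ == "__main__":
    candidate = "def update(state): state.reward -= pain_signal(state)"
    print(admit_to_world_simulation(candidate))  # -> False under this proxy
```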
Proposing a change in physics to make your utility function more intuitive seems like a serious mis-step to me.
I’m just identifying the problem. I have no preferred solution at this point.
ETA: Altering physics is one possible solution, but I'd hold off on proposing a change to physics until we have a more concrete theory of intelligence and of how human-type systems are built. I think we can still push computers to be more like human-style systems, so I'm reserving judgement until we have systems of that kind.