I expect most “artificial” conscious experiences created by machines to be neutral with respect to the pain-pleasure axis, for the same reason that randomly generated bitmaps rarely depict anything.
What if the machine is an AGI algorithm, and right now it’s autonomously inventing a new better airplane design? Would you still expect that?
The space of possible minds/algorithms is so vast, and that problem is so open-ended, that it would be a remarkable coincidence if such an AGI had a consciousness that was anything like ours. Most details of our experience are just accidents of evolution and history.
Does an airplane have a consciousness like a bird? “Design an airplane” sounds like a more specific goal, but in the space of all possible minds/algorithms, that goal’s solutions are quite underdetermined, just as flight’s are.
My airplane comment above was a sincere question, not a gotcha or argument or anything. I was a bit confused about what you were saying and was trying to suss it out. :) Thanks.
I do disagree with you, though. Hmm, here’s an argument. Humans invented TD learning, and it was later discovered that human brains (and the brains of other animals) incorporate TD learning too. Similarly, self-supervised learning is widely used in both AI and human brains, as are distributed representations and numerous other things.
If our expectation is “The space of possible minds/algorithms is so vast…”, then it would be a remarkable coincidence for TD learning to show up independently in brains & AI, right? How would you explain that?
I would propose instead an alternative picture, in which there are a small number of practical methods which can build intelligent systems. In that picture (which I subscribe to, more or less), we shouldn’t be too surprised if future AGI has a similar architecture to the human brain. Or in the most extreme version of that picture, we should be surprised if it doesn’t! (At least, they’d be similar in terms of how they use RL and other types of learning / inference algorithms; I don’t expect the innate drives a.k.a. reward functions to be remotely the same, at least not by default.)
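For concreteness, here is a minimal sketch of what “TD learning” refers to above: the TD(0) value update run on a toy deterministic three-state chain. The chain, reward numbers, and hyperparameters are all made up for illustration; this is not a claim about how any particular brain or AI system is implemented.

```python
# Minimal, illustrative sketch of the TD(0) value update on a toy
# deterministic three-state chain. All names and numbers here are
# made up for illustration.

states = ["A", "B", "C", "terminal"]
transitions = {"A": "B", "B": "C", "C": "terminal"}  # toy deterministic chain
rewards = {"A": 0.0, "B": 0.0, "C": 1.0}             # reward received on leaving each state

alpha, gamma = 0.1, 0.9          # learning rate and discount factor
V = {s: 0.0 for s in states}     # value estimates, initialized to zero

for _ in range(500):             # run many episodes from the start state
    s = "A"
    while s != "terminal":
        s_next = transitions[s]
        r = rewards[s]
        # TD error: (reward + discounted value of next state) - current estimate
        td_error = r + gamma * V[s_next] - V[s]
        V[s] += alpha * td_error  # nudge the estimate toward the TD target
        s = s_next

print(V)  # roughly V["C"] ≈ 1.0, V["B"] ≈ 0.9, V["A"] ≈ 0.81
```

The `td_error` quantity is (roughly) the reward-prediction-error signal that dopamine neurons are reported to track, which is the sense in which brains are said to incorporate TD learning.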
I agree with Stephen’s point about convergent results from directed design (or evolution, in the case of animals). I don’t agree that consciousness and moral valence are so closely coupled that decoupling them would incur a performance loss. Therefore, I suspect it will be a nearly costless choice whether to make morally relevant or morally irrelevant AGI, and that we very much morally ought to choose to make morally irrelevant AGI. To do otherwise would be possible, as Gunnar describes, but morally monstrous. Unfortunately, some people do morally monstrous things sometimes. I am unclear on how to prevent this particular form of monstrosity.