Is a human mind the simplest possible mind that can be sentient? What if, in the course of trying to model its own programmers, a relatively younger AI manages to create a sentient simulation trapped within itself? How soon do you have to start worrying? Ask yourself that fundamental question, “What do I think I know, and how do I think I know it?”
I read this as being simpler than a real human mind. Since it’s simpler, the abstractions used are going to be imperfect, and the design would end up being something that is in some way artificial. It’s not as explicit as I said, but I still think the implication is pretty strong.
I’ve actually lost track of how this impacts my original point. As stated, it was that we’re worrying about the ethical treatment of simulations within an AI before worrying about the ethical treatment of the simulating AI itself. Whether the simulations considered include AIs as well as humans is an entirely orthogonal issue.
I went on in other comments to rant a bit about the human-centrism issue, though your original comment seems more relevant to that. I think you’ve convinced me that the original article was a little more open to the idea of substantially nonhuman intelligence than I might have initially credited it, but I still see the human-centrism as a strong theme.
My point is he’s clearly not drawing a box tightly around what’s human or not. If he’s concerned with clearly-sub-human AI, then he’s casting a significantly wider net than it seems you’re assuming he is. And considering that he’s written extensively on the variety of mind-space, assuming he’s taking a tightly parochial view is poorly founded.
What? That’s completely irrelevant to the question at hand.
By considering the question of whether simpler-than-human minds are possible in this context, it’s clear that Eliezer was thinking about the question and giving them moral weight. He doesn’t need to ANSWER the question I was posing to make that much clear.
“Is a human mind the simplest possible mind?”
“But if it was simpler, it wouldn’t be human!”
Downvoted.
Wait, what?
*Clicks “Show more comments above.”*
Oops. I thought you were replying to the quoted text. Upvoted and retracted my comment.