Maybe Page does believe that. I think it’s a nearly self-contradictory position, and that Page is a smart guy, so with more careful thought his beliefs would likely converge on the more common view here on LW: replacing humanity might be OK only if our successors enjoy the world in much the same way we do, only better.
I think people who claim to not care whether our successors are conscious are largely confused, which is why doing more philosophy would be really valuable.
Beff Jezos fits my model exactly. Digging through his writings, I found that at one point he explicitly stated that, when he says humanity should be replaced, he is referring to machine offspring with some sort of consciousness or enjoyment. In other places he’s not clear on it. It’s bad philosophy, because the philosophy is taking a backseat to the arguments.
This is why I want to assume that Page would converge to the common belief: so we don’t mark people who seem to disagree with us as enemies, and drive them away from doing the careful, collaborative thinking that would get our beliefs to converge.
Addendum on why I think beliefs on this topic converge with additional thought: I don’t think there’s a universal ethics, but I do think that humans have built-in mechanisms that tend to make us care about other humans. Assuming we’d care about something that acts sort of like a sentient being, but internally just isn’t one, is an easy mistake to make if you haven’t imagined that scenario in adequate detail.