That’s probably not what Page meant. On reflection, he would likely have clarified that an AI that includes what we value about humanity would be a worthy successor. He probably wasn’t even clear on his own philosophy at the time.
I don’t see reasons to be so confident in this optimism. If I recall correctly, Robin Hanson explicitly believes that putting any constraints on future forms of life, including on their values, is undesirable/bad/regressive, even though the lack of such constraints would eventually lead to a future with no trace of humanity left. Similarly for Beff Jezos and other hardcore e/acc: they believe that a worthy future involves making a number go up, a number that corresponds to some abstract quantity like “entropy” or “complexity of life” or something, and that if making it go up involves humanity going extinct, too bad for humanity.
Which is to say: there are existence proofs that people with such beliefs can exist, and can retain these beliefs across many years and in the face of what’s currently happening.
I can readily believe that Larry Page is also like this.
I’m not familiar with the details of Robin’s beliefs in the past, but it sure seems that lately he is entertaining the opposite idea. He’s been spending a lot of words on cultural drift recently, mostly characterizing it negatively. His most recent post on the subject is Betrayed By Culture.
Maybe Page does believe that. But I think it’s nearly a self-contradictory position, and Page is a smart guy, so with more careful thought his beliefs are likely to converge on the more common view here on LW: replacing humanity might be OK only if our successors are better at enjoying the world in much the same way we do.
I think people who claim to not care whether our successors are conscious are largely confused, which is why doing more philosophy would be really valuable.
Beff Jezos is exactly my model. Digging through his writings, I found that at one point he explicitly stated that he was referring to machine offspring with some sort of consciousness or enjoyment when he said humanity should be replaced. In other places he’s not clear on it. It’s bad philosophy, because the philosophy is taking a backseat to the arguments.
This is why I want to assume that Page would converge to the common belief: so we don’t mark people who seem to disagree with us as enemies, and drive them away from doing the careful, collaborative thinking that would get our beliefs to converge.
Addendum on why I think beliefs on this topic converge with additional thought: I don’t think there’s a universal ethics, but I do think that humans have built-in mechanisms that tend to make us care about other humans. Assuming we’d care about something that acts sort of like a sentient being, but internally just isn’t one, is an easy mistake to make if you haven’t imagined that scenario in adequate detail.