My impression is that one point Hanson was making in the spring-summer 2023 podcasts is that some major issues with AI risk don't seem different in kind from the cultural value drift that's already familiar to us. There are obvious disanalogies, but as I understand the point, a strong analogy remains that people avoid acknowledging.
If human value drift were already understood as a serious issue, the analogy would seem reasonable, since AI risk wouldn't need to involve anything more than the normal kind of cultural value drift, compressed into short timelines and allowed to exceed the bounds set by biological human nature. But human value drift is instead perceived as safe, so the argument sounds like it's asking to transport that perceived safety, via the analogy, over to AI risk, and much of the arguing on this point proceeds without questioning the perceived safety of human value drift. So I think what makes the analogy valuable is instead transporting the perceived danger in the opposite direction, from AI risk over to human value drift, giving another point of view on human value drift, one that makes the problem easier to see.