I agree with Eliezer that acausal trade/extortion between humans and AIs probably doesn’t work, but I’m pretty worried about what happens after AI is developed, whether aligned or unaligned/misaligned, because then the “acausal trade/extortion between humans and AIs probably doesn’t work” argument would no longer apply.
I think fully understanding the issue requires solving some philosophical problems that we probably won’t solve in the near future (except perhaps with the help of superintelligence), so it contributes to me wanting to:
preserve and improve the collective philosophical competence of our civilization, such that when it becomes possible to pursue strategies like the ones listed above, we’ll be able to make the right decisions. The best opportunity to do this that I can foresee is the advent of advanced AI, which is another reason I want to push for AIs that are not just value-aligned with us, but also have philosophical competence that scales with their other intellectual abilities, so they can help correct the philosophical errors of their human users (instead of merely deferring to them), thereby greatly improving our collective philosophical competence.
(Not sure if you should include this in your post. I guess I would only point people in this direction if I thought they would make a positive contribution to solving the problem.)
Yeah, I’ve been a bit unsure about whether to include in the post “I do think there are legitimate, interesting ways to advance the human frontier of understanding acausal trade”, but I think if you’re currently anxious/distressed in the way this post is anticipating, contributing to that frontier is unlikely to be a useful near-term goal.
i.e. something like: if you’ve recently broken up with someone and really want to text your ex at 2am… it’s not that texting your ex is never a good idea, but the point where it’s actually a good idea is probably after you’ve stopped wanting it so badly.