I somewhat agree with this, and it's where I fundamentally differ from a lot of e/accs and AI progress boosters.
However, I think two things limit the force of this point, though I don't know to what extent:
People have pretty different values. While I mostly don't consider this a bottleneck to alignment as understood on LW, it does matter for this post specifically: people disagree about what the best future looks like, which is why I'm unsure we should pursue your program in particular.
There are semi-reasonable arguments that lock-in concerns are overstated. I don't totally buy them, but they carry enough weight that I don't fully endorse the post at this time.
That said, this post offers a lot of food for thought, especially since my world model of AI development is skewed much more toward optimistic outcomes than most of LW's, so thank you for at least trying to argue for a slowdown without assuming existential risk.