That being said, if we can have an AGI that is only aligned to what we want now, it would already be a huge win. [...] Solving philosophy: This is a great-to-have, but the implications of not solving philosophy do not seem catastrophic.
I tried to argue the opposite in the following posts. I’m curious if you’ve seen them and still disagree with my position.
Two Neglected Problems in Human-AI Safety
Beyond Astronomical Waste
Morality is Scary
Your position makes sense. Part of it was just paraphrasing (what seems to me to be) the 'consensus view' that preventing AIs from wiping us out is much more urgent / important than preventing AIs from keeping us alive in a far-from-ideal state.