I accept that trying to figure out the overall tractability of the problem far enough in advance isn’t a useful thing to dedicate resources to. Nevertheless, researchers seem to have expectations about alignment difficulty despite not having a “clearer picture”. For the researchers who think that alignment is probably tractable, I would love to hear why they think so.
To be clear, I’m talking about researchers who are worried about AI x-risk but aren’t doomers. I would like to gain more insight into what they are hoping for, and why they consider those expectations reasonable.