What’s insanity-inducing about it? (Not suggesting you dip into the insanity-tending state, just wondering if you have speculations from afar.)
The problem statement you gave does seem to have an extreme flavor. I want to distinguish “selecting the utility function” from the more general “real core of the problem”s. The OP was about (the complement of) the set of research directions that are in some way aimed directly at resolving core issues in alignment. Which sounds closer to your second paragraph.
If it’s philosophical difficulty that’s insanity-inducing (e.g. “oh my god this is impossible we’re going to die aaaahh”), that’s a broader problem. But if it’s more “I can’t be responsible for making the decision, I’m not equipped to commit the lightcone one way or the other”, that seems orthogonal to some alignment issues. For example, trying to understand what it would look like to follow along with an AI’s thoughts is more difficult and philosophically fraught than your framing of engineering honesty, but also doesn’t seem responsibility-paralysis-inducing, eh?