Then you have screwed up your alignment. Any time you're worried that the AI will make the wrong decision, rather than worried that the AI will make the right decision but you wanted to be the one making it, you're worried about an alignment failure. Now, maybe it's easier to make a hands-off AI than a perfectly aligned AI: an AI that asks us its moral dilemmas because we can't figure out how to make an AI choose correctly on its own.
But that is just the "best we could do given imperfect alignment". It isn't the highest form of AI to aim for.
The problem is, our alignment is glitchy too. We are wired to keep chasing a carrot we can never hold onto for long, because we will always strive for more. But an AI could just teleport us straight to the "maximum carrot" point. Meanwhile, what we really need is not the destination but the journey. At least, that's what I believe. Sadly, not many people understand or agree with it.