I think all of your reasons for how a human comes to have moral authority boil down to something like the belief that doing what this authority says is expected to be good (to have positive valence, in my current working theory of values). This perhaps offers a way of reframing alignment: rather than constructing an agent that is value aligned, as we normally frame it, we construct an agent to whom you would grant moral authority to decide for you.