I think that Eliezer, at least, uses the term “alignment” solely to refer to what you call “aimability.” Eliezer believes that most of the difficulty in getting an ASI to do good things lies in “aimability” rather than “goalcraft.” That is, getting an ASI to do anything, such as “create two molecularly identical strawberries on a plate,” is the hard part, while deciding what specific thing it should do is significantly easier.
That being said, you’re right that there are a lot of people who use the term differently from how Eliezer uses it.
“deciding what specific thing it should do is significantly easier”
If the initial specific thing is a pivotal process that ends the acute risk period, it doesn’t matter if the goodness-optimizing goalcraft is impossibly hard to figure out, since we’ll have time to figure it out afterward.