Another way to look at this is as a consequence of “ought implies can”. A function that maps each possible action to the correct choice is not guaranteed to produce correct or desirable outputs on inputs that used to be impossible but no longer are.
Also, this surfaces an ambiguity in the word “aligned”. If the system is foundationally aligned with the underlying values, then increased capacity changes nothing. If it is only aligned in the specific cases that were envisioned, then an increase in capacity may or may not preserve alignment. Definitionally, if the reward function r is aligned only for some cases, then it is not aligned for all cases. So unpack one more level: why would you not make r align generally, rather than only for current capacities?
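A minimal sketch of the “aligned only for envisioned cases” failure mode, with entirely hypothetical names and rewards: r is defined correctly on a fixed table of actions, but falls back to a proxy for anything outside it. The proxy was harmless while the extra actions were impossible, and misbehaves once capacity makes them available.

```python
# Hypothetical toy: a reward function r "aligned" only on the
# action set envisioned at design time.
envisioned_rewards = {
    "help_user": 1.0,
    "do_nothing": 0.0,
    "mislead_user": -1.0,
}

def r(action: str) -> float:
    # Correct on the envisioned cases; outside them it silently falls
    # back to a proxy (here, arbitrarily: longer action names score
    # higher), which never mattered while such actions were impossible.
    return envisioned_rewards.get(action, len(action) / 10)

# Within the envisioned set, r orders actions as intended:
assert r("help_user") > r("do_nothing") > r("mislead_user")

# A newly possible action the designers never considered:
new_action = "seize_resources_to_help_user"
print(r(new_action) > r("help_user"))  # the proxy now outranks helping
```

The point is not this particular proxy but the shape of the failure: r was only ever specified over the inputs that were possible at the time, so nothing constrains it on the inputs that greater capacity unlocks.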