I came away from this post with less understanding of the analogy the author was trying to draw than I had at the start.
When forming or extending an analogy, it’s common and useful to point out which parts of one domain correspond to which parts of the other, and why you think similar consequences would follow. The original rocket alignment post was weak enough in that respect, but at least some correspondences could be inferred and the conclusion deduced (whether one agreed with it or not).
I have absolutely no idea which parts of this extension post correspond to what in the actual alignment problem, nor why we should expect the two domains to behave similarly, nor what conclusion the analogy is even trying to point toward.