Strictly improving intelligent and rational lifeforms will over time converge into the same (indistinguishable) beings (kind of like a generalized form of “Great minds think alike”).
This assumption doesn’t seem true. While knowledge will converge as a result of improving epistemic rationality, goals will not, and it is in most cases irrational for an agent to change its terminal goals, since doing so makes it less likely to achieve the goals it currently holds.
Goals can be pictured as vectors in a high-dimensional space; if humanity’s goal vector and the AI’s goal vector differ, then that difference, no matter how small, becomes critical at high power: the gap between the outcomes the two vectors point at scales with the optimization power applied, so any nonzero angle eventually dominates (see the toy sketch below).
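To make the scaling intuition concrete, here is a toy sketch (my own illustration, not from the original exchange): treat the human and AI goals as unit vectors, let “power” scale how far along its vector each agent pushes the world, and watch the gap between the resulting outcomes grow linearly with power even though the angle between the vectors is tiny.

```python
import numpy as np

# Toy model (assumption of this sketch, not from the original comment):
# "power" scales how far along its goal vector each agent can push the world,
# so the outcome gap is power * ||ai_goal - human_goal||, which grows without
# bound for any nonzero angle between the vectors.

rng = np.random.default_rng(0)
dim = 100                                    # dimensionality of the goal space

human_goal = rng.normal(size=dim)
human_goal /= np.linalg.norm(human_goal)

# Assumed misalignment: the AI's goal is the human goal plus a 1% perturbation.
ai_goal = human_goal + 0.01 * rng.normal(size=dim)
ai_goal /= np.linalg.norm(ai_goal)

angle_deg = np.degrees(np.arccos(np.clip(human_goal @ ai_goal, -1.0, 1.0)))
print(f"angle between goal vectors: {angle_deg:.2f} degrees")

for power in (1.0, 1e3, 1e6, 1e9):
    # Outcome the AI steers toward vs. the outcome humans would have reached
    # with the same optimization power applied to their own goal vector.
    gap = np.linalg.norm(power * ai_goal - power * human_goal)
    print(f"power={power:10.0e}  outcome gap={gap:.3e}")
```

Under these assumptions the cosine similarity between the two goal vectors stays near 1 throughout; what grows is the absolute distance between the world-states the two optimizers steer toward.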
everything good about humanity is good in itself
Veto “good in itself”; do you mean “valued by the currently existing civilization”?