Do you mind spelling out the analogy? (including where it breaks) I didn’t get it.
Reading my comment I feel compelled to clarify what I meant:
Katja asked: in which worlds should we worry about what ‘intelligence’ designates not being what we think it does?
I responded: in all the worlds where increasing our understanding of ‘intelligence’ has the side effect of increasing attempts to create it—due to feasibility, curiosity, or an urge for power. In these worlds, expanding our knowledge increases the expected risk, because of the side effects.
Whether intelligence is what we thought will only be discovered after the expected risk has already increased; once we find out, the risk either skyrockets or plummets. In hindsight, if it plummets, having learned more looks great. If it skyrockets, we are likely dead.