We have to assume only that we will not significantly improve our understanding of what intelligence is without attempting to create it (through reverse engineering, coding, or EMs).
If our understanding remains incipient, the safe policy is to assume that intelligence is indeed a capacity, or set of capacities, that can be used to bootstrap itself. Given the 10^52 lives at stake, even if we were fairly confident intelligence cannot bootstrap, we should still MaxiPok and act as if it could.
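To make the MaxiPok point concrete, here is a toy expected-value sketch; the credence $p = 10^{-6}$ is an illustrative number I am supplying, not one anyone has argued for. With $N = 10^{52}$ lives at stake and credence $p$ that intelligence can bootstrap,

$$\mathbb{E}[\text{lives lost}] = p \cdot N = 10^{-6} \cdot 10^{52} = 10^{46},$$

so even being "fairly confident" it cannot bootstrap leaves an expected loss that dwarfs any plausible cost of the precautionary policy.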
> We have to assume only that we will not significantly improve our understanding of what intelligence is without attempting to create it
I disagree. By analogy: our understanding of agriculture increased greatly without anyone creating an artificial photosynthetic cell. And yes, I know that photovoltaic panels exist, but they came only long after.
Do you mind spelling out the analogy (including where it breaks)? I didn't get it.
Reading my comment, I feel compelled to clarify what I meant:
Katja asked: in which worlds should we worry about what ‘intelligence’ designates not being what we think it does?
I responded: in all the worlds where increasing our understanding of 'intelligence' has the side effect of increasing attempts to create it, whether because it becomes more feasible, out of curiosity, or from an urge for power. In those worlds, expanding our knowledge raises the expected risk precisely because of those side effects.
Whether intelligence is what we thought will only be discovered after the expected risk has already risen: once we learn the fact, the risk either skyrockets or plummets. In hindsight, if it plummets, having learned more will look great. In hindsight, if it skyrockets, we are likely dead.
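A sketch of that structure, with notation introduced here purely for illustration: let $a(k)$ be the probability that someone attempts to create intelligence given our level of understanding $k$, $b$ the probability that an attempted intelligence bootstraps, and $H$ the harm if it does. Then

$$\mathbb{E}[\text{risk}] = a(k) \cdot b \cdot H.$$

Learning raises $a(k)$ right away, while $b$ is only revealed after attempts begin; so the expected risk goes up before we find out whether it then plummets or skyrockets.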