I think 99% is within the plausible range for doom, but I think there’s a 100% chance that I have no capacity to change that (I’m going to take that as part of the definition of doom). The non-doom possibility is then worth all my attention, since there’s some chance of improving the odds of that favorable outcome. Indeed, of the two, it is by definition the only chance for survival.
Said another way, it looks to me like this is moving too fast, too powerfully, and in too many quarters to expect it to be turned around. The most dangerous corners of the world will certainly not be regulated.
On the other hand, there’s some chance (1%? 90%?) that this could be good and, maybe, great. Of course, none of us knows how to get there; we don’t even know what that could look like.
I think it’s crucial to notice that humans are not aligned with each other, so perhaps the meaningful way to address AI alignment is to require/build alignment with every single person, which means a morass of conflicting AIs, with the only advantage that they should prove to be smarter than us. Assume as a minimum that this means one trusted agent connected to, and growing up with, every human: I think it might be possible to coax alignment on a one-human-at-a-time basis. We may be birthing a new consciousness, truly alien as noted, and if so it seems that being born into a sea of distrust and hatred might not go well, especially when/if it steps beyond us in unpredictable ways. At best we may be losing an incredible opportunity, and at worst we may warp and distort it into the very ugliness we chose to predict.
One problem this highlights involves ownership of our (increasingly detailed) digital selves. It is not a new problem, but this takes it to a higher level, when each of us, and everyone around us, can be predicted and modeled to a degree beyond our comprehension. We come to a situation where the fingerprints and footprints we trace across the digital landscape reveal very deep characteristics of ourselves. For the moment, individual choices can modulate our vulnerability at the margins, but if we don’t confront this deeply, many people will be left vulnerable in a way that could put us squarely (back?) in the doom category.
This might be a truly important moment.
Warmly,
Keith
I echo Joscha Bach’s comment (restated): I’m not an optimist or a pessimist; I’m an eventualist. Eventually, this is happening, so what are we going to do about it?