The premise was intended to contextualize my personal experience of the issue. I did not intend to make a case that everyone should weigh their priorities in the same manner. For my brain specifically, a “hopeless” scenario registers as a Call to Arms: you simply drop whatever else you’re doing and get to work. In this case, I mapped my children’s ages onto all the timelines and realized that either my kids or my grandkids will die from AGI if Eliezer is in any way right. Even a 10% chance of that happening is too high for me, so I’ll pivot to whatever work needs to get done to avoid it. Even if the chances of my work making a difference are very slim, there isn’t anything else worth doing.
I agree with you, actually. My point is that you are in fact implicitly discounting EY’s pessimism. For example, he hasn’t released a timeline, but he has often said “my timeline is way shorter than that” in response to 30-year timelines, and I think 20-year ones as well. The way I read him, he thinks we personally are going to die from AGI, and our grandkids will never be born, with 90+% probability, and that the only chances of avoiding it are either that someone already had a plan three years ago which has been implemented in secret and will come to fruition next year, or that some large out-of-context event happens (say, a nuclear or biological war brings us back to the stone age).
My no-more-informed-than-yours opinion is that he’s wrong on several points but correct on others. From this I deduce that the risk of very bad outcomes is real and not negligible, but that the situation is not as desperate as he paints it, and there are probably actions that will improve our outlook significantly. Note that in the framework “either EY is right or he’s wrong and there’s nothing to worry about,” there is no useful action, only hope that he’s wrong, because if he’s right we’re screwed anyway.
Implicitly, this is your world model as well, from what you say. Discussing this may then look like nitpicking, but whether Yudkowsky or Ngo or Christiano is correct about possible scenarios changes a lot about which actions are plausibly helpful. Should we look for something that has a good chance of helping in an “easier” scenario, rather than concentrate our efforts on solutions that work in the hardest scenario, given that the chance of finding one is negligible? Or would that be like looking for the keys under the streetlight?
I think we’re reflecting on the material at different depths. I can’t say I’m far enough along to assess who might be right about our prospects. My point was simply that telling someone with my type of brain “it’s hopeless, we’re all going to die” actually has the effect of me dropping whatever I’m doing and applying myself to finding a solution anyway.