Many factors are relevant to which possible futures you should upweight. For example, the following are all reasons to pay more attention to a possible set of futures (where a "possible set of futures" could be characterized by "AGI in 2050" or any other condition); a toy sketch of combining these factors follows the list:
They're more likely
They're more tractable
You see them more clearly (related: important events occur sooner, i.e. short timelines)
Other actors won't be paying attention around important events (related: important events occur sooner, i.e. short timelines)
You'll have more influence in them
P(doom) is closer to 50% in them
(Also take into account future research: for example, if you focus on the world in 2030, or assume that human-level AI is developed in 2030, you can be deferring, not neglecting, work on 2040.)
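To make the combination of these factors concrete, here is a minimal sketch (my own framing, not from the original discussion) that turns each scenario into a single attention weight by multiplying its probability, tractability, your influence in it, and a leverage term that peaks when P(doom) is near 50%. All scenario names, factor choices, and numbers below are illustrative assumptions.

```python
# Toy "attention weight" calculation for a few hypothetical scenarios.
# Every number here is made up for illustration.

scenarios = {
    # name: (probability, tractability, your_influence, p_doom)
    "AGI in 2030": (0.25, 0.8, 0.6, 0.45),
    "AGI in 2050": (0.45, 0.5, 0.3, 0.30),
    "AGI after 2070": (0.30, 0.3, 0.1, 0.10),
}

def attention_weight(prob, tractability, influence, p_doom):
    # Leverage is highest when P(doom) is near 50%: a fixed shift in the
    # odds moves the absolute probability most around that point.
    leverage = 1.0 - abs(p_doom - 0.5) * 2.0  # 1 at 50%, 0 at 0% or 100%
    return prob * tractability * influence * leverage

raw = {name: attention_weight(*params) for name, params in scenarios.items()}
total = sum(raw.values())
for name, weight in raw.items():
    print(f"{name:>15}: {weight / total:.2f}")  # normalized attention share
```

This is only one way to aggregate the factors; multiplying them assumes they matter roughly independently, which is itself a judgment call.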
I sort of agree with this in the abstract and disagree in practice. I think we're just very limited in what kinds of circumstances we can reasonably estimate or guess at. Even the claim above, "in a big proportion of the worlds where we survive, AGI probably gets delayed," is hard to reason about.
But I do kind of need to know the timescale I'm operating on when thinking about health, money, skill investments, etc., so I think you need to reason about it somehow.