It seems like your model is that we should be working in one of two modes:
Developing better alignment ideas
Implementing our current best alignment idea
However, in my model, there are a lot of alignment ideas which are only worth developing given certain timelines. [edit: Therefore, “you should be developing better alignment ideas anyway” is a very vague and questionably actionable strategy.]
To identify the crux here, would you care about timelines if it took five years to bring our best alignment idea to production?
In that case, I would care about timelines insofar as there was significant uncertainty about the probability of takeoff on a roughly five-year timescale. So I'd probably care a little about the difference between 10 years and 100 years, but still mostly not care about the difference between 30 years and 100 years.
Do you believe this is the crux?
(Conversation continued in a different thread.)
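To make the tradeoff in the exchange above concrete, here is a minimal sketch of the implicit expected-value reasoning. Everything in it is a hypothetical illustration rather than anything either party stated: the lognormal takeoff-time distribution, its spread, and the specific numbers are arbitrary modeling choices. The sketch estimates the chance that an idea needing about five years of development is finished before takeoff, under different median-timeline estimates.

```python
import numpy as np

# Toy illustration (hypothetical numbers, not taken from the conversation):
# an alignment idea only pays off if it is finished before takeoff, so its
# value depends on P(takeoff happens after the idea is ready).

rng = np.random.default_rng(0)

def p_ready_in_time(dev_years, median_takeoff_years, n=100_000):
    """Monte Carlo estimate of P(idea is finished before takeoff),
    assuming (arbitrarily, for illustration) a lognormal takeoff-time
    distribution with the given median and a fixed spread."""
    takeoff = rng.lognormal(mean=np.log(median_takeoff_years), sigma=0.8, size=n)
    return float((takeoff > dev_years).mean())

dev_years = 5  # the five-years-to-production hypothetical from the thread
for median in (10, 30, 100):
    p = p_ready_in_time(dev_years, median)
    print(f"median takeoff {median:>3} years: P(ready in time) ~= {p:.2f}")
```

With these arbitrary parameters, the probability of finishing in time is roughly 0.8 at a 10-year median and close to 1 at both 30 and 100 years, which mirrors the reasoning above: the 10-versus-100 difference matters somewhat, while the 30-versus-100 difference barely changes the decision.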