Do others agree with the pattern? Do you also see it as a problem?
Yes. No.
I don’t think it’s a problem, for a couple of reasons:

- My AI timelines are short enough that it’s not going to become very pressing.
- If it does become a pressing problem, it will be solved by a new generation of folks who will solve it themselves, better than we did, because they’ll live in the culture we affected (cf. the Reformation → the Enlightenment → Victorian-era science → General Semantics → LessWrong pipeline).
We could try to do something about it, but I think it’s quite likely we’d end up solving the wrong problem, because we’d be trying too hard to recreate what we needed when we were 20 rather than what new people coming up actually need. Each of us has to rediscover how to live for ourselves, so our duty is mainly to leave behind lots of clues about things we’ve already figured out to speed them along their way.