I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability—“AI go FOOM”.
Every organization that is not a country is far enough from that level of power that I don’t expect it to become catastrophically dangerous any time soon without a sudden jump in self-improvement.
I am aware that there’s an argument that at some point things will be changing rapidly:
We are witness to Moore’s law. A straightforward extrapolation of that says that at some point things will be changing rapidly. I don’t have an argument with that. What I would object to are saltations, which the term “suddenly” suggests, and which are contrary to evolutionary theory.
Probably, things will be progressing fastest well after the human era is over. That is a remote era about which we can only speculate. We have far more immediate issues to worry about than what is likely to happen then.
So: giant oaks from tiny acorns grow—and it is easiest to influence creatures when they are young.