I’m not totally sure what you’re referring to, but if you’re talking about Paul’s guess of “~15% on singularity by 2030 and ~40% on singularity by 2040”, then I want to point out that, looking at these two questions, his prediction seems in line with the Metaculus community prediction.
I don’t think it will ever seem plausible for an accident to turn everyone’s atoms into ML hardware, though, because we will probably remain close to an equilibrium with no free energy left for a powerful AI to harvest.
I disagree with the community on that. Knocking out the Silver Turing test, Montezuma’s Revenge (in the way described), 90% human-equivalent performance on Winogrande, and 75th percentile on the math SAT will either take longer to actually be demonstrated in a single unified ML system, OR it will happen far more than 39 months before “an AI which can perform any task humans can perform in 2021, as well or superior to the best humans in their domain”, which is an incredibly broad bar. If the questions mean what they are written to mean, as I read them, the gap is a hell of a lot more than 39 months (the median community estimate).
What I said concerns some important scenarios described by people who assign significant probability to a hostile hard-takeoff scenario. I put the comment here in this subthread because I don’t think it contributed much to the main discussion.