I’m confident about the consequences of criticality. It is a mathematical certainty: it creates a situation where all possible future timelines are affected. COVID is an example of criticality. Once you had sufficient evidence that growth was exponential, which was available in January 2020, you could be completely confident that all future timelines would contain a large number of COVID infections, and that spread would continue until quenching, which turned out to be infection of ~44% of the planet's population. (And you can estimate that final equilibrium number from the R₀.)
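To make the "estimate it from the R₀" step concrete, here is a minimal sketch, assuming the standard homogeneous SIR final-size relation z = 1 − exp(−R₀·z) is the kind of estimate meant here; the comment does not specify its model, and the R₀ values below are illustrative only, not taken from it.

```python
import math

def final_attack_rate(r0: float, tol: float = 1e-10) -> float:
    """Solve z = 1 - exp(-R0 * z) by fixed-point iteration.

    z is the fraction of a homogeneous, fully susceptible population
    eventually infected in a simple SIR model with basic reproduction
    number R0. Returns 0.0 for R0 <= 1 (subcritical: no epidemic).
    """
    if r0 <= 1:
        return 0.0
    z = 0.5  # initial guess; the iteration converges monotonically for R0 > 1
    while True:
        z_next = 1 - math.exp(-r0 * z)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next

# Illustrative R0 values only
for r0 in (1.3, 2.0, 3.0):
    print(f"R0 = {r0}: final attack rate ~ {final_attack_rate(r0):.0%}")
```

The point of the sketch is just that once you know the system is supercritical (R₀ > 1), the final equilibrium is pinned down by the growth parameter, not by anything that happens along the way.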
Once AI reaches critical mass, it's the same outcome. No futures exist where you won't see AI systems in use everywhere for a large variety of tasks (economic criticality), or billions of robots in use, and beyond that counts requiring scientific notation (physical criticality and true AGI criticality cases).
July 2033 thus requires the “January 2020” data to exist. There don’t have to be billions of robots yet, just a growth rate consistent with that.
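To make "a growth rate consistent with that" concrete, here is a rough back-of-the-envelope sketch. The starting robot count, target count, and dates are hypothetical placeholders, not figures from the comment.

```python
import math
from datetime import date

def required_doubling_time_days(current_count: float, target_count: float,
                                start: date, deadline: date) -> float:
    """Doubling time needed to grow from current_count to target_count
    by the deadline, assuming uninterrupted exponential growth."""
    days = (deadline - start).days
    doublings = math.log2(target_count / current_count)
    return days / doublings

# Hypothetical placeholder numbers, not a forecast: growing from ~4 million
# deployed robots to ~4 billion by July 2033.
dt = required_doubling_time_days(4e6, 4e9, date(2025, 7, 1), date(2033, 7, 1))
print(f"Required doubling time: ~{dt:.0f} days (~{dt / 365:.1f} years)")
```

In other words, what matters for the "January 2020"-style evidence is whether the observed doubling time is already in that range, not whether the absolute numbers are large yet.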
I do not know precisely when the minimum components needed to reach said critical mass will exist.
I gave the variables of the problem. I would like Paul, who is a world-class expert, to take the idea seriously and fill in estimates for the values of those variables. I think his model of what is transformative, and of what the requirements for transformation are, is completely wrong, and I explain why.
If I had to give a number I would say 90%, but a better expert could develop a better number.
Update: edited to 90%. I would put it at 100% because we are already past investor criticality, but the system can still quench if revenue doesn’t continue to scale.
It seems like criticality is sufficient, but not necessary, for TAI, and so only counting criticality scenarios causes underestimation.
This was a lot clearer, thank you.