You speak with such a confident, authoritative tone, but it is so hard to parse what your actual conclusions are.
You are refuting Paul’s core conclusion that there’s a “30% chance of TAI by 2033,” but your long refutation is met with: “wait, are you trying to say that you think 30% is too high or too low?” That is a pretty clear sign you’re not communicating your point properly.
Even your answer to his direct follow-up question, “Do you think 30% is too low or too high for July 2033?”, was hard to parse. You did not say something simple and easily understandable like, “I think 30% is too high for these reasons: …”; you said, “Once criticality is achieved the odds drop to 0 [+ more words].” The odds of what drop to zero? The odds of TAI? But you seem to be saying that once criticality is reached, TAI is inevitable? Even the rest of your long answer leaves it unclear where you really come down on the premise.
By the way, I don’t think I would even be making this comment if A) I didn’t have such a hard time understanding what your conclusions were and B) you didn’t have such a confident, authoritative tone that seemed to present your ideas as if they were patently obvious.
I’m confident about the consequences of criticality. It is a mathematical certainty: it creates a situation where all future possible timelines are affected. For example, covid was an instance of criticality. Once you had sufficient evidence to show the growth was exponential, which was available in January 2020, you could be completely confident that all future timelines would have a lot of covid infections in them, and that it would continue until quenching, which turned out to mean infection of ~44% of the population of the planet. (And you can estimate that final equilibrium number from the R0.)
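To spell out that last parenthetical with a quick sketch, assuming the standard SIR picture of an epidemic (which is what I have in mind here): the final attack fraction $z$, i.e. the share of the population ever infected, satisfies the final-size relation

$$z = 1 - e^{-R_0 z}$$

so given a credible estimate of $R_0 > 1$ you can solve that numerically for $z$ and know roughly where the process quenches, long before most of the infections have actually happened.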
Once AI reaches critical mass, it’s the same outcome. No futures exist where you won’t see AI systems in use everywhere for a large variety of tasks (economic criticality), or billions, even scientific-notation numbers, of robots in use (physical criticality and true AGI criticality cases).
July 2033 thus requires the “January 2020” data to exist. There don’t have to be billions of robots yet, just a growth rate consistent with that.
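To make “a growth rate consistent with that” concrete with purely hypothetical, illustrative numbers (not estimates): going from, say, $10^7$ deployed robots to $10^{10}$ requires

$$\frac{10^{10}}{10^{7}} = 10^{3} \approx 2^{10}$$

about ten doublings, so a sustained doubling time of roughly a year covers that gap in about a decade. What matters for July 2033 is whether a sustained doubling rate like that is already measurable, not whether the absolute numbers are large yet.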
I do not know precisely when the minimum components needed to reach said critical mass will exist.
I gave the variables of the problem. I would like Paul, who is a world-class expert, to take the idea seriously and fill in estimates for the values of those variables. I think his model for what is transformative, and what the requirements are for transformation, is completely wrong, and I explained why.
If I had to give a number I would say 90%, but a better expert could develop a better number.
Update: edited to 90%. I would put it at 100% because we are already past investor criticality, but the system can still quench if revenue doesn’t continue to scale.
It seems like criticality is sufficient, but not necessary, for TAI, and so only counting criticality scenarios causes underestimation.
This was a lot clearer, thank you.