I’m stupid.
I can obviously do many basic everyday tasks, and I can do adequate software engineering, data science, linear algebra, transgender research, and various other things.
But I know basically nothing about chemistry, biology, neurology, advertising, geology, rocket science, law, business strategy, project management, political campaigning, anthropology, astronomy, etc., etc. Further, because I’m mentally ill, I’m bad at social stuff and at paying attention. I can also only work on one task at a time, rather than being able to work on millions of tasks at once.
There are other humans whose stupidity lies in somewhat different areas than mine. Most of the areas I can think of are covered by someone, though there are exceptions; e.g., I’m not aware of anyone who can do millions of tasks at a time.
I think an AI could in principle become good at all of these things at once, could optimize across these wildly different fields to achieve things experts in the individual fields cannot, and could do it all with massive parallelism in order to achieve much more than I could.
Now that’s smart.
I find this a persuasive demonstration of how an AI could attain a massive quantitative capability gap over a human.
Quantity has a quality of its own:
An intelligence that becomes an expert in many sciences could see connections that others would not notice.
Being faster can make the difference between solving a problem on time and solving it too late. Merely being first means you can get a patent, become a Schelling point, establish a monopoly...
Reducing your mistake rate from 5% to 0.000001% allows you to design and execute much more complex plans (see the quick calculation after this comment).
(My point is that calling an advantage “quantitative” does not make it mostly harmless.)
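To make the mistake-rate point concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes, purely for illustration, that a plan consists of n independent steps and that every step must succeed, so the whole plan succeeds with probability (1 − error rate)^n. The function name and the step counts are made up for this example.

```python
# Back-of-the-envelope sketch: probability that an n-step plan succeeds
# when every step must succeed independently (a simplifying assumption).
def plan_success_probability(per_step_error_rate: float, n_steps: int) -> float:
    return (1.0 - per_step_error_rate) ** n_steps

# Compare a 5% per-step error rate with a 0.000001% one.
for error_rate in (0.05, 0.000001 / 100):
    for n_steps in (10, 100, 1000):
        p = plan_success_probability(error_rate, n_steps)
        print(f"error rate {error_rate:.8%}, {n_steps:4d} steps -> plan succeeds {p:.4%}")
```

Under this simple model, a 100-step plan with a 5% per-step error rate succeeds only about 0.6% of the time, while at a 0.000001% per-step error rate even a 1,000-step plan succeeds about 99.999% of the time.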
+1 for “quantity has a quality all its own”. “More is different” pops up everywhere.
This is because in real life, speed and resources matter precisely because they are finite. Unlike a Turing machine, which can assume arbitrarily large memory and arbitrarily long runtime, we have no such luxury.
I think the focus on quantitative vs. qualitative is a distraction. If an AI does become powerful enough to destroy us, it won’t matter whether that power is qualitatively greater or ‘just’ quantitatively greater.
I would state it slightly differently by saying: DragonGod’s original question is about whether an AGI can think a thought that no human could ever understand, not in a billion years, not ever. DragonGod is entitled to ask that question; I mean, there are no rules, people can ask whatever question they want! But we’re equally entitled to point out that it’s not an important question for AGI risk, or really for any other practical purpose that I can think of.
For my part, I have no idea what the answer to DragonGod’s original question is. ¯\_(ツ)_/¯