I would agree with “superintelligence is not literally omnipotence”, but I think you’re making overly strong claims in the opposite direction. My reasons are basically contained in Intelligence Explosion Microeconomics, That Alien Message, and Scott Alexander’s Superintelligence FAQ. For example...

I think “very” is much too strong, and insofar as this is true in the human world, that wouldn’t necessarily make it true for an out-of-distribution superintelligence, and I think it very much wouldn’t be. For example, all you need is superintelligence and an internet connection to find a bunch of zero-day exploits, hack into whatever you like, use it for your own purposes (and/or make tons of money), etc. All you need is superintelligence and an internet connection to carry on millions of personalized charismatic phone conversations simultaneously with people all around the world, in order to convince them, con them, or whatever. All you need is superintelligence and an internet connection to do literally every remote-work job on earth simultaneously.
Also, there are already robot bodies capable of doing, I think, the vast majority of physical-labor jobs. The only reason they’re not doing those jobs today is inadequate algorithms.
The apparatus humans use to “understand” other humans is not just a complex probabilistic function based on observing them; rather, it’s an immensely complex simulation which we adjust based on our observations, a simulation that we might never be able to run efficiently on a computer.
I think being charismatic over the internet is easier than you’re suggesting … if people would open up to ELIZA, I think they would open up to an AGI that has studied ELIZA and also has had extensive practice talking to people. Also, I don’t think that the algorithms underlying human empathy are as computationally intensive as you do, but that’s a more complicated story, maybe not worth getting into here.
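To give a sense of how little machinery ELIZA needed to get people to open up, here is a minimal sketch of the technique, keyword matching plus pronoun reflection; the patterns below are simplified stand-ins of my own, not Weizenbaum’s actual script:

```python
import re

# Tiny ELIZA-style responder: match a keyword pattern, reflect pronouns,
# and echo the user's own words back as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first/second person so the reply points back at the speaker."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback, in the spirit of the original

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```

A few dozen rules like these were enough for people in the 1960s to treat the program as a confidant; the point stands regardless of the exact rule set.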
That aside, the question remains of whether solving all “thinking bottlenecks” would leave us with a process of scientific advancement that is somewhat faster than what we have today (slow road to progress) or exponentially faster (singularity).
I think you’re overly focused on “scientific advancement”. Existing scientific and technological knowledge plus AGI could bring the entire world up to first-world standards of living, and eliminate the need for any human to ever work again. That’s nothing to scoff at!
The vast majority of “good thinkers” (under an IQ/math/language/memory == intelligence paradigm) are funnelled towards internet companies, with no extra requirements, not even a diploma, if you have enough “raw intelligence”. Under the EMH, that would indicate those companies have the most need for them. Yet internet companies are essentially devoid of any practical implications when it comes to reality. They aren’t always engaged in “zero-sum” games, but they are still “competitive”, in that their ultimate purpose is to convince people they want/need more things and that those things are more valuable; they aren’t “creating” any tangible things. On the other hand, research universities and companies interested in exploring the real world seem to care much less about intelligence...
I’m not sure you’re applying EMH properly. EMH would imply that the most intelligent people (if choosing jobs purely based on pay) would go to jobs where they have the highest marginal impact on firm revenue, compared to a marginally less intelligent person. If research universities don’t offer salaries as high as Facebook, that doesn’t mean that research universities don’t “care” about getting intelligent people; it probably means, for example, that the exchange rate between marginal professor intelligence and marginal grant revenue isn’t high enough to support Facebook-level salaries, and moreover universities are part of a job market where lots of smart people will definitely apply for a professor job even if the salary is much lower than Facebook’s. The fact that academia has rampant credentialism and Facebook doesn’t is, umm, not related to the EMH, I would suggest. I think it’s more related to Eliezer’s “Inadequate Equilibria” stuff.
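To make the “exchange rate” point concrete, here is a toy calculation; the function and every number in it are hypothetical, purely for illustration:

```python
# Toy model: a profit-maximizing employer will pay a salary premium for a
# marginally smarter hire up to that hire's marginal impact on revenue.
# All numbers below are invented for illustration.

def max_salary_premium(marginal_revenue_per_iq_point: float, iq_delta: float) -> float:
    """Upper bound on the extra pay an employer can rationally offer
    for a candidate who is `iq_delta` points smarter."""
    return marginal_revenue_per_iq_point * iq_delta

# Hypothetical exchange rates: an extra IQ point moves ad revenue a lot
# at a big internet firm, but moves grant revenue only a little.
internet_firm = max_salary_premium(marginal_revenue_per_iq_point=20_000, iq_delta=5)
university = max_salary_premium(marginal_revenue_per_iq_point=1_500, iq_delta=5)

print(f"internet firm can outbid by up to ${internet_firm:,.0f}")  # $100,000
print(f"university can outbid by up to ${university:,.0f}")        # $7,500
```

On this toy model, the salary gap tells you about each employer’s revenue function, not about who “cares” about intelligence.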
I think “very” is much too strong, and insofar as this is true in the human world, that wouldn’t necessarily make it true for an out-of-distribution superintelligence, and I think it very much wouldn’t be. For example, all you need is superintelligence and an internet connection to find a bunch of zero-day exploits, hack into whatever you like, use it for your own purposes (and/or make tons of money), etc. All you need is superintelligence and an internet connection to carry on millions of personalized charismatic phone conversations simultaneously with people all around the world, in order to convince them, con them, or whatever. All you need is superintelligence and an internet connection to do literally every remote-work job on earth simultaneously.
You’re thinking “one superintelligence against modern spam detection”… or really against spam detection from 20 years ago. It’s no longer possible to mass-call everyone in the world because, well, everyone is doing it.
Same with 0-day exploits: they exist, but most companies have, e.g., IP-based rate limiting on various endpoints that makes it prohibitively expensive to exploit things like Spectre.
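For readers unfamiliar with the mechanism being invoked here, a minimal sketch of IP-based rate limiting; the window and threshold are made-up values, and real deployments live in load balancers or WAFs rather than application code:

```python
import time
from collections import defaultdict

# Fixed-window rate limiter keyed by client IP (illustrative values only).
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100

_recent_requests: defaultdict[str, list[float]] = defaultdict(list)

def allow_request(client_ip: str) -> bool:
    """Return True if this IP is still under its per-window quota."""
    now = time.time()
    # Keep only timestamps inside the current window.
    timestamps = [t for t in _recent_requests[client_ip] if now - t < WINDOW_SECONDS]
    if len(timestamps) >= MAX_REQUESTS_PER_WINDOW:
        _recent_requests[client_ip] = timestamps
        return False  # throttled: scaling the attack means buying more IPs
    timestamps.append(now)
    _recent_requests[client_ip] = timestamps
    return True
```

The economic point is the one above: a throttled endpoint doesn’t make an exploit impossible, it just multiplies the attacker’s cost per attempt.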
And again, that’s with current tech, by the time a superintelligence exists you’d have equally matched spam detection.
That’s my whole point: intelligence works, but only in zero-sum games against other intelligence, and those games aren’t entirely fair, thus safeguarding the status quo.
<Also, I’d honestly suggest that you at least read AI alarmists with some knowledge in the field, there are plenty to find, since it generates funding, but reading someone that “understood AI” 10 years ago and doesn’t own a company valued at a few hundred millions is like reading someone that “gets how trading works”, but works at Walmart and lives with his mom>
reading someone that “understood AI” 10 years ago and doesn’t own a company valued at a few hundred millions is like reading someone that “gets how trading works”, but works at Walmart and lives with his mom
Such an interesting statement. Do you mean this literally? You believe that everyone on Earth who “understood AI” ten years ago became a highly successful founder?
Roughly speaking, yes. I’d grant some % error, and I assume most would be cofounders, or among the first researchers or engineers.
Back then, people literally made single-niche image-recognition startups that worked.
I mean, even now there are so many niches for ML where a team of rather mediocre thinkers (compared to, say, the guys at DeepMind) can get millions in seed funding with basically zero revenue and very aggressive burn, just by proving, very abstractly, that they can solve one problem or another that nobody else is solving.
I’m not sure what the deluge of investment and contracts was like in 2008, but basically everyone publishing stuff about convolutions on GPUs is a millionaire now.
It’s obviously easy to “understand that it was the right direction”… with the benefit of hindsight. Much like now everyone “understands” transformers are the future of NLP.
But in general the field of “AI” has very few real visionaries who by luck or skill bring about progress, and even being able to spot said visionaries and get on the bandwagon early enough is a way to get influential and wealthy beyond belief.
I don’t claim I’m among those visionaries, nor that I’ve found the correct bandwagon. But some people obviously do, since the same guys are involved in an awful lot of industry-shifting orgs and research projects.
I’m not saying you should only listen to those guys, but for laying out groundwork, forming mental models on the subject, and distilling fact from media fiction, those are the people you should listen to.
<Also, I’d honestly suggest that you at least read AI alarmists with some knowledge in the field, there are plenty to find, since it generates funding, but reading someone that “understood AI” 10 years ago and doesn’t own a company valued at a few hundred millions is like reading someone that “gets how trading works”, but works at Walmart and lives with his mom>
A person who runs a company worth a few hundred millions is mainly spending his time managing people. There are plenty of cases where it makes more sense to listen to scientists who spend their time studying the subject than to managers when it comes to predicting future technology.