I think “very” is much too strong, and insofar as this is true in the human world, that wouldn’t necessarily make it true for an out-of-distribution superintelligence, and I think it very much wouldn’t be. For example, all you need is superintelligence and an internet connection to find a bunch of zero-day exploits, hack into whatever you like, use it for your own purposes (and/or make tons of money), etc. All you need is superintelligence and an internet connection to carry on millions of personalized charismatic phone conversations simultaneously with people all around the world, in order to convince them, con them, or whatever. All you need is superintelligence and an internet connection to do literally every remote-work job on earth simultaneously.
You’re thinking “one superintelligence against modern spam detection”… or really, against the spam detection of 20 years ago. It’s no longer possible to mass-call everyone in the world because, well, everyone is already doing it.
Same with 0-day exploits: they exist, but most companies have e.g. IP-based rate limiting on various endpoints, which makes it prohibitively expensive to exploit things like Spectre.
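(To make that concrete, here’s a rough sketch of the kind of per-IP rate limiting I mean: a simple fixed-window counter. The window size, quota, and names are invented for illustration, not any particular product’s setup.)

```python
# Minimal sketch of per-IP, fixed-window rate limiting (illustrative only:
# the window, quota, and names are made up; real deployments typically use
# middleware such as nginx's limit_req or a CDN/WAF rather than app code).
import time
from collections import defaultdict

WINDOW_SECONDS = 60            # length of each counting window
MAX_REQUESTS_PER_WINDOW = 100  # per-IP quota inside one window

_counters = defaultdict(lambda: [0.0, 0])  # ip -> [window_start, request_count]

def allow_request(ip: str) -> bool:
    """Return True if this IP is still under its quota for the current window."""
    now = time.time()
    window_start, count = _counters[ip]
    if now - window_start >= WINDOW_SECONDS:
        _counters[ip] = [now, 1]          # start a fresh window for this IP
        return True
    if count < MAX_REQUESTS_PER_WINDOW:
        _counters[ip][1] = count + 1
        return True
    return False                          # over quota: reject or slow down
```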
And again, that’s with current tech; by the time a superintelligence exists, you’d have spam detection that has kept pace with it.
That’s my whole point: intelligence works, but only in zero-sum games against other intelligence, and those games aren’t entirely fair, which safeguards the status quo.
<Also, I’d honestly suggest that you at least read AI alarmists with some knowledge of the field (there are plenty to find, since it generates funding), but reading someone that “understood AI” 10 years ago and doesn’t own a company valued at a few hundred millions is like reading someone that “gets how trading works” but works at Walmart and lives with his mom>
reading someone that “understood AI” 10 years ago and doesn’t own a company valued at a few hundred millions is like reading someone that “gets how trading works” but works at Walmart and lives with his mom
Such an interesting statement. Do you mean this literally? You believe that everyone on Earth who “understood AI” ten years ago became a highly successful founder?
Roughly speaking, yes. I’d grant some margin of error, and I assume most would be cofounders, or among the first researchers or engineers.
Back then, people literally built single-niche image-recognition startups that worked.
I mean, even now there are so many niches for ML where a team of rather mediocre thinkers (compared to, say, the people at DeepMind) can get millions in seed funding with basically zero revenue and a very aggressive burn rate, just by proving, very abstractly, that they can solve some problem nobody else is solving.
I’m not sure what the deluge of investment and contracts was like in 2008, but basically everyone publishing stuff about convolutions on GPUs is a millionaire now.
It’s obviously easy to “understand that it was the right direction”… with the benefit of hindsight. Much like now everyone “understands” that transformers are the future of NLP.
But in general, the field of “AI” has very few real visionaries who, by luck or skill, bring about progress, and even being able to spot those visionaries and get on the bandwagon early enough is a way to become influential and wealthy beyond belief.
I don’t claim I’m among those visionaries, nor that I found the correct bandwagon. But some people obviously do, since the same names are involved in an awful lot of industry-shifting orgs and research projects.
I’m not saying you should only listen to those guys, but for laying the groundwork, forming mental models of the subject, and separating fact from media fiction, those are the people you should listen to.
<Also, I’d honestly suggest that you at least read AI alarmists with some knowledge of the field (there are plenty to find, since it generates funding), but reading someone that “understood AI” 10 years ago and doesn’t own a company valued at a few hundred millions is like reading someone that “gets how trading works” but works at Walmart and lives with his mom>
A person who runs a company worth a few hundred millions mainly spends his time managing people. When it comes to predicting future technology, there are plenty of cases where it makes more sense to listen to scientists who spend their time studying the subject than to managers.