I’m asking for references because I don’t have them. It’s a shame that the people who are able, ability-wise, to explain the flaws in the MIRI/FHI approach, actual AI researchers, aren’t able, time-wise, to do so. It leads to MIRI’s views dominating in a way that they should not. It’s anomalous that a bunch of amateurs should become the de facto experts in a field, just because they have funding, publicity, and spare time.
It’s not a unique circumstance. I work in Bitcoin and I assure you we are seeing the same thing right now. I suspect it is a general phenomenon.
MIRI/FHI arguments essentially boil down to “you can’t prove that AI FOOM is impossible”.
Arguments of this form, e.g. “You can’t prove that [snake oil/cryonics/cold fusion] doesn’t work”, “You can’t prove there is no God”, etc., can’t be conclusively refuted.
Various AI experts have expressed skepticism about an imminent super-human AI FOOM, pointing out that the capabilities required for such a scenario, if it is even possible, are far beyond what they see in their daily cutting-edge research on AI, and that there are still lots of problems that need to be solved before even approaching human-level AGI. I doubt that these experts would have much to gain from continuing to argue over all the countless variations of the same argument that MIRI/FHI can generate.
I don’t agree.
That’s a 741-page book; can you summarize a specific argument?
For almost any goal an AI had, the AI would make more progress towards this goal if it became smarter. As an AI became smarter, it would become better at making itself smarter. This process continues. Imagine if it were possible to quickly make a copy of yourself that had a slightly different brain. You could then test the new self and see if it was an improvement. If it was, you could make this new self the permanent you. You could do this to quickly become much, much smarter. An AI could do this.
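Read literally, that copy-test-adopt process is a hill-climbing loop over self-modifications. Here is a minimal sketch of the loop, assuming hypothetical `evaluate` and `propose_variant` functions that stand in for the genuinely hard parts (the toy demo at the end just optimizes a single number, not a mind):

```python
import random

def self_improvement_loop(agent, evaluate, propose_variant, steps=1000):
    """Hill-climb over self-modifications: copy, tweak, test, keep if better.

    `evaluate` and `propose_variant` are hypothetical stand-ins for the hard
    parts (benchmarking a mind, generating a candidate modification).
    """
    score = evaluate(agent)
    for _ in range(steps):
        candidate = propose_variant(agent)     # "a copy of yourself with a slightly different brain"
        candidate_score = evaluate(candidate)  # "test the new self"
        if candidate_score > score:            # "make this new self the permanent you"
            agent, score = candidate, candidate_score
    return agent, score

# Toy demo: the "agent" is just a real number and "intelligence" peaks at 10.
best, best_score = self_improvement_loop(
    agent=0.0,
    evaluate=lambda x: -(x - 10) ** 2,
    propose_variant=lambda x: x + random.gauss(0, 1),
)
print(best, best_score)
```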
True, but it is likely that there are diminishing returns in how much adding more intelligence can help with other goals, including the instrumental goal of becoming smarter.
Nope, doesn’t follow.
Eventual diminishing returns, perhaps, but probably long after it was smart enough to do what it wanted with Earth.
A drug that raised the IQ of human programmers would make the programmers better programmers. Also, intelligence is the ability to solve complex problems in complex environments, so it does (tautologically) follow.
Why?
The proper analogy is with a drug that raised the IQ of the researchers who invent the drugs that increase IQ. Does this lead to an intelligence explosion? Probably not. If the number of IQ points you need in order to discover the next drug in a constant amount of time increases faster than the number of IQ points that the next drug gives you, then you will run into diminishing returns.
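That condition can be made concrete with a toy recursion: let g(x) be the capability gained per improvement cycle at capability x, and ask whether the gains compound or shrink as x grows. A minimal sketch with made-up gain functions, purely illustrative rather than estimates of anything real:

```python
def run(gain, x0=100.0, steps=50):
    """Iterate x_{n+1} = x_n + gain(x_n) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + gain(xs[-1]))
    return xs

# Gain proportional to current capability: each cycle's gain is larger than
# the last, so the trajectory compounds (the "intelligence explosion" case).
explosive = run(lambda x: 0.05 * x)

# Gain that shrinks as each further improvement gets harder: growth slows
# down instead of compounding (the diminishing-returns case).
plateauing = run(lambda x: 100.0 / x)

print(explosive[-1], plateauing[-1])  # roughly 1147 vs 141 after 50 cycles
```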
It doesn’t seem to be much different with computers.
Algorithmic efficiency is bounded: for any given computational problem, once you have the best algorithm for it, for whatever performance measure you care about, you can’t improve on it any more. And in fact, long before you reach the perfect algorithm, you’ll already have run into diminishing returns in terms of effort vs. improvement: past some point you are tweaking low-level details in order to get small performance improvements.
Once you have maxed out algorithmic efficiency, you can only improve by increasing hardware resources, but this 1) requires significant interaction with the physical world, and 2) runs into asymptotic complexity issues: for most AI problems worst-case complexity is at least exponential, and average-case complexity is more difficult to estimate but most likely super-linear. Take a look at the AlphaGo paper, for instance: figure 4c shows how Elo rating increases with the number of CPUs/GPUs/machines. The trend is logarithmic at best, logistic at worst.
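If playing strength really scales roughly logarithmically with compute, the flip side is that each fixed gain in strength costs multiplicatively more hardware. A toy calculation, with constants invented for illustration rather than taken from the AlphaGo paper:

```python
import math

def rating(compute, a=200.0, c0=1.0):
    """Toy model: rating grows logarithmically with compute.

    `a` (points per doubling) and `c0` (baseline compute) are illustrative
    constants, not values fitted to any real system.
    """
    return a * math.log2(compute / c0)

# Each doubling of compute buys the same fixed number of rating points,
# so the compute needed for a fixed improvement grows exponentially:
# +200 points costs 2x the hardware, +400 costs 4x, +800 costs 16x.
for doublings in range(5):
    compute = 2.0 ** doublings
    print(f"{compute:5.0f}x compute -> +{rating(compute):4.0f} points")
```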
Now of course you could insist that it can’t be disproved that AGI will reach a strongly super-human level before significant diminishing returns kick in, but, as I said, this is an unfalsifiable argument from ignorance.
Cute. Now try quantifying that argument. How much data needs to be considered/collected to make each incremental improvement? Does that grow over time, and how fast? What is the failure rate (the chance a change makes you dumber, not smarter)? What is the critical failure rate (the chance a change makes you permanently incapacitated)? How much testing and analysis is required to be confident of a low critical error rate?
When you look at it as an engineer, not a philosopher, the answers are not so obvious.
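Those questions can at least be turned into explicit knobs. A back-of-the-envelope simulation in which every parameter the comment asks about (testing cost per attempted change, how fast that cost grows, failure rate, critical-failure rate) is a placeholder to be argued over, not an estimate:

```python
import random

def simulate(steps=10_000,
             test_cost=1.0,        # effort to test/analyze one candidate change
             cost_growth=1.001,    # growth in that effort per accepted change
             p_improve=0.10,       # chance a change makes you smarter
             p_critical=0.001,     # chance a change permanently incapacitates you
             gain=1.0):            # capability gained per successful change
    """Toy model of iterated self-modification under failure and testing costs.

    All parameter values are placeholders; the outcome hinges entirely on how
    they are quantified, which is exactly the question raised above.
    """
    capability, effort_spent, cost = 0.0, 0.0, test_cost
    for _ in range(steps):
        effort_spent += cost
        roll = random.random()
        if roll < p_critical:
            return capability, effort_spent, "incapacitated"
        if roll < p_critical + p_improve:
            capability += gain
            cost *= cost_growth   # later improvements need more data/testing
    return capability, effort_spent, "still running"

print(simulate())
```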