What does the “further inferences and estimations” refer to?
Basically, the hundreds of hours it would take MIRI to close the inferential distance between themselves and AI experts. See e.g. this comment by Luke Muehlhauser:
I agree with Eliezer that the main difficulty is in getting top-quality, relatively rational people to spend hundreds of hours being educated, working through the arguments, etc.
If your arguments are this complex then you are probably wrong.
But note that an intelligence explosion is sufficient but not necessary for AGI to be risky: just because development is gradual doesn’t mean that it will be safe.
I do not disagree with that kind of AI risk. If MIRI is working on mitigating AI risks that do not require an intelligence explosion, a certain set of AI drives, and a bunch of, from my perspective, very unlikely developments... then I was not aware of that.
Hard to say for sure, but note that few technologies are safe unless people work to make them safe, and the more complex the technology, the more effort is needed to ensure that no unexpected situations crop up where it turns out to be unsafe after all.
This seems very misleading. We are, after all, talking about a technology that has to work perfectly in order to be actively unsafe. You have to get lots of things right: the AI has to care about taking over the world, know how to improve itself, manage to hide its true intentions until it can do so, and so on.
Expert disagreement is a viable reason to put reduced weight on the arguments, true, but this bullet point doesn’t indicate exactly which parts they disagree on.
There is a reason why MIRI doesn’t know this. Look at the latest interviews with experts conducted by Luke Muehlhauser. He doesn’t even try to figure out whether they disagree with Xenu; he only asks uncontroversial questions.
This is what frustrates me about a lot of Kruel’s comments: often they seem to be presupposing some awfully narrow and specific scenario...
Crazy... this is exactly why I am criticizing MIRI: a focus on an awfully narrow and specific scenario rather than on AI risks in general.
...suppose that if you control five computers rather than just one, you can’t become qualitatively more intelligent, but you can do five times as many things at the same time...
Consider that the U.S. had many more, and smarter, people than the Taliban. The bottom line is that the U.S. devoted far more output per man-hour to defeating a vastly inferior enemy. Yet its advantage apparently scaled sublinearly.
I guess this means something like “will there be a point where it won’t be useful for the AI to invest in self-improvement anymore”. If you frame it that way, the answer is obviously yes: you can’t improve forever. But that’s not an interesting question: the interesting question is whether the AI will hit that point before it has obtained any considerable advantage over humans.
I do not disagree that there are minds better at social engineering than, e.g., Hitler’s, but I strongly doubt that there are minds which are vastly better. Optimizing a political speech for a million subjective years rather than ten won’t make it one hundred thousand times more persuasive.
Is there any reason to assume that such a process would have produced creatures with no major room for improvement?
The question is whether, just because humans are much smarter and stronger, they can actually wipe out mosquitoes. Well, they could... but doing so would be either very difficult or harmful to humans themselves.
Also, who’s to say that the AI couldn’t do real-world experimentation?
You already need to build huge particle accelerators to gain new physical insights, and you need a whole technological civilization in order to build an iPhone. You can’t get around this easily or overnight.
Everything else you wrote I have already discussed in detail in various posts.