But can you summarize what makes you think a German-Shepherd-level AI could self-improve at all?
I might not have made this clear: I don’t.
What I believe is that to build a German-Shepherd-level AI in the first place, you either need to:
1) create something that will learn and improve itself up to the corresponding level and then top out there somehow, or
2) understand enough about cognition and intelligence to fully abstract already-developed German-Shepherd-level intelligence in your initial codebase itself (AKA “spontaneously designed hard-coded virtual intelligence”), or
3) incrementally add more and more “pieces of intelligence” and “algorithm refinements” until your piece of generalized software can reason and learn as well as a German Shepherd through its collection of procedural tricks. This could reasonably be done either through machine learning / neural networks or through manual operator intervention (i.e., adding or replacing code once you notice a better way to do something); a toy sketch contrasting this with option 1) appears below.
There may be other methods that would be more practical, but if so, the difficulty of figuring them out seems high enough that the total invention-to-finished-product difficulty would be even greater than for the above solutions.
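To caricature the structural difference between 1) and 3), here is a minimal Python sketch. Everything in it (the class, the function names, the numbers, and especially the idea that “capability” is a single scalar) is invented purely for illustration; it is not an actual design, just a way to show where the design work sits in each approach:

```python
"""Toy caricature of approaches 1) and 3). All names and numbers are illustrative only."""

import random

GSD_LEVEL = 100.0  # stand-in for "German-Shepherd-level" capability


class ToyAI:
    """Capability collapsed into a single number, which real intelligence obviously isn't."""

    def __init__(self, capability: float):
        self.capability = capability

    def improve_self(self) -> "ToyAI":
        # Approach 1): the system itself does the design work at each step.
        return ToyAI(self.capability * 1.1)


def approach_1_seed(seed: ToyAI) -> ToyAI:
    """A seed that improves itself, then somehow tops out at GSD level."""
    ai = seed
    while ai.capability < GSD_LEVEL:
        ai = ai.improve_self()
    return ai


def approach_3_incremental(ai: ToyAI) -> ToyAI:
    """Humans (or an ML pipeline) keep bolting on better pieces."""
    while ai.capability < GSD_LEVEL:
        # A human operator notices a better way to do something and patches it in.
        ai = ToyAI(ai.capability + random.uniform(0.5, 2.0))
    return ai


if __name__ == "__main__":
    print(approach_1_seed(ToyAI(1.0)).capability)
    print(approach_3_incremental(ToyAI(1.0)).capability)
```

The question being debated here is precisely whether the improve_self step is easier to get working than the human patching loop, and whether it would really stop at GSD_LEVEL.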
From personal experience in attempting (and failing at) both 2) and 3) in the past, as well as from discussions with professional videogame AI programmers (decidedly not the same “AI” as the type generally discussed here, though they would still benefit immensely from any of the above three solutions in various ways) who have also failed, I have strong reason to believe that solution 1) is easier.
None of the literature I’ve read so far even suggests that building an AI that is, by deliberate design, already at human-level intelligence the moment it is turned on is anywhere near optimal, or even within the same order of magnitude of difficulty as FOOMing from the simplest possible code. Of course, it might be that the simplest possible foom-capable mind is provably at least as smart as a human, but if so, our prospects of making one in the first place would be low. Judging by the papers published by SIAI, this does not seem to be the case (though I’m very willing to embrace the opposite belief if evidence supports it, since from an X-risk perspective I’d rather we currently be too stupid to make an AGI at all).
[to build a German-Shepherd-level AI you need to] create something that will learn and improve itself up to the corresponding level and then top out there somehow[.]
I’m not arguing yet, in case I’m missing something, but why do you think that something stupider than a German Shepherd would be better at improving itself up to GSD levels (and stopping right there) than a human would be at doing the same job, i.e., at improving the potential AGSD (the artificial German Shepherd), not the human itself?
Or rather, why does it seem like you think it’s obvious? (Again, I’m not arguing, it just sounds counterintuitive and I’m curious what your intuition is.) It sounds a bit like you’re saying something like:
“Hey, I can’t tell, just by looking at my brain-damaged dog, how to build a non-brain-damaged dog. Also, repairing its brain is too hard (many dog experts have tried and all failed). I think it’d be easier to make a brain-damaged dog that will fix its own brain damage.”
(Note that AGI in general does not fall under this analogy. Foom scenarios assume the seed is at least human-level, at least at the task of improving its own intelligence; the whole premise of fooming is based on that initial advantage. Also note, I’m not saying it’s obviously impossible to make a super-idiot-savant AI that’s stupider than a GSD in general but really good at improving itself, just that it goes really hard against my intuition, and I’m curious why yours doesn’t. Don’t feel like you have to justify your intuition to me, but it would be nice if you described it in more detail.)
(Sorry for the belated replies; I’ve been completely off LW for a few months and am only now going through my inbox.)
I’m not arguing yet, in case I’m missing something, but why do you think that something stupider than a German Shepherd would be better at improving itself up to GSD levels (and stopping right there) than a human would be at doing the same job, i.e., at improving the potential AGSD (the artificial German Shepherd), not the human itself?
This is not what I think, or at least not what I expressed. My thoughts are similar, but I’ll elaborate below; first, this was one option presented in parallel with the option where a human designs a complete AGSD and then turns it on, and with the option where a bunch of humans design sub-AGSD iterations until they obtain a final AGSD.
As for the elaboration: I do think it’s easier to build a so-called super-idiot-savant AI (sub-GSD general intelligence, but with better-than-human self-improvement) than to build any sort of “out-of-the-box” general intelligence. I don’t currently recall my reasons, since my mind is set in a different mode, but the absurd and extreme case is that of having a human child. A human child is stupider than a GSD, but learns better than adult humans. It is also much simpler to do than any sort of AI programming. ;) But I say this last part only in jest, and it isn’t particularly relevant to the discussion.
OK, thanks for clarifying.