Because by the time you’ve managed to solve the problem of making it to humanville, you probably know enough to keep going.
There’s nothing preventing us from learning how to self-modify. The human situation is strange because evolution is so opaque. We’re given a system that no one understands and no one knows how to modify and we’re having to reverse engineer the entire system before we can make any improvements. This is much more difficult than upgrading a well-understood system.
If we manage to create a human-level AI, someone will probably understand very well how that system works. It will be accessible to a human-level intelligence which means the AI will be able to understand it. This is fundamentally different from the current state of human self-modification.
Leplen,

I agree completely with your opening statement: if we, the human designers, understand how to make human-level AI, then it will probably be fairly straightforward to understand how to make something smarter. An easy example is the obvious bottleneck human intellects face with our limited “working” executive memory.
The solutions we find for lots of problems are heavily encumbered by how many things one can keep in mind at “the same time” while seeing the key connections, all in one act of synthesis. We all struggle privately with this… some issues cannot ever be understood by chunking top-down, biting off a piece at a time, “grokking” the next piece, and gluing it all together at the end.
Some problems resist decomposition into teams of brainstormers for the same reason: a single comprehending point of view seems to be required to see a critical-sized set of factors (which varies by problem, of course).
Hence we have to rely on getting lots of pieces into long-term memory (maybe over decades of study) and hoping that incubation, and some obscure processes occurring outside consciousness, will eventually bubble up a solution (the “dream of a snake biting its tail for the benzene ring” sort of thing).
If we could build human-level AGI, of course we could eliminate such bottlenecks, along with others we will have come to understand in cracking the design problems. So I agree, and it is actually one of my reasons for wanting to do AI.

So, yes, the artificial human-level AI could understand this.
My point was that we can build in physical controls and monitoring of the AIs. If their key limits were in ASICs, ROMs, etc., and we could monitor them, we would immediately see if they attempted to take over a chip factory in, say, Iceland, and we could physically shut the AIs down or intervene. We can “stop them at the airport.”
It doesn’t matter if designs are leaked onto the internet and an AI gets near an internet terminal and looks itself up. I can look MYSELF up on PubMed, but I can’t just think my BDNF levels into improving here and there, or my DA-to-5-HT ratio into improving elsewhere.
To strengthen this point about the key distinction between knowing and doing, let me explain why I disagree with your second point, at least with the force of it.
In effect, OUR designs are already leaked onto the internet.
I think the information we would need to self-modify our wetware is within reach. Good neuroscientists, or even someone like me, a very smart amateur (and there are far more knowledgeable cognitive-neurobiology researchers than myself), can nearly tell you, both in principle and in some biological detail, how to do some intelligence amplification by modifying known aspects of our neurobiology.
(I could, especially with help, come up with some detail on a scale of months about changing neuromodulators, neurosteroids, connectivity hotspots, and factors regulating LTP (one has to step lightly, of course, just as one would if tinkering with telomeres or Hayflick limits), and given a budget, a smart team, and no distractions, I bet that within a year or two a team could do something quite significant about how to change the human brain: carefully altering areas of plasticity, selective neurogenesis, etc.)
So for all practical purposes, we are already like an AI built out of ASICs, one that would not so much have to reverse engineer its own design as gain access to instrumentality. And again, what about physical security methods? They would work for a while, I am saying. That would give us a key window to gain experience and to see whether the AIs develop “psychological problems” or tendencies to go rogue (given that they are close enough to being sentient, or that they have autonomy and some degree of “creativity”). (I am writing an essay on that; it is not as silly as it sounds.)
The point is, as long as the AIs need significant external instrumentality to instantiate a new design, and as long as they can be monitored and physically controlled, we can nearly guarantee ourselves a designed layover at Humanville.
We don’t have to put their critical design architecture in flash memory inside their heads, so to speak, and then, on top of that, give them a designed ability to reflash their own architecture just by “thinking” about it.
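To make that knowing-versus-doing separation concrete, here is a toy software sketch (purely illustrative, with made-up class and parameter names; the actual proposal is to enforce this in ROM/ASIC hardware, which software alone obviously could not guarantee). The agent can read its own design all it likes, but writing it would require an external, physically controlled channel:

```python
# Toy illustration only (hypothetical classes and names): an agent whose core
# design is readable from the inside but not writable from the inside.

class ReadOnlyCore:
    """Stands in for architecture fixed in ROM/ASIC at manufacture time."""

    def __init__(self, design_parameters):
        # Bypass our own __setattr__ exactly once, at "manufacture" time.
        object.__setattr__(self, "_design", dict(design_parameters))

    @property
    def design(self):
        # Introspection is allowed: the AI can "look itself up."
        return dict(self._design)

    def __setattr__(self, name, value):
        raise PermissionError("core design is not writable from inside the system")


class Agent:
    def __init__(self, core):
        self.core = core     # fixed design
        self.scratch = {}    # ordinary mutable working state is unrestricted

    def attempt_self_modification(self, key, value):
        try:
            setattr(self.core, key, value)
            return "modified"
        except PermissionError as err:
            return f"blocked: {err}"


agent = Agent(ReadOnlyCore({"working_memory_slots": 7}))
print(agent.core.design)                                              # knowing: allowed
print(agent.attempt_self_modification("working_memory_slots", 7000))  # doing: blocked
```

A software check like this would of course be trivial for a capable AI to route around; that is exactly why the argument leans on physical media and physical monitoring rather than on code.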
If I were an ASIC-implemented AI, why would I need an ASIC factory? Why wouldn’t I just create a software replica of myself on general-purpose computing hardware, i.e., become an upload?
I know next to nothing about neuroscience, but as far as I can tell, we’re a long way from the sort of understanding of human cognition necessary to create an upload, whereas going from an ASIC to an upload is trivial.
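To spell out what I mean by “trivial” (a deliberately tiny, made-up example, not a claim about any actual AI hardware): once a fixed-function design is known at the gate level, running it on general-purpose hardware is just evaluating the same logic in software.

```python
# Toy example: emulating a known gate-level design (a 1-bit full adder chained
# into an 8-bit ripple-carry adder) in software. A real AI ASIC would be
# enormously larger, but nothing changes in principle.

def full_adder(a, b, carry_in):
    """The netlist a chip would implement, evaluated on a general-purpose CPU."""
    s1 = a ^ b                               # XOR gate
    total = s1 ^ carry_in                    # XOR gate
    carry_out = (a & b) | (s1 & carry_in)    # AND/OR gates
    return total, carry_out

def ripple_add(x, y, width=8):
    """Chain the same one-bit cell 'width' times, exactly as the hardware would."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

assert ripple_add(77, 51) == 128    # the software replica behaves like the chip
```

The hard part of uploading a human is that nobody has the wiring diagram; for a designed ASIC, the diagram is the design.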
I’m also not at all convinced that I want a layover at humanville. I’m not super thrilled by the idea of creating a whole bunch of human-level intelligent machines with values that differ widely from my own. That seems functionally equivalent to proposing a mass-breeding program aimed at producing psychologically disturbed humans.