How do we know that an artificial intelligence is even possible? I understand that, in theory, assuming that consciousness is completely naturalistic (which seems reasonable), it should be possible to make a computer do the things neurons do to be conscious and thus be conscious. But neurons work differently than computers do: how do we know that it won’t take an unfeasibly high amount of computer-form computing power to do what brain-form computing power does?
As far as we know, it could easily require an insanely high amount of computing power. The thing is, there are already things out there with as much computing power as a human brain: namely, human brains themselves. So if we ever become capable of building computers out of the same sort of stuff that brains are built out of (namely, very small machines that compute with chemical and electrical signals), we'll certainly be able to create computers with the same amount of raw power as the human brain.
How hard will it be to create intelligent software to run on these machines? Well, building an intelligent mind from scratch is hard enough that humans haven't managed it in a few decades of trying, but easy enough that evolution has done it in three billion years. Beyond that, I don't think we know much about how hard it is.
I’ve seen some mentions of an AI “bootstrapping” itself up to super-intelligence. What does that mean, exactly? Something about altering its own source code, right?
Well, “bootstrapping” is the idea of AI “pulling itself up by its own bootstraps”, or, in this case, “making itself more intelligent using its own intelligence”. The idea is that every time the AI makes itself more intelligent, it will be able to use its newfound intelligence to find even more ways to make itself more intelligent.
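To make that feedback loop concrete, here is a toy numerical sketch. The "intelligence" score and the growth rule are made up purely for illustration; nobody knows what the real returns to self-improvement look like.

```python
# Toy model of the bootstrapping loop. The numbers are invented:
# we just assume each round of self-improvement yields a gain
# proportional to the AI's current intelligence.

def bootstrap(intelligence: float, steps: int) -> list[float]:
    history = [intelligence]
    for _ in range(steps):
        gain = 0.1 * intelligence   # smarter AI finds bigger improvements
        intelligence += gain
        history.append(intelligence)
    return history

print(bootstrap(1.0, 10))  # compounds like interest: 1.0, 1.1, 1.21, ...
```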
Is it possible that the AI will eventually “hit a wall”, and stop finding ways to improve itself? In a word, yes.
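In the same toy model, a wall shows up if each round of improvement only closes part of the gap to some hard limit (physical, mathematical, or otherwise). The cap and the rule here are assumptions for illustration, not a prediction:

```python
def bootstrap_with_wall(intelligence: float, cap: float, steps: int) -> list[float]:
    history = [intelligence]
    for _ in range(steps):
        intelligence += 0.1 * (cap - intelligence)  # diminishing returns near the cap
        history.append(intelligence)
    return history

print(bootstrap_with_wall(1.0, cap=5.0, steps=30))  # creeps up toward 5.0, then stalls
```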
How does it know what bits to change to make itself more intelligent?
There’s no easy way. If it knows the purpose of each of its parts, then it might be able to look at a part and come up with a new part that does the same thing better. Maybe it could look at the reasoning that went into designing itself and think something like, “What they thought here was adequate, but the system would work better if they had known this fact.” Then it could change the design, and so change itself.
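As a cartoon of that "look at a part, try a better part" process, here is a sketch in which a program's parts are swappable functions, candidate replacements are scored on a made-up benchmark, and a candidate is kept only if it does the same job better. The component names, candidates, and benchmark are all hypothetical; a real self-improving system would need vastly more than this.

```python
# Cartoon of "examine a part, swap in a better part" self-modification.
# Every component, candidate, and benchmark here is invented for illustration.

from typing import Callable

def benchmark(component: Callable) -> float:
    """Made-up score: reward a component that sorts its input correctly."""
    test = [5, 3, 1, 4, 2]
    return 1.0 if component(list(test)) == sorted(test) else 0.0

def improve(parts: dict, candidates: dict) -> dict:
    """For each part whose purpose is known, try candidate replacements
    and keep whichever one scores best on the benchmark."""
    new_parts = dict(parts)
    for name, current in parts.items():
        best, best_score = current, benchmark(current)
        for candidate in candidates.get(name, []):
            score = benchmark(candidate)
            if score > best_score:
                best, best_score = candidate, score
        new_parts[name] = best
    return new_parts

# Example: a do-nothing "sorter" part gets replaced by a working candidate.
parts = {"sorter": lambda xs: xs}
candidates = {"sorter": [lambda xs: sorted(xs)]}
parts = improve(parts, candidates)
print(parts["sorter"]([3, 1, 2]))  # [1, 2, 3]
```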