Schmidhuber paper

Brief overview of Gödel machines; sort of a rebuke of other authors for ignoring the optimality results for them and for AIXI, etc.
Simultaneously, our non-universal but still rather general fast deep/recurrent neural networks have already started to outperform traditional pre-programmed methods: they recently collected a string of 1st ranks in many important visual pattern recognition benchmarks, e.g. Graves & Schmidhuber (2009); Ciresan et al. (2011): IJCNN traffic sign competition, NORB, CIFAR10, MNIST, three ICDAR handwriting competitions. Here we greatly profit from recent advances in computing hardware, using GPUs (mini-supercomputers normally used for video games) 100 times faster than today’s CPU cores, and a million times faster than PCs of 20 years ago, complementing the recent above-mentioned progress in the theory of mathematically optimal universal problem solvers.
On falsified predictions of AI progress:
I feel that after 10,000 years of civilization there is no need to justify pessimism through comparatively recent over-optimistic and self-serving predictions (1960s: ‘only 10 instead of 100 years needed to build AIs’) by a few early AI enthusiasts in search of funding.
Pessimism:
All attempts at making sure there will be only provably friendly AIs seem doomed though. Once somebody posts the recipe for practically feasible self-improving Gödel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. The survivors will define in hindsight what’s ‘moral’, since only survivors promote their values...
The Hard Problem dissolved?
But at least we have pretty good ideas where the symbols and self-symbols underlying consciousness and sentience come from (Schmidhuber, 2009a; 2010). They may be viewed as simple by-products of data compression and problem solving. As we interact with the world to achieve goals, we are constructing internal models of the world, predicting and thus partially compressing the data histories we are observing. If the predictor/compressor is an artificial recurrent neural network (RNN) (Werbos, 1988; Williams & Zipser, 1994; Schmidhuber, 1992; Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2009), it will create feature hierarchies, lower level neurons corresponding to simple feature detectors similar to those found in human brains, higher layer neurons typically corresponding to more abstract features, but fine-grained where necessary. Like any good compressor the RNN will learn to identify shared regularities among different already existing internal data structures, and generate prototype encodings (across neuron populations) or symbols for frequently occurring observation sub-sequences, to shrink the storage space needed for the whole. Self-symbols may be viewed as a by-product of this, since there is one thing that is involved in all actions and sensory inputs of the agent, namely, the agent itself. To efficiently encode the entire data history, it will profit from creating some sort of internal prototype symbol or code (e.g. a neural activity pattern) representing itself (Schmidhuber, 2009a; 2010). Whenever this representation becomes activated above a certain threshold, say, by activating the corresponding neurons through new incoming sensory inputs or an internal ‘search light’ or otherwise, the agent could be called self-aware. No need to see this as a mysterious process — it is just a natural by-product of partially compressing the observation history by efficiently encoding frequent observations.
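The "prediction equals compression" link this passage leans on can be made concrete with a small sketch. The code below is purely illustrative and is not Schmidhuber's construction: a trivial adaptive first-order context model stands in for the RNN predictor/compressor, and the function name and toy strings are invented for the example. It measures the ideal code length, in bits, that an arithmetic coder driven by the predictor would need for an observation history; histories dominated by frequently recurring sub-sequences become cheap to encode once the model learns to predict them.

```python
# Minimal sketch of "prediction as compression"; an adaptive first-order
# context model stands in here for the RNN predictor/compressor described
# in the quoted passage (purely an illustrative simplification).

from collections import defaultdict
from math import log2

def code_length_bits(history: str) -> float:
    """Ideal arithmetic-coding cost of `history` under an adaptive,
    Laplace-smoothed model of p(next symbol | previous symbol)."""
    alphabet = set(history)
    counts = defaultdict(lambda: defaultdict(int))  # counts[prev][next]
    prev = None
    total_bits = 0.0
    for symbol in history:
        context = counts[prev]
        denominator = sum(context.values()) + len(alphabet)  # Laplace smoothing
        p = (context[symbol] + 1) / denominator
        total_bits += -log2(p)       # ideal code length of this symbol
        context[symbol] += 1         # update the predictor online
        prev = symbol
    return total_bits

if __name__ == "__main__":
    regular = "abc" * 10                              # frequently recurring sub-sequence
    irregular = "bacbcacbabccabacbbcacabbcabcca"      # same symbols, little structure
    print(f"regular  : {code_length_bits(regular):5.1f} bits for {len(regular)} symbols")
    print(f"irregular: {code_length_bits(irregular):5.1f} bits for {len(irregular)} symbols")
```

The same bookkeeping applies when the predictor is an RNN, with the network's output distribution over the next input supplying p; the self-symbol argument in the quoted passage is then the claim that a compact internal code for the agent itself pays for itself across the entire history, since the agent figures in every observation and action.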
A Gödel machine, if one were to exist, surely wouldn’t do something so blatantly stupid as posting to the Internet a “recipe for practically feasible self-improving Gödel machines or AIs in form of code into which one can plug arbitrary utility functions”. Why can’t humanity aspire to this rather minimal standard of intelligence and rationality?
Similar theme from Hutter’s paper:

Will AIXI replicate itself or procreate?
Likely yes, if AIXI believes that clones or descendants are useful for its own goals.
If AIXI had the option of creating an AIXI (which by definition has the goal of maximizing its own rewards), or creating a different AI (non-AIXI) that had the goal of serving the goals of its creator instead, surely it would choose the latter option. If AIXI is the pinnacle of intelligence (as Hutter claims), and an AIXI wouldn’t build another AIXI, why should we? Because we’re just too dumb?
I like lines of inquiry like this one and would like it if they showed up more.

I’m not sure what you mean by “lines of inquiry like this one”. Can you explain?

I guess it’s not a natural kind; it just had a few things I like all jammed together compactly:
Decompartmentalizes knowledge between domains, in this case between AIXI AI programmers and human AI programmers.
Talks about creation qua creation rather than creation as some implicit kind of self-modification.
Uses common sense to carve up the question space naturally in a way that suggests lines of investigation.

An AIXI might create another AIXI if it could determine that the rewards would coincide sufficiently, and it couldn’t figure out how to get as good a result with another design (under real constraints).

I’m sure you can come up with several reasons for that.
That was meant to be rhetorical… I’m hoping that the hypothetical person who’s planning to publish the Gödel machine recipe might see my comment (ETA: or something like it, if such an attitude were to become common) and think “Hmm, a Gödel machine is supposed to be smart and it wouldn’t publish its own recipe. Maybe I should give this a second thought.”
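For concreteness, the exchange above can be read as an expected-reward comparison. The sketch below is not AIXI itself; the probabilities and payoffs are invented numbers, and the only point is that a reward-maximizing builder prefers whichever successor design has the higher subjective expected reward, so a clone wins only if the expected goal coincidence is high enough.

```python
# Toy expected-reward comparison (illustrative only; not AIXI).
# An agent choosing which successor to build picks the option with the
# higher subjective expected reward. All numbers are invented.

def expected_reward(p_goals_coincide: float,
                    reward_if_aligned: float,
                    reward_if_misaligned: float) -> float:
    """Expected reward of deploying a successor, under the builder's beliefs."""
    return (p_goals_coincide * reward_if_aligned
            + (1.0 - p_goals_coincide) * reward_if_misaligned)

# Option A: a clone with the same reward-maximizing definition.
# Option B: a different design built to serve the builder's goals.
clone = expected_reward(p_goals_coincide=0.6, reward_if_aligned=100.0,
                        reward_if_misaligned=-50.0)
servant = expected_reward(p_goals_coincide=0.9, reward_if_aligned=80.0,
                          reward_if_misaligned=-10.0)

print(f"clone: {clone:.1f}  servant: {servant:.1f}")
print("build the", "clone" if clone > servant else "servant")
```

With these made-up numbers the servant design wins, which is the position argued above; raise the clone's goal-coincidence probability or its aligned payoff enough and the comparison flips, which is the caveat in the reply.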
If someone in IT is behaving monopolistically, a possible defense by the rest of the world is to obtain and publish their source code, thus reducing the original owner’s power and levelling things a little. Such an act may not be irrational—if it is a form of self-defense.
Suppose someone has built a self-improving AI, and it’s the only one in existence (hence they have a “monopoly”). Then there might be two possibilities: either it’s Friendly, or not. In the former case, how would it be rational to publish the source code and thereby allow others to build UFAIs? In the latter case, a reasonable defense might be to forcibly shut down the UFAI if it’s not too late. What would publishing its source code accomplish?
Edit: Is the idea that the UFAI hasn’t taken over the world yet, but for some technical or political reason it can’t be shut down, and the source code is published because many UFAIs are for some reason better than a single UFAI?
I don’t think the FAI / UFAI distinction is particularly helpful in this case. That framework implies that friendliness is a property of the machine itself. Here we are talking about the widespread release of a machine with a programmable utility function. Its effects will depend on the nature and structure of the society into which it is released (and the utility functions that are used with it), rather than being solely attributes of the machine itself.
If you are dealing with a secretive monopolist, nobody on the outside is going to know what kind of machine they have built. The fact that they are a secretive monopolist doesn’t bode well, though. Failing to share is surely one of the most reliable ways to signal that you don’t have the interests of others at heart.
Industrial espionage or reverse engineering can’t shut organisations down—but it may be able to liberate their technology for the benefit of everyone.

So we estimate based on what we anticipate about the possible state of society.
The fact that they are a secretive monopolist doesn’t bode well, though.
If it’s expected that sharing AGI design results in everyone dying, not sharing it can’t signal bad intentions.
The expectations and intentions of secretive organisations are usually unknown. From outside, it will likely seem pretty clear that only a secretive elite having the technology is more likely to result in massive wealth and power inequalities than what would happen if everyone had access. Large wealth and power inequalities seem undesirable.
Secretive prospective monopolists might claim all kinds of nonsense in the hope of defending their interests. The rest of society can be expected to ignore such material.
All attempts at making sure there will be only provably friendly AIs seem doomed though. Once somebody posts the recipe for practically feasible self-improving Gödel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. The survivors will define in hindsight what’s ‘moral’, since only survivors promote their values...
That seems more likely than a secretive monopolistic agent keeping the technology for themselves from the beginning—and obliterating all potential rivals.
Keeping the technology of general-purpose inductive inference secret seems unlikely to happen in practice. It is going to go into embedded devices—from which it will inevitably be reverse engineered and made publicly accessible. Also, it’s likely to arise from a public collaborative development effort in the first place. I am inclined to doubt whether anyone can win while keeping their technology on a secure server—try to do that and you will just be overtaken—or rather, you will never be in the lead in the first place.
Not pessimism but realism is my assessment. You have to apply your efforts where they will actually make a difference.