A Gödel machine, if one were to exist, surely wouldn’t do something so blatantly stupid as posting to the Internet a “recipe for practically feasible self-improving Gödel machines or AIs in form of code into which one can plug arbitrary utility functions”. Why can’t humanity aspire to this rather minimal standard of intelligence and rationality?
Similar theme from Hutter’s paper:
Will AIXI replicate itself or procreate?
Likely yes, if AIXI believes that clones or descendants are useful for its own goals.
If AIXI had the option of creating an AIXI (which by definition has the goal of maximizing its own rewards), or creating a different AI (non-AIXI) that had the goal of serving the goals of its creator instead, surely it would choose the latter option. If AIXI is the pinnacle of intelligence (as Hutter claims), and an AIXI wouldn’t build another AIXI, why should we? Because we’re just too dumb?
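For readers who want the “by definition” part made explicit, this is the standard AIXI action-selection rule as it is usually stated (a paraphrase from memory of Hutter’s formulation, not a quotation from the paper): at cycle k the agent picks

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

where U is a universal Turing machine, the inner sum ranges over all environment programs q consistent with the interaction history, weighted by $2^{-\ell(q)}$ for program length $\ell(q)$, and the $r_i$ are the rewards appearing on AIXI’s own input tape. The quantity being maximized is AIXI’s own expected reward stream, which is why any AIXI it builds would, by the same definition, maximize its own rewards rather than its creator’s.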
I like lines of inquiry like this one and would like it if they showed up more.
I’m not sure what you mean by “lines of inquiry like this one”. Can you explain?
I guess it’s not a natural kind; it just had a few things I like all jammed together compactly:
Decompartmentalizes knowledge between domains, in this case between AIXI AI programmers and human AI programmers.
Talks about creation qua creation rather than creation as some implicit kind of self-modification.
Uses common sense to carve up the questionspace naturally in a way that suggests lines of investigation.
An AIXI might create another AIXI if it could determine that the rewards would coincide sufficiently, and it couldn’t figure out how to get as good a result with another design (under real constraints).
I’m sure you can come up with several reasons for that.
That was meant to be rhetorical… I’m hoping that the hypothetical person who’s planning to publish the Gödel machine recipe might see my comment (ETA: or something like it, if such an attitude were to become common) and think “Hmm, a Gödel machine is supposed to be smart and it wouldn’t publish its own recipe. Maybe I should give this a second thought.”
If someone in IT is behaving monopolistically, a possible defense by the rest of the world is to obtain and publish their source code, thus reducing the original owner’s power and levelling things a little. Such an act may not be irrational—if it is a form of self-defense.
Suppose someone has built a self-improving AI, and it’s the only one in existence (hence they have a “monopoly”). Then there are two possibilities: either it’s Friendly, or not. In the former case, how would it be rational to publish the source code and thereby allow others to build UFAIs? In the latter case, a reasonable defense might be to forcibly shut down the UFAI if it’s not too late. What would publishing its source code accomplish?
Edit: Is the idea that the UFAI hasn’t taken over the world yet, but for some technical or political reason it can’t be shut down, and the source code is published because many UFAIs are for some reason better than a single UFAI?
I don’t think the FAI / UFAI distinction is particularly helpful in this case. That framework implies that this is a property of the machine itself. Here we are talking about the widespread release of a machine with a programmable utility function. Its effects will depend on the nature and structure of the society into which it is released (and on the utility functions that are used with it), rather than being solely attributes of the machine itself.
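To make “a machine with a programmable utility function” concrete, here is a minimal, hypothetical sketch (not code from any of the systems under discussion; every name in it is made up): the optimizing machinery is a fixed piece of code, and what it actually pursues is determined entirely by whichever utility function gets plugged in.

```python
from typing import Callable, Iterable, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")


def choose_action(
    state: State,
    actions: Iterable[Action],
    predict: Callable[[State, Action], State],
    utility: Callable[[State], float],
) -> Action:
    """One-step planner: pick the action whose predicted outcome scores
    highest under whatever utility function the caller plugs in."""
    return max(actions, key=lambda action: utility(predict(state, action)))


# The planner is the same object whoever runs it; its effects depend on the
# utility function supplied (and on the society it is deployed into), e.g.:
#   choose_action(s, moves, world_model, utility=owner_profit)
#   choose_action(s, moves, world_model, utility=shared_welfare_measure)
```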
If you are dealing with a secretive monopolist, nobody on the outside is going to know what kind of machine they have built. The fact that they are a secretive monopolist doesn’t bode well, though. Failing to share is surely one of the most reliable ways to signal that you don’t have the interests of others at heart.
Industrial espionage or reverse engineering can’t shut organisations down—but it may be able to liberate their technology for the benefit of everyone.
So we estimate based on what we anticipate about the possible state of society.
If it’s expected that sharing AGI design results in everyone dying, not sharing it can’t signal bad intentions.
The expectations and intentions of secretive organisations are usually unknown. From outside, it will likely seem pretty clear that only a secretive elite having the technology is more likely to result in massive wealth and power inequalities than what would happen if everyone had access. Large wealth and power inequalities seem undesirable.
Secretive prospective monopolists might claim all kinds of nonsense in the hope of defending their interests. The rest of society can be expected to ignore such material.