The Riemann hypothesis seems like a special case, since it’s a purely mathematical proposition. A real world problem is more likely to require Eliezer’s brand of FAI.
Also, I believe solving FAI requires solving a problem not on your list, namely that of solving GAI. :-)
If you disagree that (a) looks easier than (b), congratulations, you’ve been successfully brainwashed by Eliezer :-)
This was supposed to be humour, right?
OK, that didn’t come across as intended. Edited the post.
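For reference, the “purely mathematical proposition” mentioned above can be stated precisely. Writing \zeta for the analytic continuation of \sum_{n \ge 1} n^{-s}, the Riemann hypothesis is the claim that

\[ \zeta(s) = 0 \;\wedge\; 0 < \operatorname{Re}(s) < 1 \;\Longrightarrow\; \operatorname{Re}(s) = \tfrac{1}{2}, \]

i.e. every non-trivial zero of the zeta function lies on the critical line. Whether a purported proof succeeds can be checked entirely within mathematics, which is what makes the problem a special case compared with goals stated in terms of the outside world.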
A real world problem is more likely to require Eliezer’s brand of FAI.
It seems to me that human engineers don’t spend a lot of time thinking about the value of boredom or the problem of consciousness when they design airplanes. Why should an AI need to do that? If the answer involves “optimizing too hard”, then doesn’t the injunction “don’t optimize too hard” look easier to formalize than CEV?
If the answer involves “optimizing too hard”, then doesn’t the injunction “don’t optimize too hard” look easier to formalize than CEV?
“Don’t optimise for too long” looks easier to formalise. Or so I argued here.
Injecting randomness doesn’t look like a property of reasoning that would stand (or, alternatively, support) self-modification. This leaves the option of limiting self-modification (for the same reason), although given enough time and sanity even a system with low optimization pressure could find a reliable path to improvement.
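Neither “don’t optimize too hard” nor “don’t optimise for too long” is actually formalised anywhere in this thread. As a purely illustrative sketch (all names here are hypothetical, not anything proposed by the commenters), one way to cash the slogans out is a satisficing search with an explicit cap on effort:

```python
import random

def satisficing_search(candidates, utility, good_enough, max_evaluations):
    """Return an acceptable option rather than the best one.

    Two crude "don't optimize too hard" knobs:
      good_enough     -- stop as soon as any candidate clears this utility bar
      max_evaluations -- hard cap on search effort ("don't optimise for too long")
    """
    best, best_score = None, float("-inf")
    for step, option in enumerate(candidates):
        if step >= max_evaluations:
            break  # effort cap reached: stop optimizing
        score = utility(option)
        if score > best_score:
            best, best_score = option, score
        if best_score >= good_enough:
            break  # satisficing threshold reached: good enough, stop early
    return best

# Toy usage: settle for a "good enough" x near the maximizer of -(x - 7)**2.
options = (random.uniform(0, 10) for _ in range(100_000))
answer = satisficing_search(options, lambda x: -(x - 7) ** 2,
                            good_enough=-0.01, max_evaluations=5_000)
```

The sketch only shows that the two limits are easy to write down; it says nothing about whether they remain in force under self-modification, which is the worry raised in the “injecting randomness” reply above.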
Superintelligence isn’t a goal in itself. I’ll take super-usefulness over superintelligence any day. I know you want to build superintelligence because otherwise someone else will, but the same reasoning was used to justify nuclear weapons, so I suspect we should be looking for other ways to save the world.
(I see you’ve edited your comment. My reply still applies, I think.)
I know you want to build superintelligence because otherwise someone else will, but the same reasoning was used to justify nuclear weapons, so I suspect we should be looking for other ways to save the world.
Are you arguing that the USA should not have developed nuclear weapons?
Use of nuclear weapons is often credited with shortening the war—and saving many lives—e.g. see here:
Supporters of Truman’s decision to use the bomb argue that it saved hundreds of thousands of lives that would have been lost in an invasion of mainland Japan. In 1954, Eleanor Roosevelt said that Truman had “made the only decision he could,” and that the bomb’s use was necessary “to avoid tremendous sacrifice of American lives.” Others have argued that the use of nuclear weapons was unnecessary and inherently immoral. Truman himself wrote later in life that, “I knew what I was doing when I stopped the war … I have no regrets and, under the same circumstances, I would do it again.”
Well, that was what in fact happened. But what could have happened was perhaps a nuclear war leading to “significant curtailment of humankind’s potential”.
cousin_it’s point was that perhaps we should not even begin the arms race.
Consider the Terminator scenario where they send the terminator back in time to fix things, but sending the terminator back is precisely what provides the past with the technology that eventually leads to the cataclysm in the first place.
EDIT: included Terminator scenario
I’ll take super-usefulness over superintelligence any day.
Of course. But super-usefulness unfortunately requires superintelligence, and superintelligence is super-dangerous. Limited intelligence gives only limited usefulness, and in the long run even limited intelligence would tend to improve its capability, so it’s not reliably safe. And not very useful.
I know you want to build superintelligence because otherwise someone else will,
Someone will eventually make an intelligence explosion that destroys the world. That would be bad. Any better ideas on how to mitigate the problem?
but the same reasoning was used to justify nuclear weapons
This is an analogy that you use as an argument? As if we don’t already understand the details of the situation a few levels deeper than is covered by the surface similarity here. In making this argument, you appeal to intuition, but individual intuitions (even ones that turn out to be correct in retrospect or on reflection) are unreliable, and we should do better than that and find ways of making explicit reasoning trustworthy.
Of course. But super-usefulness unfortunately requires superintelligence, and superintelligence is super-dangerous. Limited intelligence gives only limited usefulness, and in the long run even limited intelligence would tend to improve its capability, so it’s not reliably safe. And not very useful.
Is this not exactly the point that cousin_it is questioning in the OP? I’d think a “limited” intelligence that was capable of solving the Riemann hypothesis might also be capable of cracking some protein-folding problems or whatever.
If it’s that capable, it’s probably also that dangerous. But at this point the only way to learn how capable and dangerous it actually is would be to consider specific object-level questions about a proposed design. Absent a design, all we can do is vaguely guess.
If it’s that capable, it’s probably also that dangerous.
No. We already have computers that help design better airplanes etc., and they are not dangerous at all. Sewing-Machine’s question is right on.
Building machines that help us solve intelligence-bound problems (even if these problems are related to the real world, like building better airplanes) seems to be massively easier than building machines that will “understand” the existence of the real world and try to take it over for whatever reason. Evidence: we have had much success with the former task, but practically no progress on the latter. Moreover, the latter task looks very dangerous, kinda like nuclear weaponry.
Why do some people become so enamored with the singleton scenario that they can’t settle for anything less? What’s wrong with humans using “smart enough” machines to solve world hunger and such, working out any ethical issues along the way, instead of delegating the whole task to one big AI? If you think you need the singleton to protect you from some danger, what can be more dangerous than a singleton?
Why do some people become so enamored with the singleton scenario that they can’t settle for anything less? What’s wrong with humans using “smart enough” machines to solve world hunger and such, working out any ethical issues along the way, instead of delegating the whole task to one big AI?
It’s potentially dangerous, given the uncertainty about what exactly you are talking about. If it’s not dangerous, go for it.
Settling for something less than a singleton won’t solve the problem of human-indifferent intelligence explosion.
If you think you need the singleton to protect you from some danger, what can be more dangerous than a singleton?
Another singleton, which is part of the danger in question.
There are already computer programs that have solved open problems (e.g. …). That was a much simpler and less interesting question than the Riemann Hypothesis, but I don’t know that it’s fundamentally different or less dangerous than what cousin_it is proposing.
Yes, there are non-dangerous useful things, but we were presumably talking about AI capable of open-ended planning.