I’ll take super-usefulness over superintelligence any day.
Of course. But super-usefulness unfortunately requires superintelligence, and superintelligence is super-dangerous. Limited intelligence gives only limited usefulness, and in the long run even limited intelligence would tend to improve its capability, so it’s not reliably safe. And not very useful.
I know you want to build superintelligence because otherwise someone else will,
Someone will eventually make an intelligence explosion that destroys the world. That would be bad. Any better ideas on how to mitigate the problem?
but the same reasoning was used to justify nuclear weapons
This is an analogy that you use as an argument? As if we don’t already understand the details of the situation a few levels deeper than is covered by the surface similarity here. In making this argument, you appeal to intuition, but individual intuitions (even ones that turn out to be correct in retrospect or on reflection) are unreliable, and we should do better than that, find ways of making explicit reasoning trustworthy.
Of course. But super-usefulness unfortunately requires superintelligence, and superintelligence is super-dangerous. Limited intelligence gives only limited usefulness, and in the long run even limited intelligence would tend to improve its capability, so it’s not reliably safe. And not very useful.
Is this not exactly the point that cousin_it is questioning in the OP? I’d think a “limited” intelligence that was capable of solving the Riemann Hypothesis might also be capable of cracking some protein-folding problems or whatever.
If it’s that capable, it’s probably also that dangerous. But at this point the only way to figure out more about how it actually is, is to consider specific object-level questions about a proposed design. Absent a design, all we can do is vaguely guess.
If it’s that capable, it’s probably also that dangerous.
No. We already have computers that help design better airplanes etc., and they are not dangerous at all. Sewing-Machine’s question is right on.
Building machines that help us solve intelligence-bound problems (even if these problems are related to the real world, like building better airplanes) seems to be massively easier than building machines that will “understand” the existence of the real world and try to take it over for whatever reason. Evidence: we have had much success with the former task, but practically no progress on the latter. Moreover, the latter task looks very dangerous, kinda like nuclear weaponry.
Why do some people become so enamored with the singleton scenario that they can’t settle for anything less? What’s wrong with humans using “smart enough” machines to solve world hunger and such, working out any ethical issues along the way, instead of delegating the whole task to one big AI? If you think you need the singleton to protect you from some danger, what can be more dangerous than a singleton?
Why do some people become so enamored with the singleton scenario that they can’t settle for anything less? What’s wrong with humans using “smart enough” machines to solve world hunger and such, working out any ethical issues along the way, instead of delegating the whole task to one big AI?
It’s potentially dangerous, given the uncertainty about what exactly you are talking about. If it’s not dangerous, go for it.
Settling for something less than a singleton won’t solve the problem of human-indifferent intelligence explosion.
If you think you need the singleton to protect you from some danger, what can be more dangerous than a singleton?
Another singleton, which is part of the danger in question.
There are already computer programs that have solved open problems, e.g. That was a much simpler and less interesting question than the Riemann Hypothesis, but I don’t know that it’s fundamentally different or less dangerous than what cousin_it is proposing.
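For concreteness, here is a minimal sketch of the kind of non-agentic problem-solving being pointed at (a toy stand-in, not the elided example above): a brute-force search that settles the small Ramsey fact R(3,3) = 6. Nothing in it models a world outside its search space or does any open-ended planning.

```python
# Toy illustration: a program mechanically settling a small combinatorial fact,
# namely that the Ramsey number R(3,3) is 6. It checks that some 2-coloring of
# the edges of K_5 has no monochromatic triangle, while every 2-coloring of the
# edges of K_6 contains one.
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j) with i < j to color 0 or 1."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def exists_triangle_free_coloring(n):
    edges = list(combinations(range(n), 2))
    return any(
        not has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product((0, 1), repeat=len(edges))
    )

print(exists_triangle_free_coloring(5))  # True:  K_5 can avoid a monochromatic triangle
print(exists_triangle_free_coloring(6))  # False: K_6 cannot, hence R(3,3) = 6
```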
Yes, there are non-dangerous useful things, but we were presumably talking about AI capable of open-ended planning.