If it’s that capable, it’s probably also that dangerous. But at this point, the only way to figure out how dangerous it actually is would be to consider specific object-level questions about a proposed design. Absent a concrete design, all we can do is guess vaguely.
If it’s that capable, it’s probably also that dangerous.
No. We already have computers that help design better airplanes etc., and they are not dangerous at all. Sewing-Machine’s question is right on.
Building machines that help us solve intelligence-bound problems (even if these problems are related to the real world, like building better airplanes) seems to be massively easier than building machines that will “understand” the existence of the real world and try to take it over for whatever reason. Evidence: we have had much success with the former task, but practically no progress on the latter. Moreover, the latter task looks very dangerous, kinda like nuclear weaponry.
Why do some people become so enamored with the singleton scenario that they can’t settle for anything less? What’s wrong with humans using “smart enough” machines to solve world hunger and such, working out any ethical issues along the way, instead of delegating the whole task to one big AI? If you think you need the singleton to protect you from some danger, what can be more dangerous than a singleton?
Why do some people become so enamored with the singleton scenario that they can’t settle for anything less? What’s wrong with humans using “smart enough” machines to solve world hunger and such, working out any ethical issues along the way, instead of delegating the whole task to one big AI?
It’s potentially dangerous, given the uncertainty about what exactly you are talking about. If it’s not dangerous, go for it.
Settling for something less than a singleton won’t solve the problem of human-indifferent intelligence explosion.
If you think you need the singleton to protect you from some danger, what can be more dangerous than a singleton?
Another singleton, which is part of the danger in question.
There are already computer programs that have solved open problems in mathematics. Those were much simpler and less interesting questions than the Riemann Hypothesis, but I don’t know that they are fundamentally different, or less dangerous, than what cousin_it is proposing.
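For a toy sense of what machine-assisted number theory looks like, here is a sketch in Python using the mpmath package (assumed installed; mpmath is a real arbitrary-precision library, and zetazero is its actual function for nontrivial zeta zeros). This merely spot-checks known zeros numerically, which is nowhere near the kind of theorem proving that settled those open problems:

```python
# Toy numerical sanity check, not a proof: verify that the first few
# nontrivial zeros of the Riemann zeta function lie on the critical
# line Re(s) = 1/2.
from mpmath import mp, zetazero

mp.dps = 30  # work with 30 decimal digits of precision
for n in range(1, 6):
    s = zetazero(n)           # nth zero of zeta in the upper half-plane
    print(n, s.real, s.imag)  # real part should print as 0.5
```

Nothing in a tool like this plans or acts in the world; it only grinds arithmetic, which is part of why such programs seem safe in the sense discussed above.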
Yes, there are non-dangerous useful things, but we were presumably talking about AI capable of open-ended planning.