It would make sense in the context of a strong bias toward a specific outcome, e.g. religious indignation toward an idea.
A person believing that thinking machines are an abomination would tell you to stop assessing and forget the whole idea.
A person believing that AI is the only thing that could possibly rescue us from imminent catastrophe might well tell you to stop analyzing the risks and get on with building the AI before it’s too late.
Either position amounts to a substantive claim that you don't need to balance the risks and opportunities any further, without asserting that there is any error in your assessment.
Yet building an AI that eventually destroys all mankind, even after it averts this particular looming catastrophe, could easily be the worse choice. Does averting the catastrophe we need the AI for outweigh the potential dangers of a poorly built AI?
It must still be considered. You may not have time to weigh it thoroughly (time itself is now a factor), and that constraint must be part of your assessment, but you still have to weigh the new risks against the potential reward.
Same with the abomination. Upon what basis is it an abomination? What are the consequences if we create the abomination? Do we spend a few extra years in purgatory, or do we burn in hell for all eternity?
It still must be considered. A few years in purgatory, in exchange for a creation that saves mankind from the invading squid monsters, may well be a price worth paying.
Consider the atomic bomb before the first live test. There were real concerns that splitting the atom could trigger an unstoppable chain reaction that would set the very atmosphere on fire, destroying the whole world in a single moment. I can't imagine a scenario more dire, or one that argues more strongly for ceasing all argument.
Yet they did the math anyway, weighed the risk (a tiny chance of destroying the world) against the reward (ending a war that was otherwise certain to kill millions more people), and decided it was worth continuing.
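To make that weighing concrete, here is a minimal expected-value sketch. Every number in it is a placeholder I've invented for illustration; none are the Manhattan Project's actual estimates.

```python
# A crude expected-deaths comparison with entirely hypothetical numbers.
p_ignite = 1e-6           # assumed chance the test ignites the atmosphere
world_pop = 2.5e9         # rough world population in 1945
war_deaths_averted = 2e6  # assumed further deaths if the war drags on

# Expected deaths attributable to each choice, under this toy model.
expected_deaths_if_proceed = p_ignite * world_pop  # 2,500
expected_deaths_if_halt = war_deaths_averted       # 2,000,000

print(expected_deaths_if_proceed < expected_deaths_if_halt)  # True: proceed
```

On this toy model proceeding wins by orders of magnitude, though treating extinction as nothing more than a body count is itself a contested modeling choice; the point is only that the comparison can be made rather than waved away.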
I still see no rational case for ever halting argument, except when the time for assessment simply runs out (if you don't act before X, the world blows up; obviously you must finish your assessment before X, or it was all pointless). You may weigh the risks against the opportunities, decide the risks are too great, and choose not to continue. However, you cannot rationally cease all argument without consideration merely because an argument is particularly strong or dire. To do so is irrational.
Of course you can cease argument without further consideration if you deem the risks of continuing to deliberate to outweigh the benefits. For instance, if you have one minute to try something that would save your life, and any further proper assessment would take at least five minutes, you generally can't afford to weigh whether the idea might somehow make things worse, beyond whatever assessment you have already made. At that point, the time for assessment is over.
For the most part, however, I agree with your point. I did not argue that one can rationally disagree with the statement "We need to balance the risks and opportunities of AI"; only that one can sincerely say it, and even argue for it. This was a response to your saying that "no one would ever utter the phrase in the first place", which simply strikes me as false.
Never underestimate the power of human stupidity ;)
You’re right, in that regard I was certainly mistaken.
Upvoted for the “oops” moment.