Indeed. I feel the absence of good counter-arguments was a more useful indication than their eventual agreement.
How much evidence that you are right does the absence of counter-arguments actually constitute?
If you are sufficiently vague, say “smarter-than-human intelligence is conceivable and might pose a danger”, then it is reasonable to anticipate counter-arguments only from a handful of people like Roger Penrose.
If, however, you say that “1) it is likely that 2) we will create artificial general intelligence within this century, which is 3) likely to undergo explosive recursive self-improvement, i.e. become superhumanly intelligent, 4) in a short enough time-frame to be uncontrollable, 5) to take over the universe in order to pursue its goals, 6) ignoring, 7) and thereby destroying, all human values” and that “8) it is important to contribute money to save the world, 9) at this point in time, 10) by figuring out how to make such hypothetical AGIs provably friendly, and 11) that the Singularity Institute, or alternatively the Future of Humanity Institute, is the right organisation for this job”, then you can expect to hear counter-arguments.
If you weaken the odds of creating general intelligence to around 50-50, then virtually nobody has given decent counter-arguments to 1)-7). The disconnect starts at 8)-11).
How much evidence that you are right does the absence of counter-arguments actually constitute?
Quite strong evidence, at least for my position (which has somewhat wider error bars than SIAI’s). Most people who have thought about this at length tend to agree with me, and most arguments presented against it are laughably weak (hell, the best arguments against Whole Brain Emulations were presented by Anders Sandberg, an advocate of WBE).
I find the arguments in favour of the risk thesis compelling, and when they have the time to go through it, so do most other people with relevant expertise (I feel I should add, in the interest of fairness, that neuroscientists seemed to put much lower probabilities on AGI ever happening in the first place).
Of course the field is a bit odd, doesn’t have a wide breadth of researchers, and there’s a definite déformation professionnelle. But that’s not enough to change my risk assessment anywhere near to “not risky enough to bother about”.
Of course the field is a bit odd, doesn’t have a wide breadth of researchers, and there’s a definite déformation professionnelle. But that’s not enough to change my risk assessment anywhere near to “not risky enough to bother about”.
“risky enough to bother about” could be interpreted as:
(in ascending order of importance)
Someone should actively think about the issue in their spare time.
It wouldn’t be a waste of money if someone was paid to think about the issue.
It would be good to have a periodic conference to evaluate the issue and reassess the risk every 10 years.
There should be a study group whose sole purpose is to think about the issue.
All relevant researchers should be made aware of the issue.
Relevant researchers should be actively cautious and think about the issue.
There should be an academic task force that actively tries to tackle the issue.
Money should actively be raised to finance an academic task force to solve the issue.
The general public should be made aware of the issue to gain public support.
The issue is of utmost importance. Everyone should consider contributing money to a group trying to solve the issue.
Relevant researchers who continue to work in their field, irrespective of any warnings, are actively endangering humanity.
This is crunch time. This is crunch time for the entire human species. And it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us. Everyone should contribute all but their minimal living expenses in support of the issue.
I find the arguments in favour of the risk thesis compelling, and when they have the time to go through it, so do most other people with relevant expertise...
Could you elaborate on the “relevant expertise” that is necessary to agree with you?
Further, why do you think everyone I asked about the issue either disagrees or continues to ignore it and work on AI? Even those who are likely aware of all the relevant arguments. And which arguments do you think the others are missing that would likely make them change their minds about the issue?
Further, why do you think everyone I asked about the issue either disagrees or continues to ignore it and work on AI?
Because people always do this with large-scale existential risks, especially ones that sound fringe. Why were there so few papers published on nuclear winter? What proportion of money was set aside for tracking near-Earth objects as opposed to, say, extra police to handle murder investigations? Why is the World Health Organisation’s budget 0.006% of world GDP (with the CDC only twice as large)? Why are safety requirements playing catch-up with the dramatic progress in synthetic biology?
As a species, we suck at prevention, and we suck especially at preventing things that have never happened before, and we suck especially especially at preventing things that don’t come from a clear enemy.
Further, why do you think everyone I asked about the issue either disagrees or continues to ignore it and work on AI?
Because people always do this with large-scale existential risks, especially ones that sound fringe. Why were there so few papers published on nuclear winter? What proportion of money was set aside for tracking near-Earth objects as opposed to, say, extra police to handle murder investigations? Why is the World Health Organisation’s budget 0.006% of world GDP (with the CDC only twice as large)? Why are safety requirements playing catch-up with the dramatic progress in synthetic biology?
I have my doubts that, if I had written to the relevant researchers about nuclear winter, they would have told me that it was a fringe issue. Probably a lot would have told me that they couldn’t write about it in the midst of the Cold War.
I also have my doubts that biologists would tell me they think the issue of risks from synthetic biology is just bonkers, although quite a few would probably tell me that the risks are exaggerated.
Regarding the murder vs. asteroid funding: I am not sure that it was very irrational, in retrospect, to put off asteroid funding until now. The additional resources it would have taken to scan for asteroids a few decades ago, compared with what it costs now, might outweigh the risk run during the few decades in which nobody looked for asteroids on a collision course with Earth. But I don’t have any data to back this up.
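To frame that trade-off schematically (the symbols below are just my own shorthand, not actual figures), putting off the search was defensible roughly if

$$ C_{\text{then}} - C_{\text{now}} \;>\; p_{\text{gap}} \cdot L, $$

where $C_{\text{then}}$ and $C_{\text{now}}$ are the costs of a comprehensive asteroid survey a few decades ago and today, $p_{\text{gap}}$ is the probability that a dangerous impactor would have arrived during the intervening gap, and $L$ is the expected loss from failing to spot it in time. If the left-hand side really was larger, waiting was not obviously irrational; but as I said, I have no data either way.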
Oh yes, and I forgot one common answer, which generally means I need pay no more attention to their arguments, and can shift into pure convincing mode: “Since the risks are uncertain, we don’t need to worry.”