I should ask this question now rather than later: Is there a concrete policy alternative being considered by you?
Every AGI researcher is unconvinced of that claim as it applies to their own work.
And on one obvious ‘outside view’, they’d be right—it’s a very strange and unusual situation, which took me years to acknowledge, that this one particular class of scientific research could have perverse results. There are many attempted good deeds which have no effect, but complete backfires make the news because they’re rare.
(Hey, maybe the priors in favor of good outcomes from the broad reference class of scientific research are so high that we should just ignore the inside view which says that AGI research will have a different result!)
And note that even AGI research doesn’t end up making it less likely that AGI will be developed—it’s not that perverse in its outcome.
Is there a concrete policy alternative being considered by you?
I’m currently in favor of the following:
research on strategies for navigating intelligence explosion (what I called “Singularity Strategies”)
pushing for human intelligence enhancement
pushing for a government to try to take an insurmountable tech lead via large scale intelligence enhancement
research into a subset of FAI-related problems that do not shorten AI timelines (at least as far as we can tell), such as consciousness, normative ethics, metaethics, metaphilosophy
advocacy/PR/academic outreach on the dangers of AGI progress
There are many attempted good deeds which have no effect, but complete backfires make the news because they’re rare.
What about continuing physics research possibly leading to a physics disaster or new superweapons, biotech research leading to biotech disasters, nanotech research leading to nanotech disasters, WBE research leading to value drift and Malthusian outcomes, computing hardware research leading to deliberate or accidental creation of massive simulated suffering (aside from UFAI)? In addition, I thought you believed that faster economic growth made a good outcome less likely, which would imply that most scientific research is bad?
And note that even AGI research doesn’t end up making it less likely that AGI will be developed—it’s not that perverse in its outcome.
Many AGI researchers seem to think that their research will result in a benevolent AGI, and I’m assuming you agree that their research does make it less likely that such an AGI will be eventually developed.
It seems odd to insist that someone explicitly working on benevolence should consider themselves to be in the same reference class as someone who thinks they just need to take care of the AGI and the benevolence will pretty much take care of itself.
I wasn’t intending to use “AGI researchers” as a reference class to show that Eliezer’s work is likely to have net negative consequences. Rather, I meant to show that people whose work can reasonably be expected to have net negative consequences (of whom AGI researchers are a prominent class) still tend not to believe such claims, and that Eliezer’s failure to be convinced is therefore not of much evidential value to others.
The reference class I usually do have in mind when I think of Eliezer is philosophers who think they have the right answer to some philosophical problem (virtually all of whom end up being wrong or at least incomplete even if they are headed in the right direction).
ETA: I’ve written a post that expands on this comment.