Well, firstly it's good that the crowd is savvy, but it might still be wise to prepare for strawman/fleshman attacks as well as steelmanned ones.
These are some more plausible criticisms:
(1) Moore’s law seems to be slowing—this could be a speedbump before the next paradigm takes over, or it could be the start of stagnation, in which case the singularity is postponed. Of course, if humanity survives, the singularity will happen eventually anyway, but if it is hundreds of years in the future it would probably be wiser to focus in the short term on promoting rationality/genetic engineering/other methods of improving biological intelligence, along with cryonics, and to leave work on FAI to future generations.
(2) It could be argued that FAI, and perhaps de novo AGI as well, is simply so hard that we will never get it done in time. Eventually neuromorphic AI/WBE/brute-force evolutionary simulations will be developed (assuming exponential progress in those fields holds), and we would be better off preparing for that case, perhaps by developing empathic neuromorphic AI, or by developing a framework for uploads and humans to live without a Malthusian race to the bottom.
(3) MIRI’s budget and headcount are tiny compared to those of Google and other entities that could plausibly design AI. Many would therefore argue that MIRI cannot develop AI first, and should instead focus on outreach towards other, larger groups.
(4) Gwern seems to be going further, arguing that we should advocate for nations to suppress technology.
In all of these cases, some sort of rationality outreach would seem to be the alternative, so you could still spin that as a positive.
(1) Moore’s law seems to be slowing—this could be a speedbump before the next paradigm takes over, or it could be the start of stagnation, in which case the singularity is postponed.
The pithy one-liner comeback to this is that the human brain is an existence proof for a computer of the brain’s size with the brain’s performance, and it seems implausible that nature arrived at the optimal basic design for neurons on (basically) its first try.
An existence proof is very different from a constructive proof! Nature did not happen upon this design on the first try; the brain has evolved over billions of generations. Of course, intelligence can work faster than the blind idiot god, and humanity, if it survives long enough, will do better. The question is, will this take decades or centuries?
An existence proof is very different from a constructive proof!
Quite so. However, it does give reason to hope.
The question is, will this take decades or centuries?
If you look at Moore’s Law coming to a close in silicon around 2020, and at how far we still are from a human-brain-equivalent computer, it’s easy to get disheartened. I think it’s important to remember that it’s at least possible, and if nature could happen upon it…
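To make "decades or centuries" concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is an assumption for illustration: the ~1e16 operations/second brain figure is just one commonly cited ballpark (serious estimates span many orders of magnitude), the starting compute figure is arbitrary, and the doubling time is the classic Moore's-law value that this whole thread is questioning.

```python
import math

BRAIN_OPS_PER_SEC = 1e16    # assumption: one common ballpark for brain-equivalent compute
CURRENT_OPS_PER_SEC = 1e13  # assumption: compute plausibly available to a single project today
DOUBLING_TIME_YEARS = 2.0   # assumption: classic Moore's-law doubling period

# Number of doublings needed to close the gap, then convert to calendar years.
doublings = math.log2(BRAIN_OPS_PER_SEC / CURRENT_OPS_PER_SEC)
years = doublings * DOUBLING_TIME_YEARS

print(f"doublings needed: {doublings:.1f}")
print(f"years at a {DOUBLING_TIME_YEARS:.0f}-year doubling time: {years:.0f}")

# If doubling stalls (stagnation), the gap is never closed on this model;
# if a new paradigm shortens the doubling time, the estimate shrinks accordingly.
```

The point is only how sensitive the answer is to the doubling assumption: holding the gap fixed, doubling every two years gives roughly twenty years, while a tenfold slowdown in the doubling time turns the same gap into two centuries, which is precisely the disagreement above.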