Substitute energy use, synthetic chemistry, gene editing, nuclear weapons, aviation, or a zillion other things for AI in your statement and it's the same claim.
Part of the pattern match here has nothing to do with “making a machine smarter than yourself.”
Each of the above has a two-bit sound bite that makes it sound really bad.
But more directly, the issue is that none of these things ended up working as well as people feared. What if fusion energy were so easy to generate that we started heating the water at beaches? What if strength and intelligence gene edits were easy and common, or some super-race got created, as in dystopian sci-fi? What if fusion bombs didn't need a fission trigger and could be made with common materials? What if the skies held so many flying cars that your lawn was littered with debris?
So the easiest pattern match is to just say that AI won't work as well as people fear. If intelligence increases with the log of compute, AI might stop improving significantly at subhuman levels, at modestly superhuman levels, or, most realistically, some mix of both.
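To make the log-of-compute intuition concrete, here's a toy sketch. The `capability` function and its constants are made up purely for illustration; the point is only that under log scaling, each additional multiplicative jump in compute buys the same fixed additive gain, so returns flatten fast:

```python
import math

def capability(compute, a=1.0, b=0.0):
    # Toy model: capability grows with the log of compute.
    # a and b are illustrative constants, not measured values.
    return a * math.log10(compute) + b

# Going from 10^3 to 10^6 units of compute (a 1000x jump)
# buys the same additive gain as going from 10^9 to 10^12:
gain_early = capability(1e6) - capability(1e3)
gain_late = capability(1e12) - capability(1e9)
print(gain_early, gain_late)  # both 3.0 under this toy model
```

Under this (assumed) scaling, whether the curve flattens out at subhuman or modestly superhuman levels just depends on where the constants happen to put you.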
Also, for the specific reference class "AI," the prior history has been repeated disappointment from algorithms that appeared to work well in some cases.
Someone 88 years old who went to Dartmouth as an undergrad would have seen all of the disappointments, from the first hype onward, and would likely be the most skeptical that this time is different.
What evidence would be sufficient for a rational person to change their views and believe AI was more likely than not to be a threat this time? What would the "Trinity test" be for AI?
Arguments that sound convincing are not good evidence. How can a person distinguish between Bostrom and a religion, or fear mongering about the other classes of technology above? {Zvi, Bostrom, religion, luddism} all make credible-sounding arguments. At least two of those categories are just emitting evidence-free bullshit.
You need empirical measurements. Doesn't Zvi himself say that a claim made without evidence can be dismissed without evidence?
https://en.m.wikipedia.org/wiki/Dartmouth_workshop