Who a decade ago thought that AI would think symbolically? I’m struggling to think of anyone. There was, however, a debate on LW around “cleanly designed” versus “heuristics based” AIs: which might come first, and which safety efforts should focus on. (This was my contribution to it.)
If someone had followed that discussion, there would have been no need for dramatic updates or admissions of wrongness, just a (more or less) smooth shifting of one’s credences as subsequent observations came in, perhaps becoming increasingly pessimistic if one’s hopes for AI safety rested mainly on actual AIs being “cleanly designed” (as Eliezer’s did). (I guess I’m a bit peeved that you single out an example of a “dramatic update” for praise, while not mentioning people who had appropriate uncertainty all along and updated constantly.)