Well, for example, Eliezer could try to actually invent something technical, most likely fail (most people aren’t very good at inventing), and then cut down his confidence in his predictions about AI (and especially in his intuitions, because the dangerous AI is supposed to be an incredibly clever inventor of improvements to itself, and you’d better be a good inventor yourself, or your intuitions from internal self-observation aren’t worth much). On a more meta level, they could sit and think: how do we make sure we aren’t mistaken about AI? Where could our intuitions be coming from? Are we doing something useful, or have we created a system of irreducible abstractions? And so on. This should have been done well before Holden’s post.
edit: i.e. essentially, SI is doing a lot of symbol-manipulation-type activity to try to think about AI. Those symbols may represent irreducible, flawed concepts, in which case manipulating them won’t be of any use.