Yes, this did cause me to take him more seriously than before.
Note also that the meta level (rationality studies) is the only area where the people behind SIAI have any somewhat notable experience. It is a very bad sign that they get beaten on the meta level by someone whom I had previously evaluated as a dramatically overoptimistic (in terms of AI’s abilities) AI developer.
That doesn’t seem to help much in practice though. See this article where Ben describes his experiences running an AGI company with more than 100 employees during the dot-com era. At the end, he thought he was close to success, if not for the dot-com bubble bursting. (I assume you agree that it’s unrealistic to think he could have been close to building a human-level AGI in 2001, given that we still seem pretty far from such an invention in 2012.)
That is evidence that Ben’s understanding is still not good enough, and it is only evidence that SIAI’s understanding is dramatically further from good enough.
I’m almost certain that Eliezer and other researchers at SIAI know computational complexity theory.
‘Almost certain’ is an interesting thing here. With every other AI researcher who has made something usable (e.g. Ben’s bio-assay analysis), you can be far more certain. There are a lot of people in the world to pick from, and there will be a few for whom your ‘almost certain’ fails. If you are discussing one person, and the choice of that person is not independent of the failure of ‘almost certain’ (it is not independent if you pick by the person’s opinions), then you can easily overestimate.
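A toy calculation makes the selection effect concrete (all numbers below are my own assumptions for illustration, not measurements of anyone): if ‘almost certain’ corresponds to, say, a 95% base rate over randomly chosen researchers, but the one person you are evaluating was singled out precisely because of opinions that are more common among those for whom it fails, Bayes’ rule pulls that 95% down substantially.

```python
# Toy illustration (assumed numbers): base rate at which "almost certain"
# holds for a randomly chosen AI researcher, versus the rate among people
# who were selected *because* of an unusual opinion that correlates with
# the failure of "almost certain".
BASE_RATE = 0.95            # P(knows the material) for a random researcher
P_OPINION_IF_KNOWS = 0.05   # P(holds the selecting opinion | knows)
P_OPINION_IF_NOT = 0.60     # P(holds the selecting opinion | doesn't know)

def posterior_knows_given_opinion():
    """Bayes' rule: P(knows | holds the opinion you selected on)."""
    p_opinion = (BASE_RATE * P_OPINION_IF_KNOWS
                 + (1 - BASE_RATE) * P_OPINION_IF_NOT)
    return BASE_RATE * P_OPINION_IF_KNOWS / p_opinion

if __name__ == "__main__":
    print(f"Random researcher:   P(knows) = {BASE_RATE:.2f}")
    print(f"Selected by opinion: P(knows) = {posterior_knows_given_opinion():.2f}")
```

With these made-up numbers the posterior drops from 0.95 to roughly 0.61, which is the sense in which picking the person by their opinions can make ‘almost certain’ an overestimate.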
Based on what you’ve written, I don’t see a reason to think Ben’s intuitions are much better than SI’s.
I think they are much further along toward being better, in the sense that no one at SI could probably get there without spending a decade or two studying, though they are still ultimately far short of being any good. In any case, keep in mind that Ben’s intuitions are about Ben’s project and come from working on it; there is good reason to think that if his intuitions are substantially bad, he won’t build any AI at all. What are SI’s intuitions about? Handwaving about unbounded idealized models (‘utility maximizer’ taken far too literally, I suspect again because if you don’t understand algorithmic complexity you don’t understand how little relation there can be between an idealized model and practice). Misunderstanding of how Solomonoff induction works (or what it even is). And so on.
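To make the idealized-model point concrete, here is a minimal sketch (entirely my own illustration, with an arbitrary toy utility function, not anything SI or Ben has actually specified): a ‘utility maximizer’ read literally has to enumerate every action sequence, so the search space grows exponentially with the planning horizon, and the idealized object tells you almost nothing about what a practically bounded system can do.

```python
from itertools import product

# Toy illustration (assumed setup): a literal expected-utility maximizer
# that enumerates every action sequence up to a horizon. The search space
# grows as |actions|**horizon, which is why the idealized picture says so
# little about practice.
ACTIONS = ("left", "right", "wait")

def toy_utility(plan):
    # Arbitrary stand-in utility function for the illustration.
    return sum(1 for a in plan if a == "right")

def brute_force_best_plan(horizon):
    """Literally enumerate all plans; only feasible for tiny horizons."""
    return max(product(ACTIONS, repeat=horizon), key=toy_utility)

if __name__ == "__main__":
    print("best 5-step plan:", brute_force_best_plan(5))
    for h in (5, 10, 20, 40):
        print(f"horizon {h:>2}: {len(ACTIONS) ** h:,} candidate plans")
```

At horizon 40 this toy agent already faces on the order of 10^19 candidate plans; that gap between the unbounded definition and anything runnable is what the complaint about taking ‘utility maximizer’ too literally is pointing at.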