Mostly, these don’t work very well. The current capabilities paradigm is state of the art because it gives the best results of anything we’ve tried so far, despite lots of effort to find better paradigms.
To be fair, though, neural nets also didn’t work until we had enough compute to make them really big. Some of these approaches might not work very well now, but maybe they will work better than the alternatives when applied at sufficient scale.
Yep, this is definitely one of his weakest points, and I’d like to see more discussion in a different post about how the optimism arguments generalize.
Partly that’s because I suspect at least some of the arguments do generalize, but I’d also want to rely less on the assumption that future AIs will be LLM-like.