This seems like an unusual misreading of Eliezer’s post, which is quite explicitly about the potential bounds of future systems’ performance, and not about the performance of the current system. There is no implication that the current system is superhuman (or even average-human) in the dimensions that you specified.
They sound more like fantasy bounds than ‘potential’ bounds, simply because there isn’t 1000x or 10000x more training data in existence for such a future system to train on. (Nor are there any likely pathways for this to change, other than training on the outputs of prior models.)
I understood that. I guess I should have been more explicit about my belief that the amount of training data that would result in training a viable universal simulator would be “all of the text ever created”, and then several orders of magnitude more.