Strong agree. To pile on a bit, I think I’m confused about what Eliezer is imagining when he imagines the content of those 7.5MB.
I know what I’m imagining is in those 7.5MB: The within-lifetime learning part has several learning algorithms (and corresponding inference algorithms), neural network architectures, and (space- and time-dependent) hyperparameters. The other part calculates the reward function and various other loss functions, and handles lots of odds and ends like regulating heart rate and executing various other innate reactions and reflexes. So for me, these are 7.5MB of more-or-less the same kinds of things that AI & ML people are used to putting into their GitHub repositories.
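To make “the same kinds of things that go in a GitHub repo” concrete, here’s a minimal Python sketch of the flavor of code I have in mind. Every name and number below is a hypothetical placeholder of my own, just to gesture at the two parts, not a claim about the actual brain:

```python
# Purely illustrative: hypothetical names and made-up numbers, just to show
# the *kind* of content, not anything from the post or from Eliezer.
import numpy as np

# Part 1: within-lifetime learning -- learning rules, architectures, and
# (space- and time-dependent) hyperparameters.
def learning_rate(region: str, age_days: float) -> float:
    """Hyperparameter that varies across brain regions and over development."""
    base = {"cortex": 1e-3, "striatum": 3e-4}[region]  # made-up values
    return base * np.exp(-age_days / 5000.0)           # e.g. plasticity decaying with age

def update_weights(w: np.ndarray, grad: np.ndarray,
                   region: str, age_days: float) -> np.ndarray:
    """One of presumably several learning rules."""
    return w - learning_rate(region, age_days) * grad

# Part 2: the hardcoded part -- reward function, other loss functions, and
# odds and ends like innate reactions and reflexes.
def reward(blood_sugar: float, pain_signal: float) -> float:
    """Toy innate reward function."""
    return -abs(blood_sugar - 1.0) - 10.0 * pain_signal

def innate_reflex(pain_signal: float) -> str:
    """Toy hardcoded reflex, not learned."""
    return "withdraw_limb" if pain_signal > 0.5 else "no_op"

# Toy usage:
w = update_weights(np.zeros(8), np.ones(8), region="cortex", age_days=100.0)
```

The point is just that code like this specifies *rules and machinery*, not trained parameters, and a lot of such machinery fits comfortably in 7.5MB.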
By contrast, Eliezer is imagining… I’m not sure. That evolution is kinda akin to pretraining, and the 7.5MB are more-or-less specifying millions of individual weights? That I went wrong by even mentioning learning algorithms in the first place? Something else??