The ‘evolutionary pressures’ being discussed by CGP Grey are not the direct gradient descent used to train an individual model. Instead, he is referring to the whole set of incentives we as a society put on AI models. Similar to memes—there is no gradient descent on memes.
(Apologies if you already understood this, but it seems your post and Steven Byrnes’s post focus on the training of individual models.)
Fair enough on that difference between the societal-level incentives on AI models and the individual selection incentives on AI models.
My main current response is that I think those incentives are fairly weak predictors of the variance in outcomes, compared to non-evolutionary forces, at least for now.
However, I do think this has interesting consequences for AI governance (since one of the effects is to make societal-level incentives more relevant, compared to non-evolutionary forces).