To follow up, this might have big implications for understanding AGI. First of all, it’s possible that we’ll build AGIs that aren’t like that, and that do have final goals in the traditional sense—e.g. because they are a hybrid of neural nets and ordinary software, perhaps involving explicit tree search, or because SGD is more powerful at coherentizing the neural net’s goals than whatever goes on in the brain. If so, then we’ll really be dealing with a completely different kind of being than humans, I think.
Secondly, I discussed this three years ago in my LessWrong post “What if memes are common in highly capable minds?”