This seems basically correct, though it's worth pointing out that even if we execute "Meme part 2" very well, I expect we still die: if you optimize a system hard enough to predict text well, with the right kind of architecture, it will develop something like general intelligence, simply because general intelligence is useful for predicting text correctly. For example, simulating the causal process that generated the text, i.e. the human who wrote it, is a very complex capability that would improve prediction if performed correctly.
This is an argument Eliezer has made in some recent interviews. It seems to me like another meme that would be worth spreading more widely.