I think in a scenario where LLMs are already superhuman manipulators, your personal decision not to interact with them doesn't matter at all. My personal coping mechanism for such timelines is the thought that dying from a rogue AI is so much cooler than dying from something as trivial as old age, infectious disease, or war with other primates.
But such scenarios do not seem likely. Eliezer didn't see LLMs coming, so his warnings are not exactly on point. LLMs are superhuman at predicting the next token, but not at manipulation. They are not power-seeking themselves, though they could probably be part of a power-seeking entity if arranged in the right pattern. But then we would be able to easily read the mind of such an entity. My P(Doom) is in the tens of percent, but probably less than 50%.