There’s a scarcity of stories about how things could go wrong with AI which are not centered on the “single advanced misaligned research project” scenario. This post (and the mentioned RAAP post by Critch) helps partially fill that gap.
It definitely helped me picture and feel what some of these potential worlds look like, to the degree that I now think something like this (albeit probably slower, as noted in the story) is more likely than the misaligned-research-project disaster.
It is also (1) a pretty good, fun story and (2) explicit about which elements the author considers unlikely, which is virtuous and helps prevent the story's level of detail from being mistaken for plausibility.