[scenario removed]
But more generally speaking, AI-kill-everyone scenarios boil down to the possibility of other anthropogenic existential risks. If grey goo is possible, AI turns into nanobots. If a multi-pandemic is possible, AI helps design the viruses. If nuclear war plus military robots (the Terminator scenario) can kill everybody, AI is there to help it work smoothly.
Removing the scenario really annoys me. Whether it’s novel or not, and whether it’s likely or not, it seems VANISHINGLY unlikely that posting it makes it more likely, rather than less (or neutral). The exception would be if it’s revealing insider knowledge or secret/classified information, and in that case you should probably just delete it without comment rather than SAYING there’s something to investigate.
You don’t have to describe the scenario, but was it removed because someone might execute it if they saw it?
I got scolded by the LW moderators in a different post; they said there is a policy against brainstorming ways to end the world because it is considered an info hazard. I think this makes sense, and we should be careful about doing that.
I think we should not discuss the details here in the open, so I am more than happy to continue the conversation in private if you fancy. For the public record, I find this scenario very unlikely too.
Do you think any anthropogenic human extinction risks are possible at all?
In 20 years’ time? No, I don’t think so. We can make a bet if you want.
I will delete my comment, but there are even more plausible ideas in that direction.