AI safety goes about interfering with these applications
Yes, definitely don't do this. Perish the thought. That's not what AI safety is about.
I think it's better to know about these dynamics when forming a world model, and potentially very dangerous not to know, because then they become invisible helicopter blades that you can walk right into. I'm aware that the tradeoffs of researching this kind of thing are complicated.
It's also a good idea to increase readership of The Sequences, HPMOR, the codex, and Raemon's rationality paradigm when it's ready; that will help people stop being the kinds of targets these systems are built for. Getting people off social media would also be a big win, of course.