Is there a minimal version of the AI risk arguments that are disentangled from these things?
Yes. I’m one of those transhumanist people, but you can talk about AI risk completely separately from that. I’m trying to write up something that compiles the other arguments.