Are there other forums for AI alignment or AI safety and security besides this one where your article could be posted for feedback from perspectives that haven't been shaped by Rationalist or EA thinking?