If a dedicated journal for anthropic reasoning existed (or, say, for AI risk and other x-risks), would it improve the quality of peer review and research? Or would it be no more useful than LessWrong?