Your general argument rings true to my ears, except the part about AI safety. It is very hard to engage with AI safety without entering the x-risk sphere, as shown by this piece of research from the Cosmos Institute, in which the x-risk sphere accounts for almost two-thirds of total funding (though I have some doubts about the accounting). Your argument about Mustafa Suleyman strikes me as a "just-so" story: I do wish it were replicable, but I would be surprised, particularly given AI safety's sense of urgency.
I'm here because there truly is no better place, and I mean that in both a praiseworthy and an upsetting sense. If you think it's misguided, then we, being on the same side, need to show the strength of our alternative, don't we?