I haven’t heard of it either, but I’m not too surprised by that. Since Superintelligence came out, I’ve heard about organizations like this every few months: no one I know works on them, they have no visible output, and they usually disappear after a year or two. I’m not entirely sure what the point of them is. Maybe they do valuable work completely behind the scenes, maybe it’s people jumping on the AI Safety bandwagon to sell something, or maybe it’s just enthusiastic individuals who are excited about AI Alignment and want to avoid a gap in their resume while they spend a bunch of time thinking about it.
As for visible output, the founder did write a (misleading, imho) fictional book about AI risk called “Detonation”, which is how I heard of Ethagi. I was curious how an organization like this could form with no connection to “mainstream” AI safety people, but I guess it’s more common than I thought.