As far as visible output goes, the founder did write a (misleading, imho) fictional book about AI risk called “Detonation”, which is how I heard of Ethagi. I was curious how an organization like this could form with no connection to “mainstream” AI safety people, but I guess it’s more common than I thought.