Maybe if AGI timelines were still > 100 years, following this idea might have some merit, but we don't have enough time left for ideas that would need multiple decades to show positive results, especially since it's unlikely we would get that many von Neumann/Einstein-tier biological intelligences for research.
DEI
I do believe the "eurocentric" argument is a manifestation of Moloch; it is the new version of "x is the next Hitler" or "y was done by the Nazis". It can be used to dismiss any argument coming from the West and to justify almost anything. For example, it could be used by China or any Latin American country to put an AGI in the government by saying: "AI safety is a eurocentric concept made to perpetuate Western hegemony."
So as a rule of thumb, I refuse to give anyone saying that the benefit of the doubt. In my model, anyone using that argument has a hidden agenda behind it, and even if they don't, the false positives aren't enough to change my mind. It's a net-positive personal policy, sorry not sorry.
"Don't be eurocentric" is not an urgent problem at all; "Don't be needlessly inefficient just to virtue-signal group affiliations" is a bigger problem in the grand scheme of things. What if that user never gets to use the app because they never manage to understand the UI? Also, most developers aren't in a strong enough market position to afford losing users over such trivialities.
How and why would an AI kill us in the next two years? By triggering WW3? An attack of that caliber would leave it without the ability to replace/maintain its own infrastructure. Just curious.