From an AI safety viewpoint, this might greatly increase AI funding and drive talent into the field, and so bring forward the date when we get a general artificial superintelligence.
Agreed. But that’s true for any AI advance. At least this one doesn’t seem to directly increase existential risk (from AI, at least) and does provide some positive value to the world. So my point is more that if AI advances are unavoidable, I’d prefer to see more like this one.