I don’t see it as dogmatism so much as a verbal confusion. The ubiquity of UFs can be defended using a broad (implicit) definition, but the conclusions typically drawn about types of AI danger and methods of AI safety relate to a narrower definition, where a UF is:
explicitly coded, and/or
fixed and unupdateable, and/or
“thick”, i.e. containing detailed descriptions of goals.