Any agent that makes decisions has an implicit decision theory; it just might not be a very good one. I don’t think anyone ever said advanced decision theory was required for AGI, only for robust alignment.