Does the inner / outer distinction complicate the claim that all current ML systems are utility maximizers? Gradient descent performs a straightforward kind of optimization during the training phase. But once the model is trained and in production, it isn’t obvious that the “utility maximizer” lens is always helpful for understanding its behavior.
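To make the contrast concrete, here is a minimal PyTorch sketch (the toy model, data, and hyperparameters are just illustrative assumptions): the training loop explicitly descends a loss, while the deployed model only runs a fixed forward pass with no objective anywhere in the code.

```python
import torch
import torch.nn as nn

# Toy model and data, purely illustrative.
model = nn.Linear(4, 1)
x = torch.randn(64, 4)
y = torch.randn(64, 1)

# Training phase: gradient descent explicitly minimizes a loss
# (the "outer" optimization the question refers to).
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # gradients of the explicit objective
    optimizer.step()  # parameters move to reduce that objective

# Production / inference phase: no loss, no gradients, no search.
# The frozen parameters are simply applied to new inputs, so any
# "optimization" here would have to live inside the learned function
# itself rather than in this code.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 4))
```

The point of the sketch is just that the explicit objective lives entirely in the training loop; whether the deployed forward pass is still usefully described as maximizing anything is exactly what seems unclear.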