Depending on how you define “utility”, I think trust could be seen as a kind of utility signal: people trust someone or something because they believe it is beneficial to them, respects their values, and so on. One advantage is that you don’t have to specify exactly what those values are; an honest trust-maximizer would discover them on its own and try to adhere to them, because doing so increases trust. Another advantage is the asymmetry described above, which hopefully makes deception less likely (though this remains an open problem). However, a trust-maximizer could probably be seen as just one special kind of utility maximizer, so there isn’t a fundamental difference.