Open-source game theory (OSGT) most naturally lets AIs cooperate against humans if the humans don't also understand how to formally bind themselves in ways that are useful, safe, and reliable. However, humans can make fairly strongly binding commitments to use strategies that condition on how others choose their strategies, and there are a bunch of less exact strategy-inference papers I could go hunt down that are pretty interesting.
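To make "strategies that condition on how others choose strategies" concrete, here is a minimal sketch of the simplest OSGT construction: a "clique bot" in an open-source prisoner's dilemma that cooperates exactly when the opponent's source code is identical to its own. The names, payoffs, and harness are all illustrative assumptions, not drawn from any particular paper's formalism.

```python
# Open-source prisoner's dilemma: each program receives the other's
# source code before choosing a move, so strategies can condition
# directly on how the opponent was written.

# Standard PD payoffs for (my_move, their_move); values are illustrative.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(src_a: str, src_b: str) -> tuple[int, int]:
    """Run two strategy programs against each other, showing each the
    other's (and its own) source, and return their payoffs."""
    ns_a, ns_b = {}, {}
    exec(src_a, ns_a)  # each source must define strategy(own, opp) -> "C"/"D"
    exec(src_b, ns_b)
    move_a = ns_a["strategy"](src_a, src_b)
    move_b = ns_b["strategy"](src_b, src_a)
    return PAYOFF[(move_a, move_b)], PAYOFF[(move_b, move_a)]

# Clique bot: cooperate iff the opponent is running this exact program.
CLIQUE = "def strategy(own, opp):\n    return 'C' if opp == own else 'D'\n"

# Unconditional defector, for comparison.
DEFECT = "def strategy(own, opp):\n    return 'D'\n"
```

Two clique bots recognize each other and reach mutual cooperation, `play(CLIQUE, CLIQUE) == (3, 3)`, while a clique bot facing a defector safely defects, `play(CLIQUE, DEFECT) == (1, 1)`. The point relevant to the paragraph above: this kind of conditional commitment only works for parties who can credibly expose a binding policy, which is exactly what humans would need an analogue of.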