This is intended as an interim solution, i.e. you would expect to transition to using a “correct” prior before accessing most of the universe’s resources (say within 1000 years). The point of this approach is to avoid losing influence during the interim period.
If there are multiple unaligned AIs with different beliefs, you would take an average of their beliefs weighted by their current influence. As their influence changed, you would update the weights accordingly.
(This might result in an incoherent / Dutch-bookable set of beliefs, in which case you are free to run the Dutch book and do even better.)
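To make the weighting concrete, here is a minimal sketch of that influence-weighted mixture, assuming each AI's beliefs are represented as a probability distribution over a common set of outcomes (the function name, outcomes, and numbers below are purely illustrative, not part of the proposal):

```python
def mixture_belief(beliefs, influence):
    """Combine several AIs' beliefs into one mixture, weighted by influence.

    beliefs:   list of dicts mapping outcome -> probability
    influence: list of non-negative weights (each AI's current influence)
    """
    total = sum(influence)
    weights = [w / total for w in influence]
    outcomes = set().union(*(b.keys() for b in beliefs))
    return {o: sum(w * b.get(o, 0.0) for w, b in zip(weights, beliefs))
            for o in outcomes}

# Two AIs with different beliefs about an event E.
beliefs = [{"E": 0.9, "not E": 0.1}, {"E": 0.2, "not E": 0.8}]

influence = [3.0, 1.0]                        # first AI currently has 3x the influence
print(mixture_belief(beliefs, influence)["E"])  # 0.725

# As influence shifts, the weighting is updated and the mixture re-computed.
influence = [1.0, 1.0]
print(mixture_belief(beliefs, influence)["E"])  # 0.55
```

A mixture over a single set of outcomes like this is coherent on its own; the Dutch-book concern above arises when such averages are taken separately across many related questions, so the combined beliefs need not hang together.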