I’m curious about the extent to which people:
- agree with this argument,
- expect to find a form of induction that avoids this problem (e.g. by incorporating the anthropic update),
- expect to completely avoid anything like the universal prior (e.g. via UDT)
Isn’t this more metaphysics than actual AI?