For example, building a machine that is sceptical of Pascal’s wager doesn’t seem harder than building a machine that is sceptical of other verbal offers unsupported by evidence.
The verbal offer isn’t actually relevant to the problem; it’s just there to dramatize the situation.
I don’t see what’s wrong with the idea that “extraordinary claims require extraordinary evidence”.
Please formulate that maxim precisely enough to program into an AI in a way that solves the problem. Because the best way we currently have of formulating it, i.e., Bayesianism with quasi-Solomonoff priors, doesn’t solve it.
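Roughly (this is the usual informal argument, not a quotation from any particular formal treatment): a Solomonoff-style prior only penalises the mugger’s hypothesis exponentially in its description length, while the promised payoff can be made to grow far faster, so the expected-utility calculation still comes out in the mugger’s favour:

\[
\mathbb{E}[U(\text{pay})] \;\approx\; 2^{-K(h)} \cdot N \;-\; c ,
\]

where h is the mugger’s hypothesis, K(h) its description length, N the promised payoff, and c the small cost of paying. A short sentence suffices to name numbers like 3^^^^3 from the standard statement of the thought experiment, so N can always be chosen with N ≫ 2^K(h), and the tiny-probability, astronomically-large-payoff term dominates no matter how small the prior.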
The idea of devoting more resources to investigating claims when they involve potential costs involves decision theory rather than mere prediction. However, vanilla reinforcement learning should handle this OK. Agents that don’t investigate extraordinary claims will be exploited and suffer, and a conventional reinforcement learning agent can be expected to pick up on this just fine. Of course I can’t supply source code (or else we would be done), but that’s the general idea.
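As a concrete toy version of that general idea (everything below is invented for illustration: the single-state environment, the payoff numbers, and the tabular Q-learning setup; it is a sketch of the learning dynamic, not a proposed solution):

```python
import random

# A toy, single-state "mugging" environment: each step the agent either PAYs the
# mugger or REFUSEs. In this made-up setup paying always costs 5 and the promised
# astronomical reward never arrives; refusing costs nothing. A vanilla Q-learner
# therefore converges on refusing, because the losses it actually experiences are
# finite and observable.
PAY, REFUSE = 0, 1
ACTIONS = [PAY, REFUSE]

def step(action):
    return -5.0 if action == PAY else 0.0

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}            # action-value estimates
    for _ in range(episodes):
        if rng.random() < epsilon:           # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        q[a] += alpha * (step(a) - q[a])     # one-step bandit update
    return q

if __name__ == "__main__":
    print(train())   # roughly {0: -5.0, 1: 0.0}: the trained agent refuses to pay
```

Note that this toy hard-codes the assumption that the promised payoff never materialises, which is exactly the point in dispute; it only illustrates the claim that an agent which is actually being exploited will learn to stop paying.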
The idea of devoting more resources to investigating claims when they involve potential costs involves decision theory rather than mere prediction.
All claims involve decision theory in the sense that you’re presumably going to act on them at some point.
However, vanilla reinforcement learning should handle this OK. Agents that don’t investigate extraordinary claims will be exploited and suffer, and a conventional reinforcement learning agent can be expected to pick up on this just fine.
Would these agents also learn to pick up pennies in front of steam rollers? In fact, falling for Pascal’s mugging is just the extreme case of refusing to pick up pennies in front of a steam roller; the question is where you draw the line between the two.
However, vanilla reinforcement learning should handle this OK. Agents that don’t investigate extraordinary claims will be exploited and suffer, and a conventional reinforcement learning agent can be expected to pick up on this just fine.
Would these agents also learn to pick up pennies in front of steam rollers?
That depends on the agent’s utility function.
In fact, falling for Pascal’s mugging is just the extreme case of refusing to pick up pennies in front of a steam roller; the question is where you draw the line between the two.
The line (if any) is drawn as a consequence of specifying a utility function.
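For instance, a bounded utility function can draw that line where a linear one cannot; in the sketch below all probabilities, payoffs, and the particular saturating utility are made-up illustrative assumptions:

```python
import math

# Two gambles, each compared against doing nothing (utility 0). All numbers are invented.
#   penny:   gain 0.01 with prob 1 - 1e-9, lose 1e6 (flattened) with prob 1e-9
#   mugging: refusing loses nothing with prob 1 - 1e-20 and 1e30 with prob 1e-20;
#            the alternative, paying, is a sure loss of 5

def eu(outcomes, u):
    """Expected utility of a list of (probability, payoff) pairs under utility u."""
    return sum(p * u(x) for p, x in outcomes)

linear  = lambda x: x
bounded = lambda x: math.tanh(x / 100.0)   # saturates, so astronomical stakes stop dominating

penny_pickup  = [(1 - 1e-9, 0.01), (1e-9, -1e6)]
refuse_mugger = [(1 - 1e-20, 0.0), (1e-20, -1e30)]
pay_mugger    = [(1.0, -5.0)]

for name, u in [("linear", linear), ("bounded", bounded)]:
    print(name,
          "| picks up the penny:", eu(penny_pickup, u) > 0,
          "| refuses the mugger:", eu(refuse_mugger, u) > eu(pay_mugger, u))
```

With the linear utility the astronomical loss dominates and the agent pays the mugger; with the saturating utility it still picks up the penny but refuses to pay. Where exactly the line falls depends on the numbers, which is the point: it comes out of the specified utility function and the probabilities.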