“I can fly” doesn’t sound like a particularly high-utility false belief.
True, but that’s part of my point: the problem does not specify that the false belief has high utility, only that it has the highest utility among false beliefs. That sets no lower bound.
Additionally, any false belief will bring you into conflict with reality eventually. “I can fly” just illustrates this dramatically.
Of course most false beliefs will have some negative-utility consequences. That does not show that all false beliefs are net negative in utility. The vastness of the space of possible beliefs suggests there are likely to be many approximately harmless false ones, and some very beneficial ones, despite the general tendency of false beliefs toward negative utility. In fact, Kindly gives an example of each here.
In the example of believing some sufficiently hard-to-factor composite to be prime, you would not naturally be able to cause a conflict anyway, since demonstrating that it is not prime, say by exhibiting a factor, is computationally out of reach. In the FAI example, it might have to keep you in the dark for a while and then fool you into thinking that someone else had created an FAI separately, so you would never have to learn that your game was actually an FAI. The negative utility from this conflict resolution would be negligible compared to the benefits. The negative utility arising from belief-conflict resolution in your example of “I can fly” does not come close to generalizing to all possible false beliefs.
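To make the factoring example concrete, here is a rough sketch of the asymmetry it relies on (my own illustration, not from Kindly's comment; it assumes Python with sympy installed, and the bit sizes are arbitrary choices): constructing such a composite takes about a second, while refuting its primality by producing a factor is astronomically expensive.

```python
# Illustrative sketch only: build a composite whose believed primality
# would never naturally collide with reality. Assumes sympy is
# installed; the bit sizes are arbitrary, not from the original thread.
from sympy import randprime

BITS = 1024  # each prime factor is ~1024 bits, so n is ~2048 bits

p = randprime(2**(BITS - 1), 2**BITS)  # generated in roughly a second
q = randprime(2**(BITS - 1), 2**BITS)
n = p * q  # the "sufficiently hard-to-factor composite"

# Refuting "n is prime" by exhibiting a factor through trial division
# needs on the order of sqrt(p) ~ 2**512 divisions; at 10**9 divisions
# per second, that works out to around 10**137 years.
trial_divisions = 2 ** (BITS // 2)
years = trial_divisions / 1e9 / (60 * 60 * 24 * 365)
print(f"n has {n.bit_length()} bits; ~{years:.1e} years to refute by trial division")
```

(This only covers the brute-force route, of course; the point is just that nothing in ordinary life would ever force this particular belief into a conflict with reality.)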