I don’t think that your typical prison inmate is a perfect Bayesian.
Rather, I think that it should, ideally, be adjusted so that overall utility is maximized (weighting the utility of prisoners equally with that of everyone else), which will be vastly different both from reality and from your model, assuming the above proposition.
Not “almost all are completely convinced”: according to this poll, 61 supposed experts “thought P != NP” (which does not imply that they would bet their house on it), 9 thought the opposite, and 22 offered no opinion. (The author writes that he asked “theorists”, partly people he knew, but also partly by posting to mailing lists; I’m fairly sure he filtered out the crackpots and that enough of the rest really are people working in the area.)
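To see why the raw poll count alone shouldn’t settle the question: here is a hedged sketch (my own assumed numbers, not from the poll) of what happens if you naively treat each of the 61 “P != NP” and 9 “P = NP” answers as an *independent* signal that is right 70% of the time and sum the log-likelihood-ratios from even prior odds. The resulting posterior is absurdly close to 1, which mostly shows that the independence assumption is doing all the work — expert opinions in one community are heavily correlated.

```python
import math

def naive_poll_posterior(n_for, n_against, reliability=0.7, prior=0.5):
    """Posterior for the majority view, naively assuming each expert is an
    independent signal that is correct with probability `reliability`.
    (Assumed parameters for illustration; the correlation between experts
    is exactly what this model ignores.)"""
    lr = reliability / (1 - reliability)  # per-expert likelihood ratio
    log_odds = math.log(prior / (1 - prior)) + (n_for - n_against) * math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

print(naive_poll_posterior(61, 9))  # numerically indistinguishable from 1.0
```

The 52-vote margin contributes about 52 × ln(7/3) ≈ 44 units of log-odds, so the naive model reaches near-certainty from even a mediocre per-expert reliability — far too strong a conclusion if the experts are reinforcing each other rather than judging independently.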
Even that case wouldn’t increase the likelihood of P != NP to 1-epsilon: experts have been wrong in the past, and their greater confidence could stem from more reinforcement through groupthink, or from greater exposure to things they simply misunderstand, rather than from a better overview. Somewhere in Eliezer’s posts, a study is referenced in which an event occurs only 70% of the time when an expert says he is 99% sure; in another referenced study, people raised their subjective confidence in a proposition vastly more than they actually changed their minds as they got greater exposure to an issue — which means an expert’s confidence doesn’t prove much more than the confidence of a non-expert with only light exposure to the issue.
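The calibration point can be made concrete with a small hedged sketch (illustrative numbers only, loosely inspired by the 99%-vs-70% figure above, not taken from the cited study): compare the Bayesian update you get from an expert endorsement taken at face value (“99% sure”) with the update you get if such endorsements are in fact right only 70% of the time.

```python
def posterior(prior, hit_rate):
    """Posterior that a claim is true after an expert endorses it.
    `hit_rate` = P(endorsement | claim true); for simplicity we assume the
    symmetric error rate P(endorsement | claim false) = 1 - hit_rate.
    (A simplifying assumption, not a claim about the referenced studies.)"""
    odds = prior / (1 - prior)
    odds *= hit_rate / (1 - hit_rate)
    return odds / (1 + odds)

print(posterior(0.5, 0.99))  # face value: "99% sure" -> ~0.99
print(posterior(0.5, 0.70))  # actual calibration: only -> ~0.70
```

Taken at face value, the expert’s word moves you from 50% to about 99%; using the measured hit rate, the very same endorsement only moves you to about 70% — a likelihood ratio of roughly 2.3 instead of 99. That is the sense in which miscalibrated expert confidence “doesn’t prove much”.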