There is some degree to which you should expect to be swayed by empty arguments, and yes, you should subtract that out if you anticipate it. But if the book is a lot more compelling than that, then the book is probably above average in both arguing skill and actual evidence. You can no longer discount it as entirely empty, but neither should you assume that all of the “excess” convincing came from evidence; the book could just be unusually well written. You have to balance the improbabilities of evidence vs. writing, and update on however much evidence that balancing implies.
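To make “balance the improbabilities” concrete, here is a minimal sketch under a toy additive Gaussian model; the decomposition, the priors, and the helper name split_excess are all my illustrative assumptions, not anything specified in the exchange.

```python
# Toy model: observed convincingness decomposes additively,
#   convincingness = evidence + writing_skill,
# with independent Gaussian priors on each term. The posterior then splits
# any surprise in proportion to the prior variances.

def split_excess(c_obs, mu_e, var_e, mu_w, var_w):
    """Posterior means of evidence and writing skill, given observed
    convincingness c_obs = evidence + writing_skill."""
    excess = c_obs - (mu_e + mu_w)   # how much more convincing than expected
    k = var_e / (var_e + var_w)      # share of the excess credited to evidence
    return mu_e + k * excess, mu_w + (1 - k) * excess

# A book 10 units more convincing than the average book, with equal prior
# uncertainty about evidence and writing skill:
evidence_post, writing_post = split_excess(c_obs=10.0,
                                           mu_e=0.0, var_e=1.0,
                                           mu_w=0.0, var_w=1.0)
print(evidence_post, writing_post)   # 5.0 5.0 -- the excess splits evenly
```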
Usually, the uncertainty grows with the size of the thing you’re trying to measure. This means that when thinking about super-duper-well-written books, the uncertainty in the writing skill gets really big. And so when balancing the improbabilities of evidence vs. writing, the evidence barely has to do any balancing at all—the writing skill just washes it out.
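In the toy model above, the evidence’s share of the update is var_e / (var_e + var_w), so the washing-out is easy to exhibit; the scaling law for var_w below is purely an assumption of this sketch.

```python
# Suppose the writing-skill variance grows with the size of the observed
# convincingness (a stand-in for "uncertainty grows with the size of the
# thing you're trying to measure").
var_e = 1.0
for c_obs in [2.0, 10.0, 100.0]:
    var_w = c_obs ** 2               # assumed scaling; purely illustrative
    share = var_e / (var_e + var_w)  # evidence's share of the update
    print(f"c={c_obs:6.1f}  evidence share = {share:.4f}")
# c=2.0 -> 0.2000, c=10.0 -> 0.0099, c=100.0 -> 0.0001:
# the writing-skill uncertainty washes the evidence out.
```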
If the amount of evidence presented is the same, it’s better to hear about the truth from a child than from an orator, because the child doesn’t have all those orating skills mucking up your signal-to-noise.
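In the same toy model, the child-vs-orator asymmetry is just a difference in var_w; the particular variance numbers here are assumptions.

```python
# Identical observed convincingness from two speakers; the child's delivery
# adds far less rhetorical noise (smaller var_w).
c_obs, var_e = 3.0, 1.0
for speaker, var_w in [("child ", 0.01), ("orator", 25.0)]:
    k = var_e / (var_e + var_w)      # share credited to evidence
    print(f"{speaker}: posterior evidence = {k * c_obs:.2f}")
# child : ~2.97 -- nearly all the convincingness reads as evidence
# orator: ~0.12 -- most of it reads as orating skill
```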
“There is some degree to which you should expect to be swayed by empty arguments, and yes, you should subtract that out if you anticipate it.”
Right. I think my argument hinges on the fact that the AI knows how much you intend to subtract before you read the book, and can make the book more convincing than that amount.
I don’t think it’s okay to have the AI’s convincingness be truly infinite, in the full ∞ − ∞ = undefined sense. Your math will break down. It’s safer to represent “suppose there’s a super-good arguer” by making the convincingness finite, but larger than every other scale in the problem.
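A quick sketch of the point, in the same toy setup (the specific magnitude 1e12 is an arbitrary choice, just meant to dwarf the O(1) priors):

```python
import math

# A literally infinite convincingness breaks the arithmetic: any quantity
# of the form inf - inf evaluates to nan ("undefined").
print(math.inf - math.inf)           # nan

# Finite but larger than every other scale in the problem stays well-defined:
c_obs, var_e, var_w = 1e12, 1.0, 1.0
k = var_e / (var_e + var_w)
print(k * c_obs, (1 - k) * c_obs)    # 500000000000.0 twice: huge, but defined
```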