Thanks, I was hoping someone would do a detailed critique:
I took that into account, and my prior was really low that I would ever believe it.
Was it? See the following passage:
I know I would think he’s not there after I read it
Also:
“Why should the AI do that” is an argument from ignorance, and “Valid argument is the best way to demonstrate the truth of something that is in fact true.” is not true; it’s not even properly wrong, since it assumes the mind projection fallacy.
So, I believe that any of these books would very likely convince me of its truth. Are you saying that I should therefore have a zero prior for each of them? I think not. It needs to be some fixed value. But the AI can estimate it and provide enough evidence to override it. Should I take this into account as well? Then the AI will take that into account too, and we go down a path of infinite adjustments which converges either to 0 or to a positive number. In the end I can’t assign zero probability, and infinitesimals don’t exist, so it has to be a positive number. And at that point I have lost.
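(A sketch of why the regress has to end this way, in my notation rather than anything from the thread: in odds form, Bayes’ theorem reads

\[
O(X \mid E) \;=\; \frac{P(E \mid X)}{P(E \mid \neg X)} \cdot O(X),
\]

so for any positive prior odds \(O(X) = r\) there is some finite likelihood ratio that pushes the posterior odds past any chosen threshold, while \(r = 0\) cannot be moved by any finite amount of evidence. That is the sense in which any fixed positive prior can, in principle, be overridden.)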
For the second part, “Why should the AI do that” is a rhetorical question, obviously not an argument.
As for “Valid argument is the best way to demonstrate the truth of something that is in fact true.”: it does not have to be true; that’s a good point. However, it’s not critical. The AI could have picked a valid argument, if it so desired. In fact, I can add this to initial conditions: AI must pick a valid argument for any proposition it argues, if it exists, and in general minimize the number of logical errors / tricks used.
As an atheist, your prior was low that the Christianity book would convince you, but as a Zoroastrian, your prior is now high that the Christianity book would convince you? I’m saying that you seem to have changed your opinion about the books.
I can add this to initial conditions: AI must pick a valid argument for any proposition it argues
(Can’t find SMBC Delphic comic. Looking.)
in general minimize the number of logical errors / tricks used.
It’s arguing for false propositions. You can specify a “sudden volcanic eruption sufficient to destroy every island in Indonesia that minimizes harm to humans”, but don’t be surprised if a few people are inconvenienced by it, considering what the minimum requirements to meet the first condition are.
I see now. No, my P(any book on X will convince me of X) is high, for all X. P(religion X is true) is low for all X, except the X I actually believe in.
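(Taking these two statements at face value and plugging them into Bayes’ theorem, with purely illustrative numbers of my own: if P(convinced by the book on X | X) and P(convinced by the book on X | not-X) are both close to 1, then

\[
P(X \mid \text{convinced}) \;=\; \frac{P(\text{convinced} \mid X)\,P(X)}{P(\text{convinced} \mid X)\,P(X) + P(\text{convinced} \mid \neg X)\,\bigl(1 - P(X)\bigr)} \;\approx\; P(X).
\]

For example, with both likelihoods at 0.99 and a prior of 0.01, the posterior is still 0.01; on this accounting, the bare fact of being convinced carries almost no information by itself.)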
in general minimize the number of logical errors / tricks used.
For a true proposition, it should be possible to bring that number to 0. For everything else, use as few as possible (even if that still means thousands). It’s probably a good policy anyway, as I originally claimed.
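(For what it’s worth, this policy can be written as a single minimization; the formalization is mine, not wording from the thread. Among the arguments \(\mathcal{A}(p)\) the AI could offer for a proposition \(p\), it picks

\[
A^*(p) \;\in\; \arg\min_{A \in \mathcal{A}(p)} \#\{\text{invalid steps in } A\},
\]

and the claim above is that for a true \(p\) the minimum is 0, which recovers the “valid argument whenever one exists” clause automatically.)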
There are a few hundred people in deep caves on the Anatolian plateau who thank you for minimizing the force of the Indonesian caldera, sparing them and allowing them to attempt to continue the human race.
The magnitude of the wrongness isn’t really an issue. The point was that with the rule that “real arguments have to be used when available”, he can think that the book he just read convinced him with real arguments.
I was wrong about the importance of this factor.
Actually, I take back the first part. I have some prior P(Zoroastrianism is true). The fact that P(the book about Zoroastrianism uses spurious reasoning designed to convince me it’s true, and it actually is true) is lower is irrelevant; I don’t care about that conjunction, I only care about Zoroastrianism itself. Besides, with the initial condition on the reasoning used, this second probability is also bounded, because if Zoroastrianism is true the book is in fact perfectly valid. So P(the book is lying) = P(Zoroastrianism is false).
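(Spelling out that last equality, as I read it: the added initial condition says the AI uses a valid argument whenever one exists, and the comment assumes that if Zoroastrianism is true the book’s argument is in fact valid. Under those two assumptions the events coincide,

\[
\{\text{the book lies}\} \;=\; \{\text{no valid argument for Zoroastrianism exists}\} \;=\; \{\text{Zoroastrianism is false}\},
\]

and equal events have equal probabilities, which gives P(the book is lying) = P(Zoroastrianism is false).)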