Thank you, that’s what I was trying to get at, but didn’t know how.
irrational
Fair enough. But the arguments themselves must also update my belief. It should never be the case that this meta stuff completely cancels out an argument that I think is valid. That would be irrational, just like refusing to listen to someone because they belong to the enemy camp.
If you grant me that I am motivated to believe in something false, I think it would not take a super-intelligent AI to convince me. I could go to a monastery in Tibet, isolate myself from society, ask the best of them to argue with me every day, study all their books, and read nothing at all that contradicts them. I think it might work. As I pointed out, there are historical examples of people converting to a religion they initially despised. Would my argument not work equally well in this case?
Maybe, but this meta stuff is giving me a headache. Should I update my belief about my belief, or just my plain belief? :)
Suppose I am going to read a book by a top Catholic theologian. I know he is probably smarter than me: given the number of priests in the world and their average IQ and intellectual ability, I figure the smartest of them is really, really smart, better read than I am, and armed with the very best arguments the Church has found in 2000 years. If I read his book, should I take that into account and discount his evidence because of this meta-information, or should I evaluate the evidence on its merits?
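For concreteness, here is a rough back-of-the-envelope version of that estimate. The priest count and the normal IQ model are my own illustrative assumptions, not figures from anywhere:

```python
# Rough Fermi estimate of "the smartest of them" (illustrative numbers only).
from statistics import NormalDist

n_priests = 400_000                  # assumed order of magnitude
iq = NormalDist(mu=100, sigma=15)    # standard IQ normalization

# The (1 - 1/N) quantile is a common rough proxy for the expected maximum
# of N independent draws from the distribution.
smartest = iq.inv_cdf(1 - 1 / n_priests)
print(f"Smartest of {n_priests:,}: IQ around {smartest:.0f}")  # roughly 168
```

And that is before adding any selection for theological talent or the 2000 years of accumulated arguments, so "probably smarter than me" seems safe.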
It’s the very fallacy Eliezer argues against, where people who know about clever arguers use this fact against everyone else.
I see now. No, my P(any book on X will convince me of X) is high for all X. P(religion X is true) is low for all X, except the X I actually believe in.
in general minimize the number of logical errors / tricks used.
For a true proposition, it should be possible to bring that number to zero. For everything else, use as few as possible (even if that still means thousands). It’s probably a good policy anyway, as I originally claimed.
You have a very compelling point and I have to think about it. But there is meta-reasoning involved which is really tricky. As I start to read the book, I have some P(Zoroastrianism is true). It’s non-zero. Now I read the first chapter, and it contains some positive evidence for Z. I expected to see some evidence, but this is actual evidence I have not previously considered. Should I adjust my P(Z is true) upward? I think I must. So, if the book has many chapters, I must either get close to 1 or start converging to some p < 1. Are you arguing for the latter?
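To make the two branches concrete, here is a minimal numerical sketch in the odds form of Bayes’ theorem, with made-up likelihood ratios for the chapters (all numbers are illustrative assumptions):

```python
# Posterior after multiplying the prior odds by each chapter's likelihood ratio.
def posterior(prior, likelihood_ratios):
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.001  # some small but non-zero P(Z is true)

# Branch 1: every chapter carries genuinely new, independent evidence,
# say each chapter is twice as likely if Z is true. Posterior heads to 1.
print(posterior(prior, [2.0] * 30))                            # ~0.999999

# Branch 2: later chapters mostly repeat what I already expected, so their
# likelihood ratios shrink toward 1. Posterior converges to some p < 1.
print(posterior(prior, [1 + 0.5 ** k for k in range(1, 31)]))  # ~0.002
```

So the question is whether the chapters keep supplying independent evidence (branch 1) or whether, once I have priced in "a clever arguer will show me evidence like this", the ratios shrink and I converge to some p < 1 (branch 2).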
I also know many cases similar to what you describe, which is why I tried to come up with this argument.
Here’s another link about Eliezer arguing against self-deception. Perhaps he is only claiming that it is very hard, not impossible.
Actually, I take back the first part. I have some prior P(Zoroastrianism is true). The fact that P(the book about Zoroastrianism uses spurious reasoning designed to convince me it’s true, and it actually is true) is lower is irrelevant. I don’t care if it’s true; I only care about Zoroastrianism itself. Besides, with the initial condition on the reasoning used, this second probability is also bounded, because if Zoroastrianism is true, the book is in fact perfectly valid. So P(the book is lying) = P(Zoroastrianism is false).
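Spelling out that last step, under the initial condition that the AI must use a valid argument whenever one exists: P(spurious | Z) = 0, because a valid argument for Z then exists and must be used, and P(spurious | not Z) = 1, because no valid argument exists for a false proposition. So

$$P(\text{spurious}) = P(\text{spurious}\mid Z)\,P(Z) + P(\text{spurious}\mid \neg Z)\,P(\neg Z) = 0\cdot P(Z) + 1\cdot P(\neg Z) = P(\text{Zoroastrianism is false}).$$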
Thanks, I was hoping someone would do a detailed critique:
I took that into account, and my prior that I would ever believe it was really low.
Was it? See the following passage:
I know I would think he’s not there after I read it
So, I believe that any one of these books would very likely convince me of its truth. Are you saying that I should therefore have a zero prior for each of them? I think not. It needs to be some fixed value. But the AI can estimate it and provide enough evidence to override it. Should I also take this into account? The AI will take that into account as well, and we go down a path of infinite adjustments that converges either to 0 or to a positive number. I think in the end I can’t assign zero probability, and infinitesimals don’t exist, so it has to be a positive number. And at that point I have lost.
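Here is a toy model of that regress, with made-up numbers, just to illustrate the two possible limits:

```python
# Toy model of the "infinite adjustments" regress (numbers are illustrative).
prior = 0.05   # assumed starting P(the book's thesis is true)

# If every round applies the same fixed discount, the prior is driven to 0,
# which amounts to assigning a zero prior after infinitely many rounds.
p = prior
for _ in range(100):
    p *= 0.5
print(p)   # effectively 0, which I reject

# If the discounts shrink each round (round k multiplies by 1 - 0.5**k),
# the adjustments converge to a positive number, which the AI can estimate
# and simply out-argue with enough evidence.
p = prior
for k in range(1, 101):
    p *= 1 - 0.5 ** k
print(p)   # converges to a positive limit, about 0.014 here
```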
For the second part, “Why should the AI do that” is a rhetorical question, obviously not an argument.
As for “Valid argument is the best way to demonstrate the truth of something that is in fact true”: that’s a good point, and it does not have to be true. However, it’s not critical. The AI could have picked a valid argument, if it so desired. In fact, I can add this to the initial conditions: the AI must pick a valid argument for any proposition it argues, if one exists, and in general minimize the number of logical errors / tricks used.
Thanks for the link. I am really curious how he did it:)
On self-deception
I think destruction of the environment, even in unpopulated places, is indeed not a victimless crime, since it can have various external consequences.
Fifth, there are victimless transgressions, such as necrophilia, consensual sibling incest, destruction of (unpopulated) places in the environment, or desecration of a grave of someone who has no surviving relative. Empathy makes no sense in these cases.
It is also unclear to me that these should be subject to any moral judgement.
Believing in Christianity while practicing the 613 commandments would not be OK.
I always wonder about that. I know it’s true, but I am not sure what argument they put forward to explain it. I can’t remember anything in the Tanach that requires one to think anything specific.
I believe you may be right for Christianity, but not for Judaism, for instance. For Judaism, I think, it is not very relevant why you act righteously, as long as you do. As long as you don’t eat pork, worship God, don’t light a fire during Shabbat, and do the other 600+ things, you are probably OK. I am not a practicing Jew, though, so someone may correct me.
I am not completely sure what you mean.