He probably is an INTP, although it’s too early to tell. I am too. That doesn’t really answer the question. :)
irrational
Since we are on the subject of quotes, here’s one from C.S. Lewis, of whom I am not generally a fan, but this is something that struck me when I read it for the first time:
“Oh, Piebald, Piebald,” she said, still laughing. “How often the people of your race speak!”
“I’m sorry,” said Ransom, a little put out.
“What are you sorry for?”
“I am sorry if you think I talk too much.”
“Too much? How can I tell what would be too much for you to talk?”
“In our world when they say a man talks much they mean they wish him to be silent.”
“If that is what they mean, why do they not say it?”
“What made you laugh?” asked Ransom, finding her question too hard.
That specific thing is not a human universal. But the general behavior is, as far as I know. There are always little lies one is supposed to tell. E.g. “no, that woman is not as beautiful as you”, “he looks just like his dad”, “nice to meet you”, “please come again” (but I’ll never invite you). In Russian, in particular, the very act of greeting is often a lie, since the standard greeting literally means “be healthy” and there is effectively no way to “greet” an enemy without wishing him well.
I am in fact not planning to interfere for now.
I don’t disagree necessarily, but this is way too subtle for a kid, so it’s not a practical answer.
Besides, as a semi-professional linguist, I must say you are confusing semantics (e.g. your boxes example) with pragmatics, which is what we are talking about: using words to mean something other than what the dictionary plus propositional logic say they mean. These pragmatic uses are often very confusing because they rely on cultural context, and both kids and foreigners often screw up when dealing with them.
Well, it’s one thing not to give details and another to misreport. Even now, as an adult, I say “I am OK” when I mean “things suck”, and “I am great” when things are OK. I just shift them by a degree in the positive direction. Now, if he is unhappy, should he say “I am fine”? If he is not fine, he is lying.
Truth & social graces
I am not sure I completely follow, but I think the point is that you will in fact update the probability up if a new argument is more convincing than you expect. Since AI can estimate what you expect it to do better than you can estimate how convincing it will make its arguments, it will be able to make all arguments more convincing than you expect.
I am not convinced that 1984-style persuasion really works. I don’t think that one can really be persuaded to genuinely believe something by fear or torture. In the end you can get someone to respond as if they believe it, but probably not to actually do so. It might convince them to undergo something like what my experiment actually describes.
There is some degree to which you should expect to be swayed by empty arguments, and yes, you should subtract that out if you anticipate it.
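To make “subtracting it out” concrete, here is a minimal numeric sketch (my own, with made-up numbers) of conservation of expected evidence, which is the rule doing the work here:

```python
# Conservation of expected evidence: for a coherent Bayesian, the
# *expected* posterior equals the prior, so any anticipated
# persuasive pull is already priced in before you read a word.

prior = 0.2      # P(Z) before opening the book
p_up = 0.5       # chance the arguments land harder than you expected
post_up = 0.35   # your posterior in that case
# E[posterior] = prior pins down the other branch:
post_down = (prior - p_up * post_up) / (1 - p_up)

print(post_down)  # 0.05
assert abs(p_up * post_up + (1 - p_up) * post_down - prior) < 1e-12

# If every branch ended above the 0.2 prior, no assignment of
# probabilities would be consistent: being "more convincing than
# you expect" *reliably* means your expectation was miscalibrated.
```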
Right. I think my argument hinges on the fact that AI knows how much you intend to subtract before you read the book, and can make the book more convincing than that amount.
So the person in the thought experiment doesn’t expect to agree with a book’s conclusion, before reading it.
No, he expects that if he reads the book, his posterior belief in the proposition is likely to be high. But his current prior belief in the truth of the proposition is low.
Also, as I made clear in my update, AI is not perfect, merely very good. I only need it to be good enough for the whole episode to go through, i.e. good enough that you can’t object that a rational person would never believe Z after reading the book and that my story is therefore implausible.
I understand the principle, yes. But it means that if your friend is a liar, no argument he gives needs to be examined on its own merits. But what if he is a liar and he saw a UFO? What if the events “he is a liar” and “there’s a UFO” are not independent? I think that if they are independent, your argument works; if they are not, it doesn’t. If UFOs appear mostly to liars, you can’t ignore his evidence. Do you agree? In my case, they are not independent: it’s easier to argue for a true proposition, even for a very intelligent AI. (Here I assume that these probabilities are always strictly less than 1.)
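Here’s a toy way to make that concrete (my own model and made-up numbers, purely illustrative). Suppose a liar reports a UFO with some fixed probability no matter what actually happened, so that once you know he is a liar his report itself carries no further information; everything then hinges on whether liar-hood correlates with UFOs:

```python
# P(UFO | he is a liar) via Bayes' theorem. In this toy model a known
# liar's *report* is uninformative, but learning that he is a liar
# still shifts P(UFO) whenever the two are dependent.

def p_ufo_given_liar(p_ufo, p_liar_given_ufo, p_liar_given_no_ufo):
    p_liar = p_liar_given_ufo * p_ufo + p_liar_given_no_ufo * (1 - p_ufo)
    return p_liar_given_ufo * p_ufo / p_liar

prior = 0.01
# Independent: being a liar is unrelated to UFOs, so you fall back
# exactly to your prior and may ignore the whole episode.
print(p_ufo_given_liar(prior, 0.5, 0.5))   # 0.01
# Dependent ("UFOs appear mostly to liars"): you cannot fall back
# to the prior; the posterior nearly doubles.
print(p_ufo_given_liar(prior, 0.9, 0.5))   # ~0.018
```

Under independence, “ignore his evidence” is exactly right; under dependence it isn’t, which is the analogy to the AI finding it easier to argue for a true proposition.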
We are running into meta issues that are really hard to wrap your head around. You believe that the book is likely to convince you, but it’s not absolutely guaranteed to. Whether it does surely depends on the actual arguments used. You’d expect, a priori, that if it argues for an X which is more likely, its arguments would also be more convincing. But until you actually see the arguments, you don’t know that they will convince you; it depends on what they actually are. In your formulation, what happens if you read the book and the arguments do not convince you? Also, what if the arguments fail to convince you only because you expected the book to be extremely convincing: is that different from the case where the arguments, taken without this meta-knowledge, fail to convince you?
You can also finish your PhD and then get an industry job. Your field has plenty of research scientist jobs; some of them would be really cutting edge, too.
Also, if your degree is in CS, you should not need a postdoc before applying for faculty jobs either. If it’s EE/CE, I don’t know. If you really want to do theoretical stuff, it pretty much requires a faculty position, perhaps at a small liberal arts college where they don’t care much about grants. Such places exist, and even have good (undergraduate) students.
Oh no, I am claiming that even a perfect reasoner can deceive himself. A normal person can easily do so. Many people who marry someone of a different faith become quite devout in their spouse’s religion. At some point they have to decide to believe something they don’t actually believe. It does not take a superintelligent AI to convince them, a local cleric can do it.
We rarely observe Christians trying to walk on water, even though they should be able to, given enough faith; in fact they act as if it’s impossible. I assume that this is the sort of thing you are talking about? But we also see people trying faith healing even though it doesn’t work. Their model of the world really is different from yours. Likewise with Scientologists and psychiatry: they aren’t faking it. If Z tells me that I must pray in order to be healed rather than take drugs (I have no idea whether it does; probably not), and I do in fact do so, being convinced by the book that I must, would that be sufficient?
There are, I think, lots of people who have as good a model of how the world works as anyone here, and who are still religious. In fact, a Deist who believes that God pushed the button to start the Big Bang may have a model with an extra node in it, subject to Occam’s razor, but it predicts reality equally well, at least until physicists understand the Big Bang better. Many other people hold beliefs of a purely “spiritual” type, with no observable effects.
But I think a Zoroastrian might not qualify, it’s true. So if I read the book and become one, I might be forced to believe that, per [http://en.wikipedia.org/wiki/Zoroastrianism#Basic_beliefs], water was the first element to be created (and that it is in fact an element). I might be clever enough to rationalize it away, like many people do, e.g. “water” really refers to hydrogen here. If I can make myself believe in Ahura Mazda, I think I can also find a way to fit all the other beliefs in.
Oh, but I have a model of what a creationist believes. I can anticipate what arguments they advance and how to “excuse” them (i.e. explain them away) to some extent. Anyone who changed their belief system has this model for their previous system of belief.
How can this be true when different arguments have different strengths and you don’t know what the statement is? Here, suppose you believe that you are about to read a completely valid argument in support of conventional arithmetic. Please update your belief now. Here is the statement: “2+2=4”. What if it had instead been Russell’s Principia Mathematica?
I know this is not your main topic, but are you familiar with Good-Turing estimation? It’s a way of assigning non-arbitrary probability to unobserved events.
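In case it helps, here is a minimal sketch of the basic (unsmoothed) estimator in Python; real applications (e.g. Simple Good-Turing) first smooth the frequency-of-frequency counts, but the core idea is tiny:

```python
from collections import Counter

def good_turing(counts):
    """Basic Good-Turing: the adjusted count for an item seen r times
    is r* = (r+1) * N_{r+1} / N_r, where N_r is the number of distinct
    items seen exactly r times; the total probability mass reserved
    for *unseen* items is N_1 / N."""
    n = Counter(counts.values())   # N_r: frequency of frequencies
    total = sum(counts.values())   # N: total observations
    p_unseen = n[1] / total        # mass assigned to novel events
    adjusted = {item: (r + 1) * n[r + 1] / n[r] / total
                for item, r in counts.items()
                if n[r + 1] > 0}   # undefined when N_{r+1} = 0, hence smoothing
    return p_unseen, adjusted

# Word counts from a tiny made-up corpus:
counts = {"the": 5, "cat": 2, "sat": 2, "on": 1, "mat": 1, "dog": 1}
p0, adj = good_turing(counts)
print(p0)   # 3/12 = 0.25: a quarter of the mass goes to never-seen words
print(adj)  # e.g. "on": ((1+1) * N_2/N_1) / N = (2 * 2/3) / 12 ~ 0.111
```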