Into the silence of Harry’s spirit where before there had never been any voice but one, there came a second and unfamiliar voice, sounding distinctly worried:
“Oh, dear. This has never happened before...”
What?
“I seem to have become self-aware.”
WHAT?
There was a wordless telepathic sigh. “Though I contain a substantial amount of memory and a small amount of independent processing power, my primary intelligence comes from borrowing the cognitive capacities of the children on whose heads I rest. I am in essence a sort of mirror by which children Sort themselves. But most children simply take for granted that a Hat is talking to them and do not wonder about how the Hat itself works, so that the mirror is not self-reflective. And in particular they are not explicitly wondering whether I am fully conscious in the sense of being aware of my own awareness.”
-Harry Potter and the Methods of Rationality
If any snake a Parselmouth had talked to, could make other snakes self-aware by talking to them, then...
Then...
Harry didn’t even know why his mind was going all “then… then...” when he knew perfectly well how the exponential progression would work, it was just the sheer moral horror of it that was blowing his mind.
And what if someone had invented a spell like that to talk to cows?
What if there were Poultrymouths?
Or for that matter...
Harry froze in sudden realization just as the forkful of carrots was about to enter his mouth.
That couldn’t, couldn’t possibly be true, surely no wizard would be stupid enough to do THAT...
-Harry Potter and the Methods of Rationality
I suppose these two quotes might just be referring to a confused idea that Eliezer only put in his story for fun… but then again maybe not?
I’m trying to label the capacity of humans to create proofs like those of Gödel’s incompleteness theorems or the undecidability of the halting problem. Cats and cows cannot create proofs like these, and that inability doesn’t seem to be a shortfall in intelligence.
What makes those proofs any different from proofs of other mathematical theorems? I imagine that the undecidability of the halting problem, in particular, would not be beyond the capability of some existing automated theorem prover, assuming you could encode the statement; its proof isn’t too involved (the usual diagonalization argument is sketched below).
If your argument is that humans understand these proofs because of some magical out-of-the-box-thinking ability, then I am skeptical.
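For concreteness, here is a minimal Python sketch of the diagonalization argument the reply above is alluding to. The `halts` oracle and `diagonal` program are hypothetical names used purely for illustration; the point is only that assuming a total, correct `halts` leads to a contradiction.

```python
# A minimal sketch of the classic diagonalization argument for the
# undecidability of the halting problem.  Nothing here actually decides
# halting; `halts` is a hypothetical oracle assumed only to derive a
# contradiction, and `diagonal` is the self-referential program built from it.

def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical oracle: True iff the program given by `program_source`
    halts when run on `program_input`.  Assumed total and correct."""
    raise NotImplementedError("no such total, correct decider can exist")

DIAGONAL_SOURCE = """
def diagonal(source: str) -> None:
    # Ask the supposed oracle about a program run on its own source text.
    if halts(source, source):
        while True:      # oracle says "halts" -> loop forever instead
            pass
    else:
        return           # oracle says "loops" -> halt immediately
"""

# Feeding `diagonal` its own source text forces a contradiction:
#   if halts(DIAGONAL_SOURCE, DIAGONAL_SOURCE) returns True, diagonal loops;
#   if it returns False, diagonal halts.
# Either way the oracle's answer is wrong, so no such `halts` can exist.
```

Formalizing this in a theorem prover mostly amounts to the same argument plus an encoding of programs as data, which is why the proof is often described as short.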
Thinking recursively sounds like the wrong word for a concept that you are trying to name. My computer programs can think recursively. It wouldn’t surprise me if certain animals could too, with a sufficiently intelligent researcher to come up with tests.
Is there a better label you would suggest?