Since the AI-Box experiment is shrouded in secrecy, I have to assign a significant probability that it is a simple hoax: that the people you “fooled” were in on it, that you used a technicality, or that there is a genuine effect which digital urban legend has blown out of all proportion.
However, I am intrigued. Will you email me and tell me this odd thing, if I promise to keep it secret?
My suspicion would be that it is related to:
“I happen to have been born at what looks like a rather special time in the evolution of the human race, and I also happen to be smart and lucky enough to understand this fact when the vast majority of other people don’t, which seems a priori ridiculously unlikely if my reference class is the set of all humans, or even of all humans in the same region of personality and intelligence space. This induces various bits of anthropic paranoia, such as ‘I am in fact a simulation designed to get to the bottom of human values by an AI’.”
Or perhaps something bizarre involving conscious experience (the thing in the world that I am most thoroughly confused about), anthropics, and AGIs simulating me.
Did Eliezer have a specific thing in mind? I thought he meant that, like in the AI-Box experiment, he suspects a human could already do what it’s being predicted a superintelligence could not, without yet knowing how.
I can have an intuition about the solvability of a problem without much clue about how to solve it, and definitely without a set of possible solutions in mind.
Well, if he didn’t have a specific thing in mind, he must have had a whole set of things in mind, so I urge him to pick one of them.
Yes, but this boils down to:
-- “I think I can tell you LOTS of things about reality that will freak you out”
-- what, exactly?
-- I don’t know! I just have a strong intuition!
-- Well I have a strong intuition that you can’t…
Maybe he has a mathematical model.