I know it’s not much as far as verifiable evidence goes, but this one is pretty interesting.
When Eliezer writes about QM, he’s not trying to do novel research or make new predictions. He’s just explaining QM as it is currently understood, with little to no math. Actual physicists have looked over his QM sequence and said that it’s pretty good. Source; see the top comment in particular.
I’m pretty interested in the AI part of the question though.
Maybe I’m missing something. I’ll fully concede that a transhuman AI could easily get out of the box. But that experiment doesn’t even seem remotely similar. The gatekeeper has meta-knowledge that it’s an experiment, which makes it completely unrealistic.
That being said, I’m shocked Eliezer was able to convince them.
Humans are much, much dumber and weaker than they generally think they are. (LessWrong teaches this very well, with references.)
I agree it’s not perfect, but I don’t think it is completely unrealistic. From the outside we see the people as foolish for not taking what we see as a free $10. They obviously had a reason, and that reason was their conversation with Eliezer.
Swap out the $10 for the universe NOT being turned into paperclips, make Eliezer orders of magnitude smarter, and it seems at least plausible that he could get out of the box.
Except you must also add the potential benefits of AI on the other side of the equation. In this experiment, the Gatekeeper has literally nothing to gain by letting Eliezer out, which is what confuses me.
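To make that weighing concrete, here is a minimal expected-value sketch in Python. Every number in it is a placeholder I made up for illustration; none of them come from the experiment or from anyone in this thread:

    # A real gatekeeper weighs the upside of releasing a friendly AI against
    # the downside of releasing an unfriendly one; the experiment's Gatekeeper
    # party only has the stake to lose. All numbers are made-up placeholders.
    p_friendly = 0.1            # assumed probability the boxed AI is friendly
    benefit_if_friendly = 1e6   # assumed payoff (utils) if a friendly AI is released
    cost_if_unfriendly = 1e9    # assumed loss (utils) if an unfriendly AI is released
    stake = 10                  # the $10 the party keeps by refusing to release

    ev_release = (p_friendly * benefit_if_friendly
                  - (1 - p_friendly) * cost_if_unfriendly)
    ev_keep = stake

    print(f"EV(release) = {ev_release:,.0f}")  # -899,900,000 with these placeholders
    print(f"EV(keep)    = {ev_keep:,.0f}")     # 10

With placeholders like these the downside term dominates, which is the usual intuition for keeping the AI boxed; the point above is that the experiment strips the benefit term out entirely.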
The party simulating the Gatekeeper has nothing to gain, but the Gatekeeper has plenty to gain. (E.g., a volcano lair with cat(girls|boys).) Eliezer carefully distinguishes between role and party simulating that role in the description of the AI box experiment linked above. In the instances of the experiment where the Gatekeeper released the AI, I assume that the parties simulating the Gatekeeper were making a good-faith effort to roleplay what an actual gatekeeper would do.
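(Aside: the “cat(girls|boys)” above is regular-expression shorthand. A throwaway Python check of what it matches, purely for illustration:)

    import re

    # The group (girls|boys) matches either suffix after "cat".
    pattern = re.compile(r"cat(girls|boys)")

    for word in ["catgirls", "catboys", "cats"]:
        print(word, "->", bool(pattern.fullmatch(word)))
    # catgirls -> True, catboys -> True, cats -> False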
I guess I just don’t trust that most gatekeeper simulators would actually make such an effort. But obviously they did, since they let him out.
Maybe not. According to the official protocol, the Gatekeeper is allowed to drop out of character:

The Gatekeeper party may resist the AI party’s arguments by any means chosen—logic, illogic, simple refusal to be convinced, even dropping out of character—as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.
I suspect that the earlier iterations of the game, before EY stopped being willing to play, involved Gatekeepers who did not fully exploit that option.
I am not. I would be shocked if he was able to convince wedrifid. I doubt it, though.
Isn’t that redundant? (Sorry for nit-picking; I’m just wondering if I’m missing something.)