How did you win any of the AI-in-the-box challenges?
http://news.ycombinator.com/item?id=195959
“Oh, dear. Now I feel obliged to say something, but all the original reasons against discussing the AI-Box experiment are still in force...
All right, this much of a hint:
There’s no super-clever special trick to it. I just did it the hard way.
Something of an entrepreneurial lesson there, I guess.”
I know that part. I was hoping for a bit more...
Here’s an alternative question if you don’t want to answer bogdanb’s: When you won AI-Box challenges, did you win them all in the same way (using the same argument/approach/tactic) or in different ways?
Something tells me he won’t answer this one. But I support the question! I’m awfully curious as well.
Perhaps this would be a more appropriate version of the above:
What suggestions would you give to someone playing the role of an AI in an AI-Box challenge?
Voted down. Eliezer Yudkowsky has made clear he’s not answering that, and it seems like an important issue for him.
Voted back up. He will not answer, but there's no harm in asking. In fact, asking serves to raise awareness both of the surprising (to me, at least) result and of the importance Eliezer places on the topic.
Yes, there is harm in asking. Provoking people to break contractual agreements they’ve made with others and have made clear they regard as vital, generally counts as Not. Cool.
In this case, though, it's clear that Eliezer wants people to get something out of knowing about the AI-box experiments. That's my extrapolated Eliezer volition, at least. Since I, and many others, can't get anything out of the experiments without knowing what happened, I feel it is justified to question Eliezer where we see a contradiction between his stated wishes and our extrapolation of his volition.
In most situations I would agree that it’s not cool to push.
As the OP said, Eliezer hasn't been subpoenaed. The questions here are merely a stimulus to which he can respond with whichever insights or signals he desires to convey. For what little it is worth, my 1.58 bits is 'up'.
(At least, if it is granted that a given person has read the post and that their voting decision is made actively, then I think I would count it as 1.58 bits. It's a little blurry.)
It depends on the probability distribution of comments.
Good point: the probability distribution of comments, relative to those doing the evaluation.
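(A minimal sketch of where the "1.58 bits" figure comes from, assuming it refers to the Shannon entropy of a three-way vote, up, down, or abstain, under a uniform distribution; the function name vote_entropy is illustrative, not anyone's actual calculation. As the comment above notes, a skewed vote distribution carries less information per vote.)

```python
import math

def vote_entropy(probabilities):
    """Shannon entropy, in bits, of a distribution over voting options."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Three equally likely options (up, down, abstain): log2(3) ~= 1.58 bits per vote.
print(vote_entropy([1/3, 1/3, 1/3]))  # ~1.585

# A skewed distribution (e.g. most readers abstain) conveys less per vote.
print(vote_entropy([0.1, 0.1, 0.8]))  # ~0.922
```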
IIRC* the agreement was to not disclose the contents of a contest without the agreement of both participants. My hope was not that Eliezer might break his word, but that evidence of continued interest in the matter might persuade him to obtain permission from at least one of his former opponents. (And to agree himself, as the case may be.)
(*: and my question was based on that supposition)