From the posts they make, everyone on this site seems to me to be sufficiently intelligent as to make “selling snake oil” impossible, in a cut-and-dried case like the AI box.
So what do you think even happened, anyway, if you think the obvious explanation is impossible?
Originally, you were hypothesising that the problem with persuading the others would be the possibility that Yudkowsky lied about AI-box powers. I pointed out the possibility that this experiment is far less profound than you think it is. (Though frankly I do not know why you think it is so profound.)
Ah, sorry. This brand of impossible.
Whatever the brand, any “impossibilities” that happen should lower your confidence in the reasoning that deemed them “impossibilities” in the first place. I don’t think IQ is so strongly protective against deception, for example, and I do not think that you can assess someone based on how their postings look to you with sufficient reliability as to overcome Gaussian priors very far from the mean.
edit: an example. I would deem it quite unlikely that Yudkowsky could, say, score highly in a programming contest with competent participants, or on any other conventional, validated, reliable metric of technical expertise and ability, under good contest rules (i.e. excluding the possibility of external assistance). So if he did something like that, I’d be quite surprised, and I would lower my confidence in whatever models deemed it impossible; good old Bayes. I’m far more confident in the validity of those conventional metrics (and in the lack of alternate modes of passing, such as persuasion) than in my own assessment, so my assessment would change the most. Meanwhile, when it’s some unconventional game, well, even if I thought the game was difficult, I’d be much less confident in the reasoning “it looks hard, so it must be hard” than I am confident that the prior on exceptional performance is low, so it is the “it must be hard” reasoning that should give way.
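To put rough numbers on that style of update (the figures below are invented purely for illustration): if you start out fairly sure a feat is genuinely hard, and genuinely hard feats are rarely pulled off while merely hard-looking ones often are, then a single success moves most of the probability mass away from “genuinely hard”.

```python
# Illustrative sketch only: invented numbers for the Bayes update described above.
# Two hypotheses: the feat is genuinely hard vs. it only looks hard.

def posterior_hard(prior_hard, p_success_if_hard, p_success_if_easy):
    """P(genuinely hard | the feat was pulled off), by Bayes' rule."""
    p_success = (prior_hard * p_success_if_hard
                 + (1 - prior_hard) * p_success_if_easy)
    return prior_hard * p_success_if_hard / p_success

prior_hard = 0.95          # assumed prior: "it looks hard, so it must be hard"
p_success_if_hard = 0.02   # assumed: genuinely hard feats rarely get done
p_success_if_easy = 0.60   # assumed: merely hard-looking feats often get done

print(posterior_hard(prior_hard, p_success_if_hard, p_success_if_easy))
# ~0.39: most of the confidence in "it really is hard" is gone after one success.
```

With different invented numbers the size of the shift changes, but the direction is the same: the observation counts against whichever model called the event impossible.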
Further, in this case the whole purpose of the experiment was to demonstrate that an AI could “take over a gatekeeper’s mind through a text channel” (something previously deemed “impossible”). As far as that goes, it was, in my view, successful.
It’s clearly possible for some values of “gatekeeper”, since some people fall for 419 scams. The test is a bit meaningless without information about the gatekeepers.
Still have no idea what you’re talking about. What I originally said was: “the people who talk to Yudkowsky are intelligent” does not follow from “Yudkowsky is not a crank”; I independently judge those people to be intelligent.
“Impossible,” here, is used in the sense that “I have no idea where to start thinking about where to start thinking about how to do this.” It is clearly not actually impossible because it’s been done, twice.
And point about the contest.
I thought your “impossible” at least implied “improbable” under some sort of model.
edit: and as for having no idea, you just need to know the shared religious-ish context, which these folks generally keep hidden from a causal observer.
“Impossible” is being used as a statement of difficulty. Someone who has “done the impossible” has obviously not actually done something impossible, merely done something that I have no idea where to start trying to do.
Seeing that “it is possible to do” doesn’t seem like it would have much effect on my assessment of how difficult it is, after the first time. It certainly doesn’t have much effect on “It is very-very-difficult-impossible for linkhyrule5 to do such a thing.”
What?
First, I’m pretty sure you mean “casual.” Second, I’m hardly a casual observer, though I haven’t read everything either. Third, most religions don’t let their leading figures (or much of anyone, really) change their minds on important things...
I thought you wanted to persuade others.
Yes, but I don’t see why this is relevant.