Ensuring that is part of being a rationalist; if EY, Roko, and Vlad (apparently Alicorn as well?) were bad at error-checking and Vaniver was good at it, that would be sufficient to say that Vaniver is a better rationalist than E R V (A?) put together.
Certainly. However, error-checking oneself is notoriously less effective than having outsiders do so.
“For the computer security community, the moral is obvious: if you are designing a system whose functions include providing evidence, it had better be able to withstand hostile review.” - Ross Anderson, RISKS Digest vol 18 no 25
Until a clever new thing has had decent outside review, it just doesn’t count as knowledge yet.
That Eliezer wrote the Sequences and appears to think according to their rules and is aware of Löb’s Theorem is strong evidence that he is good at error-checking himself.
That’s pretty much a circular argument. How’s the third-party verifiable evidence look?
I dunno. Do the Sequences smell like bullshit to you?
edit: this is needlessly antagonistic. Sorry.
Mostly not—but then I am a human full of cognitive biases. Has anyone else in the field paid them any attention? Do they have any third-party notice at all? We’re talking here about somewhere north of a million words of closely-reasoned philosophy with direct relevance to that field’s big questions, for example. It’s quite plausible that it could be good and have no notice, because there’s not that much attention to go around; but if you want me to assume it’s as good as it would be with decent third-party tyre-kicking, I think I can reasonably ask for more than “the guy that wrote it and the people working at the institute he founded agree, and hey, do they look good to you?” That’s really not much of an argument in favour.
Put it this way: I’d be foolish to accept cryptography with that little outside testing as good, and here you’re talking about operating system software for the human mind. It needs more than “the guy who wrote it and the people who work for him think it’s good” for me to assume that.
Fair enough. It is slightly more than Vaniver has going in their favour, to return to my attempt to balance their rationality against each other.
Upvoted to zero because of the edit.
I haven’t read fluffy (I have named it fluffy), but I’d guess it’s an equivalent of a virus in a monoculture: every mode of thought has its blind spots, and so to trick respectable people on LW, you only need an idea that sits in the right blind spots. No need for general properties like “only infectious to stupid people.”
Alicorn throws a bit of a wrench in this, as I don’t think she shares as many blind spots with the others you mention, but it’s still entirely possible. This also explains the apparent resistance of outsiders, without need for Eliezer to be lying when he says he thinks fluffy was wrong.
Could also be that outsiders are resistant because they have blind spots where the idea is infectious, and respectable people on LW are respected because they do not have the blind spots—and so are infected.
I think these two views are actually the same, stated as inverses of each other. The term “blind spot” is problematic.
I think the term “blind spot” is accurate, unless (and I doubt it) Eliezer was lying when he later said fluffy was wrong. What fits the bill isn’t a correct scary idea, but merely a scary idea that fits into what the reader already thinks.
Maybe fluffy is a correct scary idea, and your allocation of blind spots (or discouraging of the use of the term) is correct, but secondhand evidence points towards fluffy being incorrect but scary to some people.
I’m curious about why you think this.
Honestly? Doesn’t like to argue about quantum mechanics, that I’ve seen :D Your posts seem to be about noticing where things fit into narratives, or introspection, or things other than esoteric decision theory speculations. If I had to come up with an idea that would trick Eliezer and Vladimir N into thinking it was dangerous, it would probably be barely plausible decision theory with a dash of many worlds.
I was also surprised by your reaction to the argument. In my case this was due to the opinions you’ve expressed on normative ethics.
How are my ethical beliefs related?
Answered by PM