MIRI leadership is famously, and I think badly, wrong about how sure they should be. That’s my concern. It’s obvious to any rationalist that it isn’t rational to put >99% credence on something this theoretical. That’s almost certainly epistemic hubris, if not outright foolishness.
I have immense respect for EY’s intellect. He seems to be the smartest human I’ve engaged with enough to judge their intellect. On this point, though, he is wrong, obviously or at least seemingly so. I have personally spent at least a hundred hours following his specific logic (and a lot more on the background knowledge it’s based on), and I’m quite sure he’s overestimating his certainty. His discussions with other experts always end up falling back on differing intuitions. He got there first, but a bunch of us have now put real time into following and extending his logic.
I have a whole theory about how he wound up so wrong, involving massive frustration and underappreciating how biased people are toward short-term thinking and motivated reasoning, but that’s beside the point.
Whether he’s right doesn’t really matter; what matters is that >99.9% doom sounds crazy, and it takes a genuinely complex argument to show it even could be right, let alone that it actually is.
Since it sounds crazy, leaning on that point is the single best way to harm MIRI’s credibility. And because MIRI is one of the most publicly visible advocates of AGI x-risk caution (and, it seems, planning to become even higher profile), that could make the whole cause sound less credible, maybe by a lot.
Please, please don’t do it or encourage others to do it.
I’m actually starting to worry that MIRI could make us worse off if they insist on shouting loudly and leaning on our least credible point. Public discourse isn’t rational, so focusing on the worst point could make the vibes-based public discussion go against what is otherwise a very simple and sane viewpoint: don’t make a smarter species unless you’re pretty sure it won’t turn on you.
Hopefully I needn’t worry, because MIRI has engaged communication experts, and they will resist just adopting EY’s unreasonable doom estimate and bad comms strategy.
To your specific point: “we’re really not sure” is not a bad strategy if “we” means humanity as a whole (if by bad you mean dishonest).
If by bad you mean ineffective: do you seriously think people wouldn’t object to the push for AGI if they thought we were totally unsure?
“One guy who’s thought about this for a long time and some other people he recruited think it’s definitely going to fail” really seems like a way worse argument than “expert opinion is utterly split, so any fool can see we collectively are completely unsure it’s safe”.
By bad I mean dishonest, and by ‘we’ I mean the speaker (in this case, MIRI).
I take myself to have two central claims across this thread:
1. Your initial comment was strawmanning the ‘if we build [ASI], we all die’ position.
2. MIRI is likely not a natural fit to consign itself to service as the neutral mouthpiece of scientific consensus.
I do not see where your most recent comment has any surface area with either of these claims.
I do want to offer some reassurance, though:
I do not take “One guy who’s thought about this for a long time and some other people he recruited think it’s definitely going to fail” to be descriptive of the MIRI comms strategy.
I think we’re talking past each other, so we’d better park it and come back to the topic later and more carefully.
I do feel like you’re misrepresenting my position, so I am going to respond and then quit there. You’re welcome to respond; I’ll try to resist carrying on, and move on to more productive things. I apologize for my somewhat argumentative tone. These are things I feel strongly about, since I think MIRI’s communication might matter quite a lot, but that’s not a good reason to get argumentative.
Strawmanning: I’m afraid you’re right that I’m probably exaggerating MIRI’s claims. I don’t think it’s quite a strawman; “if we build it we all die” is very much the tone I get from MIRI comms on LW and X (mostly EY), but I do note that I haven’t seen him use 99.9%+ in some time, so maybe he’s already doing some of what I suggest. And I haven’t surveyed all of MIRI’s official comms. But what we’re discussing is a change in comms strategy.
I have gotten more strident in repeated attempts to make my central point clearer. That’s my fault; you weren’t addressing my actual concern, so I kept trying to highlight it. I’m still not sure you understand my main point, but that’s fine; I can try to say it better in future iterations.
This is the first place I can see you suggesting that I’m exaggerating MIRI’s tone, so if it’s your central concern, that’s weird. But again, it’s a valid complaint; I won’t make that characterization in more public places, lest it hurt MIRI’s credibility.
MIRI claiming to accurately represent scientific consensus was never my suggestion; I don’t know where you got that. I clarified that I expect zero additional effort or strong claims, just “different experts believe a lot of different things”.
Honesty: I tried to specify from the first that I’m not suggesting dishonesty by any normal standard. Accurately reporting a (vague) range of others’ opinions is just as honest as reporting your own opinion. Not saying the least convincing point the loudest might count as dishonesty by radical-honesty standards, but I thought rationalists had more or less agreed that those aren’t a reasonable target. That standard of honesty would kind of conflict with having a “comms strategy” at all.