What I should have written is: “it’s not clear that additional expository work, of the kind we can easily purchase, is of high value...” (I’ve changed the text now.) What I had in mind, there, is the very basic stuff that is relatively easy to purchase because 20+ people can write it, and some of them are available and willing to help us out on the cheap. But like I say in the post, I’m not sure that additional exposition on the super-basics is of high value after the stuff we’ve already done and Bostrom’s book. (BTW, we’ve got another super-basics ebook coming out soon that we paid Stuart Armstrong to write, but IIRC that was written before the strategy post.)
I don’t remember this far back, but my guess is that I left out the clarification so as to avoid “death by a thousand qualifications and clarifications.” But looking back, it does seem like a clarification that should have gone in anyway, so I’m sorry about any confusion caused by its absence.
See, explaining is hard. :)
My point is that publicly available valuable strategic research on AI risk reduction isn’t that difficult to purchase, and that’s what we need.
Turning completed but publicly unavailable strategic research into publicly available strategic research is very difficult to purchase. I tried many, many times. Paying Eliezer and Carl more would not cause them to write things up any faster. Paying people to talk to them and write up notes mostly didn’t work, unless the person writing it up was me, but I was busy running the organization. I think there are people who could schedule a 1-hour chat with Eliezer or Carl, take a bunch of notes, and then write up something good, but those people are rarer than you might expect, and so skilled that they’re already busy doing other high-value work, like Nick Beckstead at FHI.
In any case, “turning completed but publicly unavailable strategic research into publicly available strategic research” is what I was calling “expository research” in that post, not what I was calling “strategic research.”
I should also remind everyone reading this that it’s not as if Eliezer & Carl have been sitting around doing nothing instead of writing up their strategy knowledge. First, they’ve done some strategy writing. Second, they’ve been doing other high-value work. Right now we’re talking about, and elevating the apparent importance of, strategy exposition. But if we were having a conversation about the importance of community-building and fundraising, it might feel obvious that it was better for Eliezer to spend some time this summer writing more HPMoR rather than writing up strategy exposition. “HPMoR” is now the single most common answer I get when I ask newly useful people (e.g. donors, workshop participants) how they found MIRI and came to care about its work.
Note that you’re wrongly discouraging people from doing strategy research by saying that they need to catch up to insiders’ unpublished knowledge when they really don’t.
What makes you say that? I believe you can reinvent much of what Eliezer and Carl and Bostrom and a few others already know but haven’t written down. Not sure that’s true for almost anyone else.
Still, you’re right that I don’t want to discourage people from doing strategy work. There are places where people can contribute to the cutting edge of our strategic understanding without first rediscovering what the experts already know (see one example below).
The fact that you’ve tried many different things and failed to get them to write stuff down faster argues even more strongly for this, since it means we can’t expect them to write much down in the foreseeable future.
I’m not so sure. I mean, the work is getting out there, in MIRI blog posts, in stuff that FHI is writing, and so on; it’s just not coming out as quickly as any of us would like. There’s enough out there already that people could contribute to the cutting edge of our understanding if they wanted to, and had the ability and resources to do so. E.g. Eliezer’s IEM write-up and Katja’s tech report describe in great detail what data could be collected and organized to improve our understanding of IEM, and Katja has more notes she could send along to anyone who wanted to do this and asked for her advice.
I don’t see a compelling argument that getting academic traction for FAI math research has a net positive impact of similar magnitude to getting academic traction for strategic research, so the fact that it’s easier to do isn’t a strong reason to prefer it over strategic research.
Right, this seems to go back to that other disagreement that we’re meeting about when you arrive.
I read the idea as being that people rediscovering and writing up stuff that goes 5% towards what E/C/N have already figured out but haven’t written down would be a net positive, and that it’s a bad idea to discourage this. It seems like there’s something to that, to the degree that getting the existing stuff written up isn’t an available option: increasing the amount of publicly available strategic research could be useful even if the vast majority of it doesn’t advance the state of the art, if it leads to many more people vetting it in the long run. I do think there is probably a tradeoff, in that Eliezer &c might not be motivated to comment on other people’s posts all that much, making it difficult to tell what is the current state of the art and which ideas have straightforward counter-arguments the poster just hasn’t figured out. I don’t know how to deal with that, but encouraging discussion that is high quality compared to currently publicly available strategy work still seems quite likely to be a net positive?
One way to accelerate the production of strategy exposition is to lower one’s standards. It’s much easier to sketch one’s quick thoughts on an issue than it is to write a well-organized, clearly expressed, well-referenced, reader-tested analysis (like When Will AI Be Created?), and a quick sketch is often enough to provoke some productive debate (at least on Less Wrong). See e.g. Reply to Holden on Tool AI and Do Earths with slower economic growth have a better chance at FAI?.
So, in the next few days I’ll post my “quick and dirty” thoughts on one strategic issue (IA and FAI) to LW Discussion, and see what comes of it.
Glad to hear that & looking forward to seeing how it works! I very much understand that one might be concerned about posting “quick and dirty” thoughts (I find it so very difficult to lower my own standards even when it’s obviously blocking me from getting stuff done), but there seems to be little cost to trying it with a Discussion post and seeing how it goes. Yay value of information! :-)
Ah. Yes.
The experiment seems to have failed.
Drats. But also, yay, information! Thanks for trying this!
ETA: Worth noting that I found that post useful, though.