Is it just me, or is the situation of Eliezer and Carl having thought of all of these things but never written them down anywhere crazy? If Eliezer and Carl are unwilling or unable to write down their ideas, then the rest of us have no choice but to try to do strategy work ourselves, even if we have to retrace a lot of their steps. The alternative is for us to go through the Singularity with only two or three people having thought deeply about how best to make it turn out well. It’s hard to imagine getting a good outcome while the world is simultaneously that crazy.
I guess my suggestion to you is that if you agree with me that we need a vibrant community of talented people studying and openly debating what is the best strategy for achieving a positive Singularity, then MIRI ought to be putting more effort into this goal. If it encounters problems like Eliezer and Carl being too slow to write down their ideas, then it should make a greater effort to solve such problems or to work around them, like encouraging independent outside work, holding workshops to attract more attention to strategic problems, or trying to convince specific individuals to turn their attention to strategy.
Since you’re in town in a few days, Eliezer suggested that we three chat about this when you get here. See you soon!
While I look forward to talking to Eliezer and you, I do have a concern, namely that I find Eliezer to be much better (either more natively talented, or more practiced, probably both) than I am at making arguments in real time, while I tend to be better able to hold my own in offline formats like email/blog discussions where I can take my time to figure out what points I want to make. So keep that in mind if the chat ends up being really one-sided.
Is it just me, or is the situation of Eliezer and Carl having thought of all of these things but never written them down anywhere crazy?
You’re preaching to the choir, here...
And you might be underestimating how many different things I tried in order to encourage various experts to write things up at a faster pace.
As for why we aren’t spending more resources on strategy work, I refer you to all my previous links and points about that in this thread. Perhaps there are specific parts of my case that you don’t find compelling?
But here’s what you said in the 2013 strategy post:

But it’s not clear that additional expository work is of high value after (1) the expository work MIRI and others have done so far, (2) Sotala & Yampolskiy’s forthcoming survey article on proposals for handling AI risk, and (3) Bostrom’s forthcoming book on machine superintelligence. Thus, we decided to not invest much in expository research in 2013.
Seems a bit inconsistent?
Perhaps there are specific parts of my case that you don’t find compelling?
Yes, again quoting from that strategy post:
Valuable strategic research on AI risk reduction is difficult to purchase. Very few people have the degree of domain knowledge and analytic ability to contribute. Moreover, it’s difficult for others to “catch up,” because most of the analysis that has been done hasn’t been written up clearly. (Bostrom’s book should help with that, though.)
My point is that publicly available valuable strategic research on AI risk reduction isn’t that difficult to purchase, and that’s what we need. All that information locked up in Eliezer and Carl’s heads isn’t doing much to help build a vibrant research community on Singularity strategies. (I would argue it’s not even very good for guiding MIRI’s own strategy, since it’s not available for external review/vetting.) To create new publicly available strategic research, we don’t need people to catch up to their level, just to catch up to whatever is publicly available now. (Note that you’re wrongly discouraging people from doing strategy research by saying that they need to catch up to insiders’ unpublished knowledge when they really don’t.) The fact that you’ve tried many different things and failed to get them to write stuff down faster argues more strongly for this, since it means we can’t expect them to write much stuff down in the foreseeable future.
Math research can get academic “traction” more easily than strategic research can.
I don’t see a compelling argument that getting academic traction for FAI math research is of net positive impact and of similar magnitude compared to getting academic traction for strategic research, so the fact that it’s easier to do isn’t a compelling argument for preferring it over strategic research.
Ah. Yes.

What I should have written is: “it’s not clear that additional expository work, of the kind we can easily purchase, is of high value...” (I’ve changed the text now.) What I had in mind, there, is the very basic stuff that is relatively easy to purchase because 20+ people can write it, and some of them are available and willing to help us out on the cheap. But like I say in the post, I’m not sure that additional exposition on the super-basics is of high value after the stuff we’ve already done and Bostrom’s book. (BTW, we’ve got another super-basics ebook coming out soon that we paid Stuart Armstrong to write, but IIRC that was written before the strategy post.)
I don’t remember this far back, but my guess is that I left out the clarification so as to avoid “death by a thousand qualifications and clarifications.” But looking back, it does seem like a clarification that should have gone in anyway, so I’m sorry about any confusion caused by its absence.
See, explaining is hard. :)
My point is that publicly available valuable strategic research on AI risk reduction isn’t that difficult to purchase, and that’s what we need
Turning completed but publicly unavailable strategic research into publicly available strategic research is very difficult to purchase. I tried, many, many times. Paying Eliezer and Carl more would not cause them to write things up any faster. Paying people to talk to them and write up notes mostly didn’t work, unless the person writing it up was me, but I was busy running the organization. I think there are people who could schedule a 1-hour chat with Eliezer or Carl, take a bunch of notes, and then write up something good, but those people are rarer than you might expect, and so skilled that they’re already busy doing other high-value work, like Nick Beckstead at FHI.
In any case, “turning completed but publicly unavailable strategic research into publicly available strategic research” is what I was calling “expository research” in that post, not what I was calling “strategic research.”
I should also remind everyone reading this that it’s not as if Eliezer & Carl have been sitting around doing nothing instead of writing up their strategy knowledge. First, they’ve done some strategy writing. Second, they’ve been doing other high-value work. Right now we’re talking about, and elevating the apparent importance of, strategy exposition. But if we were having a conversation about the importance of community-building and fundraising, it might feel obvious that it was better for Eliezer to spend some time this summer to write more HPMoR rather than write up strategy exposition. “HPMoR” is now the single most common answer I get when I ask newly useful people (e.g. donors, workshop participants) how they found MIRI and came to care about its work.
Note that you’re wrongly discouraging people from doing strategy research by saying that they need to catch up to insiders’ unpublished knowledge when they really don’t.
What makes you say that? I believe you can reinvent much of what Eliezer and Carl and Bostrom and a few others already know but haven’t written down. Not sure that’s true for almost everyone else.
Still, you’re right that I don’t want to discourage people from doing strategy work. There are places people can contribute to the cutting edge of our strategic understanding without needing to rediscover what the experts have already discovered (see one example below).
The fact that you’ve tried many different things and failed to get them to write stuff down faster argues more strongly for this, since it means we can’t expect them to write much stuff down in the foreseeable future.
I’m not so sure. I mean, the work is getting out there, in MIRI blog posts, in stuff that FHI is writing, etc.; it’s just not coming out as quickly as any of us would like. There’s enough out there already that people could contribute to the cutting edge of our understanding if they wanted to, and had the ability and resources to do so. E.g. Eliezer’s IEM write-up plus Katja’s tech report describe in great detail what data could be collected and organized to improve our understanding of IEM, and Katja has more notes she could send along to anyone who wanted to do this and asked for her advice.
I don’t see a compelling argument that getting academic traction for FAI math research is of net positive impact and of similar magnitude compared to getting academic traction for strategic research, so the fact that it’s easier to do isn’t a compelling argument for preferring it over strategic research.
Right, this seems to go back to that other disagreement that we’re meeting about when you arrive.
Note that you’re wrongly discouraging people from doing strategy research by saying that they need to catch up to insiders’ unpublished knowledge when they really don’t.
What makes you say that? I believe you can reinvent much of what Eliezer and Carl and Bostrom and a few others already know but haven’t written down. Not sure that’s true for almost everyone else.
I read the idea as being that people rediscovering and writing up stuff that goes 5% towards what E/C/N have already figured out but haven’t written down would be a net positive, and it’s a bad idea to discourage this. It seems like there’s something to that, to the degree that getting the existing stuff written up isn’t an available option—increasing the level of publicly available strategic research could be useful even if the vast majority of it doesn’t advance the state of the art, if it leads to many more people vetting it in the long run. I do think there is probably a tradeoff, where Eliezer &c might not be motivated to comment on other people’s posts all that much, making it difficult to see what is the current state of the art and what are ideas that the poster just hasn’t figured out the straightforward counter-arguments to. I don’t know how to deal with that, but encouraging discussion that is high quality compared to currently publicly available strategy work still seems quite likely to be a net positive?
One way to accelerate the production of strategy exposition is to lower one’s standards. It’s much easier to sketch one’s quick thoughts on an issue than it is to write a well-organized, clearly-expressed, well-referenced, reader-tested analysis (like When Will AI Be Created?), and this is often enough to provoke some productive debate (at least on Less Wrong). See e.g. Reply to Holden on Tool AI and Do Earths with slower economic growth have a better chance at FAI?.
So, in the next few days I’ll post my “quick and dirty” thoughts on one strategic issue (IA and FAI) to LW Discussion, and see what comes of it.
Glad to hear that & looking forward to seeing how it works! I very much understand that one might be concerned about posting “quick and dirty” thoughts (I find it so very difficult to lower my own standards even when it’s obviously blocking me from getting stuff done), but there seems to be little cost to trying it with a Discussion post and seeing how it goes—yay value of information! :-)
The experiment seems to have failed.
Drats. But also, yay, information! Thanks for trying this!
ETA: Worth noting that I found that post useful, though.