I think that’s a worthy ideal to strive for, and the bottleneck is simply bringing together enough different people doing intellectual work on the same topic. Then the niceties of academic freedom will happen mostly by themselves. But the premise is much harder than it seems.
LW approached that ideal for a short while, when Eliezer’s writings created a diverse flow of people and the mention of Newcomb’s problem channelled some of them into decision theory. It was a fun time and I’m happy to have been part of it. Then Eliezer stopped posting fun stuff for a wide audience, the flow of people started drying up, new ideas became scarce due to lack of outsiders, and the work became more intensely mathematical and shrank to a small core group (MIRI workshops and agentfoundations.org). Now it’s mostly met with crickets, and the opportunity for outsiders to make LWish philosophical progress and be rewarded with attention is pretty much gone, even though there’s plenty of low hanging fruit. I’m sorry to say I also contributed to this “professionalization”, which might have been a mistake in retrospect.
A couple days ago, after two years of silence, I wrote a short LW post about probabilities to test the waters. It got a very good reception, showing that people are still interested in this stuff. But to jumpstart such an effort properly, we need to sell amateurs on some way to do important yet accessible intellectual work. I don’t know how to do that.
Scott Garrabrant and I would be happy to see more engagement with the content on Agent Foundations (IAF). I guess you’re right that the math is a barrier. My own recent experiment of linking to Two Major Obstacles for Logical Inductor Decision Theory on IAF was much less successful than your post about betting, but I think that there’s something inessential about the inaccessibility.
In that post, for example, I think the math used is mostly within reach for a technical lay audience, except that an understanding of logical induction is assumed (though I may have missed some complexity in looking it over just now). Even for that, it should be possible to explain logical inductors briefly and accessibly enough for someone to understand a version of that post, though I’m not sure whether that has been done. People recommend this talk as the best existing introduction.
Note that I played a part in convincing MIRI to create IAF, and wrote the only comment on the IAF post you linked, so rest assured that I’m watching you folks :-) My thinking has changed over time, though, and has probably diverged from yours. I’ll lay it out here; hopefully it won’t sound too harsh.
First of all, if your goal is explaining math using simpler math, I think there’s a better way to do it. In a good math explanation, you formulate an interesting problem at level n whose solution requires level n+1. (Ideally n should be as low as possible.) In a bad math explanation, you assume the reader understands level n, then write out the basic definitions of level n+1 and formulate a problem using those. That loses the reader, unless they are already interested in level n+1.
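A concrete toy instance of that pattern (my example, not one from the post): ask a reader who only knows arithmetic to add up the numbers from 1 to 100. Grinding through the sums is painful, but pairing the ends gives

$$\sum_{k=1}^{100} k = \frac{100 \cdot 101}{2} = 5050,$$

and now the reader has a live reason to care about the general formula $\frac{n(n+1)}{2}$, their first step into algebra. The problem sits at level n; the satisfying solution pulls them up to level n+1.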
But that’s still underestimating the problem by a couple orders of magnitude. To jumpstart engagement, you need something as powerful as this old post by Eliezer. That’s a much more complicated beast. The technical content is pretty much readable to schoolchildren, yet somehow readers are convinced that something magical is going on and they can contribute, not just read and learn. Coming back to that post now, I’m still in awe of how the little gears work, from the opening sentence to the “win” mantra to the hint that he knows the solution but ain’t telling. It hits a tiny target in manipulation-space that people don’t see clearly even now, after living for a decade inside the research program that it created.
Apart from finding the right problem and distilling it in the right manner, I think the next hardest part is plain old writing style. For example, Eliezer uses lots of poetic language and sounds slightly overconfident, staying mostly in control but leaving dozens of openings for readers to react. But you can’t reuse his style today, the audience has changed and you’ll sound phony. You need to be in tune with readers in your own way. If I knew how to do it, I’d be doing it already. These comments of mine are more like meta-manipulation aimed at people like you, so I can avoid learning to write :-)
Note that I … wrote the only comment on the IAF post you linked
Yes, I replied to it :)
Unfortunately, I don’t expect to have more Eliezer-level explanations of these specific lines of work any time soon. Eliezer has a fairly large amount of content on Arbital that hasn’t seen LW levels of engagement either, though I know some people who are reading it and benefiting from it. I’m not sure how LW 2.0 is coming along, but it might be good to have a subreddit for content similar to your recent post on betting. There is an audience for it, as that post demonstrated.
I think Eliezer’s Arbital stuff would’ve been popular in blog form. (Converting it to a blog now won’t work, the intrigue is gone.) The sequences had lots of similar quality material, like “Created already in motion”. I don’t like it much because it’s so far out, but it gets readers.
The technical content is pretty much readable to schoolchildren, yet somehow readers are convinced that something magical is going on and they can contribute, not just read and learn.
I don’t think that’s a matter of writing style. It’s a matter of whether the prospective “research area” is simple enough that all of its general prerequisites can be stated in a popular blogpost, and otherwise be assumed to be known to the reader. (For example, many OvercomingBias/LessWrong readers have enough of a background in rational-action theory to know what “precommitment” and “dynamic inconsistency” mean, and these notions are indeed necessary for a proper understanding of EY’s point.) At one point, that was true of the general area of timeless/updateless decision theory. It seems to be less true of the logical induction problem.
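For readers who lack that background, here is a minimal sketch of dynamic inconsistency (my own toy numbers, not anything from EY’s post), using hyperbolic discounting:

```python
# Toy illustration of dynamic inconsistency (illustrative numbers only).
# Under hyperbolic discounting, the agent's ranking of a smaller-sooner
# reward versus a larger-later one flips as the rewards draw near -- the
# failure that "precommitment" is meant to prevent.

def present_value(reward, delay, k=1.0):
    """Hyperbolically discounted value of `reward` arriving after `delay`."""
    return reward / (1.0 + k * delay)

def smaller_sooner(dist):
    return present_value(5, dist)       # 5 utils, available `dist` steps away

def larger_later(dist):
    return present_value(10, dist + 3)  # 10 utils, 3 steps after that

# Planning far ahead, the agent prefers to wait for the larger reward...
assert larger_later(10) > smaller_sooner(10)   # ~0.71 > ~0.45

# ...but at the moment of choice the preference reverses, which is why the
# earlier self would want to precommit (bind its future self to the plan).
assert smaller_sooner(0) > larger_later(0)     # 5.0 > 2.5
```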
I think logical induction could’ve been popularized with just as much effort (that is, a lot). For example, the second problem from the post linked by endoself was discussed by Wei and me in 2012, in threads that drew over 40 comments each. If we’d been better at mass appeal, instead of coasting on the audience attracted by Eliezer, we could’ve had even more engagement. (Note the comment from thescoundrel in the second link; that’s the kind of good idea out of nowhere that mass appeal is all about.)
Does popularization produce the goods? Lots of people who aren’t currently in our community, and who don’t have day jobs, have the background and skill to contribute to this problem.
Choosing the right problem is certainly important, but I don’t think it’s the bottleneck. There’s plenty of low hanging fruit. Knowing how to play your audience seems like more of a bottleneck, and it takes a lot of effort to learn.
I lack motivation myself. I’m interested in AI risk, but I think exploring abstract decision theories that ignore the cost of the computation needed to make the decision is like trying to build a vehicle while ignoring drag entirely.
I may well be wrong so I still skim the agent foundations stuff, but I am unconvinced of its practicality. So I’m unlikely to be commenting on it or participating in that.
Maybe you’ve heard this before, but the usual story is that the goal is to clarify conceptual questions that exist in both abstract and more practical settings. We are moving towards considering such things, though: the point of the post I linked was to reexamine old philosophical questions using logical inductors, which are computable.
Further, my intuition from studying logical induction is that practical systems will be “close enough” to satisfying the logical induction criterion that many things will carry over (much of this is just intuition one could also get from online learning theory). E.g., in the logical induction decision theory post, I expect the individual points made using logical inductors to mostly or entirely apply to practical systems, and you can use the fact that logical inductors are well-defined to test further ideas building on them.
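To make the online-learning analogy concrete, here is a rough sketch (an analogy of mine, not the actual logical inductor construction). The logical induction criterion says no efficiently computable trader can exploit the market; the no-regret guarantee below has the same flavor:

```python
import math

# Toy analogy, not the actual logical inductor construction: multiplicative
# weights aggregates "experts" (think: traders) so that the aggregator's
# total loss exceeds the best expert's by only O(sqrt(T log n)). No single
# expert can exploit it by much, which is the flavor of the LI criterion.

def multiplicative_weights(loss_rounds, eta=0.1):
    """loss_rounds: list of per-round lists, one loss in [0, 1] per expert."""
    n = len(loss_rounds[0])
    weights = [1.0] * n
    total_loss = 0.0
    for losses in loss_rounds:
        z = sum(weights)
        probs = [w / z for w in weights]      # back experts proportionally
        total_loss += sum(p * l for p, l in zip(probs, losses))
        # Downweight each expert according to how badly it did this round.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss
```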
When computations have costs, I think the nature of the problems changes drastically. I’ve argued here that we need to go up to meta-decision theories because of it.
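A minimal sketch of the trade-off I mean (toy numbers, hypothetical names):

```python
# Toy sketch (my framing): once deliberation itself costs utility, "should
# I think harder before acting?" becomes part of the decision problem, and
# the same question recurs one level up -- hence meta-decision theory.

def choose(quick_value, careful_value, thinking_cost):
    """Act on the quick estimate, or pay `thinking_cost` to deliberate?"""
    if careful_value - thinking_cost > quick_value:
        return ("deliberate", careful_value - thinking_cost)
    return ("act now", quick_value)

# If careful analysis gains 1 util but burns 2 utils of compute, skip it.
print(choose(quick_value=10, careful_value=11, thinking_cost=2))
# -> ('act now', 10)

# The regress: estimating `careful_value` accurately may itself require
# costly computation, which is what pushes the problem up a meta level.
```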
The idea of Solomonoff induction is not needed for building neural networks (nor is it useful for reasoning about them). So my pragmatic heart is cold towards a theory of logical induction as well.