If I take your point very generally, I feel like you’re re-stating Eliezer’s “Archimedes’s Chronophone” in a cool, different way. It’s saying “To discover new value, you don’t just need to do the same thing more rigorously—you need to be able to get outside your incentives and patterns of thought and have ideas that cause you to pull the rope sideways.”
If I take your point very literally, it seems false. For example, I think that trying to practically understand basic economics / game theory very rigorously gets you really far (cf. Eliezer’s post “Moloch’s Toolbox”, Bryan Caplan’s work on education and voter irrationality, etc.). Trying to do statistics on the QALY output of different health interventions in the developing world gets you important insights (cf. Toby Ord’s “The Moral Imperative toward Cost-Effectiveness in Global Health”).
(This comment is responding to your points about theory, not about the specific cognitive models being used within EA, which is a separate story.)
This didn’t seem like it really addressed the point Qiaochu was making.
My guess is Qiaochu agrees that “economics and game theory can get you really far.” But would it specifically lead to EAs in the described dystopia discovering the value of sex and romance, and valuing them appropriately? I’m not sure if you’re claiming it would, or claiming that it doesn’t especially matter, or something else.
I’m not sure you can pass my ITT; can you try doing that first?
Why do you think QALYs matter even approximately?
Brief response to the second question, will see if I can come back and attempt an ITT later.
QALYs don’t matter much (I think), but learning that they’re power-law distributed matters a lot. It’s a datapoint in favour of the general point, akin to Paul Graham’s essay “Black Swan Farming”—it tells you that value is power-law distributed and that figuring out which thing is most valuable is super-duper important, relative to how you might otherwise have been modelling the world.
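For concreteness, here is a minimal sketch of that claim (my own illustration, not something from the thread; the Pareto shape parameter and sample size are arbitrary assumptions): if intervention values are heavy-tailed, the best handful hold a disproportionate share of the total, so choosing the right one dominates.

```python
# Hedged sketch: hypothetical "QALYs per dollar" figures drawn from a
# Pareto distribution (a simple power law). Shape 1.2 and n = 1,000 are
# arbitrary assumptions, not empirical numbers.
import numpy as np

rng = np.random.default_rng(0)
values = rng.pareto(1.2, size=1_000)

top_10_share = np.sort(values)[-10:].sum() / values.sum()
print(f"Top 10 of 1,000 interventions hold {top_10_share:.0%} of total value")
# Typically a disproportionately large share; with a thin-tailed
# distribution the top 10 would hold only a couple of percent.
```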
Okay, let me rephrase my ITT request. What I meant to say is that I’m confused: your chronophone analogy is pretty close to the point I’m trying to make, but I don’t understand what you have in mind for my point taken “very literally”, because I don’t see how your second paragraph is a response to what I wrote.
I’m pretty wary of taking this power-law methodology seriously. If you think importance is power-law distributed and your estimates of what’s important are multiplicatively noisy, then the top of your list of things sorted by importance will mostly be dominated by noise. This is basically my reaction to the SSC post on EAG 2017; I think basically every weird position he describes in that post is a philosophical mistake (e.g. the wild animal suffering people seem to be using a form of total utilitarianism, which I reject completely).
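That worry can be simulated directly (a hedged sketch of my own, not from the thread; the distributions and noise level are made-up assumptions): rank heavy-tailed true values by multiplicatively noisy estimates and look at what lands on top.

```python
# Hedged sketch of the "top of the list is dominated by noise" effect:
# power-law true importance, multiplicative (log-normal) estimation noise,
# then sort by the noisy estimates. All parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

true_value = rng.pareto(3.0, size=n)                 # power-law true importance
noise = rng.lognormal(mean=0.0, sigma=2.0, size=n)   # multiplicative noise
estimate = true_value * noise

top_est = np.argsort(estimate)[-10:]                 # indices you would fund
top_true = np.argsort(true_value)[-10:]              # indices you would want to fund

overlap = len(set(top_est) & set(top_true))
print(f"Overlap of estimated top 10 with true top 10: {overlap}")
print(f"Mean estimate of the estimated top 10:   {estimate[top_est].mean():.1f}")
print(f"Mean true value of the estimated top 10: {true_value[top_est].mean():.1f}")
# With noise this heavy relative to the spread of true values, the overlap
# is usually tiny and the head of the estimate-sorted list is mostly lucky
# noise draws; how bad this gets depends on the noise-to-signal ratio.
```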
First off, I’d like to apologise: I’ve only now read the OP, and I was talking past you. I still think we have some genuine disagreements, however, so I’ll try to clarify those.
I was surprised by how much I liked the post. It separates the following approaches for improving the state of the world:
1. Further our understanding of what matters
2. Improve governance
3. Improve prediction-making & foresight
4. Reduce existential risk
5. Increase the number of well-intentioned, highly capable people
My ordering of tractability on these is roughly as follows: 5 > 4 > 3 > 2 > 1. There is then a question of importance and neglectedness. I basically think they’re all fairly neglected, and I don’t have strong opinions on scope except that, probably, number 4 is slightly higher than the rest.
Understanding what matters seems really hard. For the other problems (2-5) I can see strong feedback loops based in math for people to learn (e.g. in governance, people can study microeconomic models, play with them, make predictions and learn). I don’t see this for problem 1.
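A toy instance of the kind of feedback loop meant here (a hedged sketch of my own; the linear demand and supply curves and the $6 tax are made-up numbers): state a prediction about a simple microeconomic model, solve the model, and see where the prediction was wrong.

```python
# Toy microeconomic model: linear demand Qd = 100 - 2p and linear supply
# Qs = 10 + (p - tax), where consumers pay p and producers receive p - tax.
# All numbers are arbitrary assumptions for illustration.

def equilibrium(tax: float = 0.0) -> tuple[float, float]:
    """Solve 100 - 2p = 10 + (p - tax) for the consumer price and quantity."""
    p = (90 + tax) / 3          # algebraic solution of the linear system
    q = 100 - 2 * p
    return p, q

p0, q0 = equilibrium()
p1, q1 = equilibrium(tax=6.0)

# Naive prediction: "a $6 tax raises the consumer price by $6."
# The model disagrees: only part of the tax is passed through.
print(f"No tax: price={p0:.2f}, quantity={q0:.2f}")
print(f"$6 tax: price={p1:.2f}, quantity={q1:.2f} (pass-through = {p1 - p0:.2f})")
```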
Sure, for all 1-5 there will be steps where you have to step sideways—notice a key variable that everyone has been avoiding even thinking about, due to various incentives on what thoughts you can think—but there’s more scope for practice, and for a lot of good work to be done that isn’t 80% deep philosophical insight / deep rationality abilities.
Luke Muehlhauser tried really hard and found the tractability surprisingly low (1, 2).
There are worlds consistent with what I’ve said above, where I would nonetheless want to devote significant resources to 1, if (say) we were making plans on a 100-200 year horizon. However, I believe that we’re in a world where we’re very soon going to have to score perfectly on number 4, and furthermore that scoring perfectly on number 4 will cause the other problems to get much easier—including problem number 1.
Summary: As I hadn’t read the OP, I read your comment as claiming that your approach to 1 was the only way to do good altruistic work. I then responded with reasons to think that other, more technical, approaches were just as good (especially for the other problems). I now pivot my response to reasons to think that working on things other than 1 is more important, which I think we may disagree on.
I think I basically agree with all of this, except that maybe I think 1 is somewhat more tractable than you do. What I wrote was mostly a response to the OP’s listing of organizations working on 1, and my sense that the OP thought that these organizations were / are making positive progress, which is far from clear to me.
I can’t tell if it’s still necessary, but I wanted to try anyway. Here’s my ITT for your position. This story isn’t literally what I think you think, but I believe it’s an accurate analogy.
---
You have recently done something new, like tried LSD, or gotten really addicted to the high of exercise, or just moved to a new place and been genuinely happy and excited every minute for a whole week.
And you learned something—you didn’t know life could feel this good. It’s a genuine insight and feels important to you, and does change how you will make life plans forevermore.
You talk to me. I talk about the importance of figuring out what really matters in life; I’ve just spent many weeks reading textbooks on the subject and philosophical treatises on experience. And now you’re stuck trying to communicate to me the realisation that life can be really good—you think that my lacking it is causing me to make mistakes regarding all sorts of things I discuss, from population ethics to the moral value of sleep.
You know that you don’t know everything there is to know about the good life, but you’re pretty sure that whatever other weird realisations are out there, I’m sure not gonna find them doing what I’m doing right now.
I’m basically happy with this analogy as far as it goes, although it doesn’t capture the fact that part of the reason it’s hard for me to communicate the thing is cultural blindspots; the dystopia analogy captures this really well, and that’s why I like it so much.
Regarding power laws, I am attempting to make a strong claim about how reality works—reality, not methodology. It is the case that the things we value are power law distributed—whether it’s health interventions, successful startup ideas, or altruistic causes, it turns out that selecting the right one is where most of the variance is.
As a result, one’s ability to do good will indeed be very noisy—this is why many good funders take hits-based approaches. For example, Peter Thiel is known for asking interesting people for their three weirdest ideas, and funding at least one of them. He funded MIRI early, though I expect he was quite unsure of it at the time, so I consider him to have picked up some strong hits.
That’s also my feeling wrt the EA post on SSC. I’m generally not happy with the low-variance approaches taken within EA and feel sad at how few new ideas are being tested, but I think that arguing for fewer orgs doing weird things wrt figuring out what matters is pushing the number in the wrong direction.
Sure, I’m basically happy with this modulo taking “power law” with the appropriate grains of salt (e.g. replacing with log normal or some other heavy tailed distribution as appropriate).