I guess with measure-based utilitarianism, it’s more about density of potentially valuable things within the universe than size. If our universe only supports 10^120 available operations, most of it (>99%) is going to be devoid of value under many ethically plausible ways of distributing caring-measure over the space-time regions within a universe.
I agree, but if you have a broad distribution over mixtures then you’ll be including many that don’t use literal locations and those will dominate for “sparse” universes.
I can easily see how you’d get a modest factor favoring other universes over astronomical waste in this universe, but as your measure/uncertainty gets broader (or you have a broader distribution over trading partners) the ratio seems to shrink towards 1, and I don’t feel like “orders of magnitude” is that plausible.
Some people seem to think there’s a good chance that our current level of philosophical understanding is enough to capture most of the value in this universe. (For example, if we implement a universe-wide simulation designed according to Eliezer’s Fun Theory, or if we just wipe out all suffering.) Others may think that we don’t currently have enough understanding to do that, but that we can reach that level of understanding “by default”. My argument here is that both of these seem less likely if the goal is instead to capture value from larger/richer universes, and that this gives more impetus to trying to improve our philosophical competence.
I agree this is a further argument for needing more philosophical competence. I personally feel like that position is already pretty solid but I acknowledge that it’s not a universal position even amongst EAs.
They’re not supposed to be related except in so far as they’re both arguments for wanting AI to be able to help humans correct their philosophical mistakes instead of just deferring to humans.
“Defer to humans” could mean many different things. This is an argument against AI forever deferring to humans in their current form / with their current knowledge. When I talk about “defer to humans” I’m usually talking about an AI deferring to humans who are explicitly allowed to deliberate/learn/self-modify if that’s what they choose to do (or, perhaps more importantly, to construct a new AI with greater philosophical competence and put it in charge).
I understand that some people might advocate for a stronger form of “defer to humans” and it’s fine to respond to them, but I wanted to make sure there wasn’t a misunderstanding. (Also, I don’t feel there are very many advocates for the stronger form; I think the bulk of the AI community imagines our AI deferring to us but us being free to design better AIs later.)
I agree, but if you have a broad distribution over mixtures then you’ll be including many that don’t use literal locations and those will dominate for “sparse” universes.
I currently think that each way of distributing caring-measure over a universe should be a separate member of moral parliament, given a weight equal to its ethical plausibility, instead of having just one member with some sort of universal distribution. So there ought to be a substantial coalition in one’s moral parliament that thinks controlling bigger/richer universes is potentially orders of magnitude more valuable.
Another intuition pump here is to consider a thought experiment where you think there’s 50⁄50 chance that our universe supports either 10^120 operations or 10^(10^120) operations (and controlling other universes isn’t possible). Isn’t there some large coalition of total utilitarians in your moral parliament who would be at least 100x happier to find out that the universe supports 10^(10^120) operations (and be willing to bet/trade accordingly)?
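To put rough numbers on this intuition pump, here is a minimal sketch assuming (purely for illustration) a two-member parliament with equal weights, a conservative 100x utility ratio for the total-utilitarian bloc, and simple weight-averaged aggregation; none of these numbers or the aggregation rule comes from the discussion itself.

```python
# Illustrative sketch of the 50/50 thought experiment; all weights, ratios,
# and the weight-averaged aggregation rule are assumptions for illustration.

parliament = [
    # (label, parliament weight, value of large universe / value of small universe)
    ("total utilitarian (cares about raw capacity)", 0.5, 100.0),
    ("capacity-indifferent member",                  0.5,   1.0),
]

# Normalize the small universe (10^120 operations) to 1 unit of value per member.
value_if_small = sum(weight * 1.0 for _, weight, _ in parliament)
value_if_large = sum(weight * ratio for _, weight, ratio in parliament)

print(f"value if small: {value_if_small:.2f}")
print(f"value if large: {value_if_large:.2f}")
print(f"ratio (large/small): {value_if_large / value_if_small:.1f}x")
# Even with half the seats indifferent to raw capacity, the aggregate ratio is
# roughly 50x, so the total-utilitarian bloc's willingness to bet/trade dominates.
```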
When I talk about “defer to humans” I’m usually talking about an AI deferring to humans who are explicitly allowed to deliberate/learn/self-modify if that’s what they choose to do (or, perhaps more importantly, to construct a new AI with greater philosophical competence and put it in charge).
Yeah I didn’t make this clear, but my worry here is that most humans won’t choose to “deliberate/learn/self-modify” in a way that leads to philosophical maturity (or construct a new AI with greater philosophical competence and put it in charge), if you initially give them an AI that has great intellectual abilities in most areas but defers to humans on philosophical matters. One possibility is that because humans don’t have value functions that are robust against distributional shifts, they’ll (with the help of their AIs) end up doing an adversarial attack against their own value functions and not be able to recover from that. If they somehow avoid that, they may still get stuck at some level of philosophical competence that is less than what’s needed to capture value from bigger/richer universes, and never feel a need to put a new philosophically competent AI in charge. It seems to me that the best way to avoid both of these outcomes (as well as possible near-term moral catastrophes such as creating a lot of suffering that can’t be balanced out later) is to make sure that the first advanced AIs are highly or scalably competent in philosophy. (I understand you probably disagree with “getting stuck” even with regard to capturing value from bigger/richer universes, you’re not very concerned about near-term moral catastrophes, and I’m not sure what your thinking on the unrecoverable self-attack thing is.)
Another intuition pump here is to consider a thought experiment where you think there’s 50⁄50 chance that our universe supports either 10^120 operations or 10^(10^120) operations (and controlling other universes isn’t possible). Isn’t there some large coalition of total utilitarians in your moral parliament who would be at least 100x happier to find out that the universe supports 10^(10^120) operations (and be willing to bet/trade accordingly)?
I totally agree that there are members of the parliament who would assign much higher value to other universes than to our universe.
I’m saying that there is also a significant contingent that cares about our universe, so the people who care about other universes aren’t going to dominate.
(And on top of that, all of the contingents are roughly just trying to maximize the “market value” of what we get, so for the most part we need to reason about an even more spread out distribution.)
Yeah I didn’t make this clear, but my worry here is that most humans won’t choose to “deliberate/learn/self-modify” in a way that leads to philosophical maturity (or construct a new AI with greater philosophical competence and put it in charge), if you initially give them an AI that has great intellectual abilities in most areas but defers to humans on philosophical matters.
There are tons of ways you could get people to do something they won’t choose to do. I don’t know if “give them an AI that doesn’t defer to them about philosophy” is more natural than e.g. “give them an AI that doesn’t defer to them about how they should deliberate/learn/self-modify.”
I’m saying that there is also a significant contingent that cares about our universe, so the people who care about other universes aren’t going to dominate.
I don’t think I’m getting your point here. Personally it seems safe to say that >80% of the contingent of my moral parliament that cares about astronomical waste would say that if our universe was capable of 10^(10^120) operations it would be at least 100x as valuable as if it was capable of only 10^120 operations. Are your numbers different from this? In any case, what implications are you suggesting based on “no domination”?
(And on top of that, all of the contingents are roughly just trying to maximize the “market value” of what we get, so for the most part we need to reason about an even more spread out distribution.)
I don’t understand this part at all. Please elaborate?
There are tons of ways you could get people to do something they won’t choose to do.
I did preface my conclusion with “The best opportunity to do this that I can foresee”, so if you have other ideas about what someone like me ought to do, I’d certainly welcome them.
I don’t know if “give them an AI that doesn’t defer to them about philosophy” is more natural than e.g. “give them an AI that doesn’t defer to them about how they should deliberate/learn/self-modify.”
Isn’t “how they should deliberate/learn/self-modify” itself a difficult philosophical problem (in the field of meta-philosophy)? If it’s somehow easier or safer to “give them an AI that doesn’t defer to them about how they should deliberate/learn/self-modify” than to “give them an AI that doesn’t defer to them about philosophy” then I’m all for that but it doesn’t seem like a very different idea from mine.
I don’t think I’m getting your point here. Personally it seems safe to say that >80% of the contingent of my moral parliament that cares about astronomical waste would say that if our universe was capable of 10^(10^120) operations it would be at least 100x as valuable as if it was capable of only 10^120 operations. Are your numbers different from this? In any case, what implications are you suggesting based on “no domination”?
I might have given 50% or 60% instead of >80%.
I don’t understand how you would get significant conclusions out of this without big multipliers. Yes, there are some participants in your parliament who care more about worlds other than this one. Those worlds appear to be significantly harder to influence (by means other than trade), so this doesn’t seem to have a huge effect on what you ought to do in this world. (Assuming that we are able to make trades that we obviously would have wanted to make behind the veil of ignorance.)
In particular, if your ratio between the value of big and small universes was only 5x, then that would only have a 5x multiplier on the value of the interventions you list in the OP. Given that many of them look very tiny, I assumed you were imagining a much larger multiplier. (Something that looks very tiny may end up being a huge deal, but once we are already wrong by many orders of magnitude it doesn’t feel like the last 5x has a huge impact.)
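A back-of-the-envelope version of this multiplier arithmetic, working in log10 units because one of the candidate ratios is astronomically large; the “apparent value” assigned to the OP’s interventions is a made-up assumption, not a figure from the thread.

```python
import math

# Sketch of the multiplier arithmetic; the apparent value of the interventions
# (relative to simply filling astronomical waste in this universe) is assumed.
log10_apparent_value = -6.0  # hypothetical: the interventions "look very tiny"

candidate_ratios = [
    ("5x", math.log10(5)),
    ("100x", 2.0),
    ("10^(10^120 - 120)x", 10**120 - 120),  # if value scales linearly with available operations
]

for label, log10_ratio in candidate_ratios:
    log10_relative_value = log10_apparent_value + log10_ratio
    verdict = "dominates the baseline" if log10_relative_value > 0 else "still far below the baseline"
    print(f"value ratio {label}: log10(value / baseline) = {log10_relative_value:.3g} -> {verdict}")
# A 5x vs. 100x ratio barely matters next to the orders-of-magnitude uncertainty
# in the apparent value; only an astronomical ratio changes the conclusion.
```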
I don’t understand this part at all. Please elaborate?
We will have control over astronomical resources in our universe. We can then acausally trade that away for influence over the kinds of universes we care about influencing. At equilibrium, ignoring market failures and friction, how much you value getting control over astronomical resources doesn’t depend on which kinds of astronomical resources you in particular terminally value. Everyone instrumentally uses the same utility function, given by the market-clearing prices of different kinds of astronomical resources. In particular, the optimal ratio between (say) hedonism and taking-over-the-universe depends on the market price of the universe you live in, not on how much you in particular value the universe you live in. This is exactly analogous to saying: the optimal tradeoff between work and leisure depends only on the market price of the output of your work (ignoring friction and market failures), not on how much you in particular value the output of your work.
So the upshot is that instead of using your moral parliament to set prices, you want to be using a broader distribution over all of the people who control astronomical resources (weighted by the market prices of their resources). Our preferences are still evidence about what others want, but this just tends to make the distribution more spread out (and therefore cuts against e.g. caring much less about colonizing small universes).
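A minimal sketch of this separation argument, with two kinds of resources, one assumed market-clearing exchange rate, and frictionless trade; the agents, prices, and endowments are invented for illustration.

```python
# Sketch of the "market price" argument: with frictionless (acausal) trade at
# a single market-clearing exchange rate, the benefit of acquiring resources in
# our universe depends only on that rate, not on which resources an agent
# terminally values. All prices and endowments are assumptions for illustration.

PRICE_OURS = 1.0    # assumed market price of a unit of control over our universe
PRICE_OTHERS = 0.2  # assumed market price of a unit of influence over other universes

def utility_after_trade(endowment_ours: float, terminal_good: str) -> float:
    """Sell the endowment at market prices, buy only the terminally valued good;
    utility = units of that good obtained."""
    budget = endowment_ours * PRICE_OURS
    price_of_preferred = PRICE_OURS if terminal_good == "ours" else PRICE_OTHERS
    return budget / price_of_preferred

for terminal_good in ("ours", "others"):
    u1 = utility_after_trade(endowment_ours=1.0, terminal_good=terminal_good)
    u2 = utility_after_trade(endowment_ours=2.0, terminal_good=terminal_good)
    print(f"terminally values '{terminal_good}': doubling our-universe holdings "
          f"multiplies utility by {u2 / u1:.1f}x")
# Both agents gain by the same 2x factor: the instrumental value of capturing
# resources here is set by market prices, as in the work/leisure analogy.
```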
Isn’t “how they should deliberate/learn/self-modify” itself a difficult philosophical problem (in the field of meta-philosophy)? If it’s somehow easier or safer to “give them an AI that doesn’t defer to them about how they should deliberate/learn/self-modify” than to “give them an AI that doesn’t defer to them about philosophy” then I’m all for that but it doesn’t seem like a very different idea from mine.
I still don’t really get your position, and especially why you think:
It seems to me that the best way to avoid both of these outcomes [...] is to make sure that the first advanced AIs are highly or scalably competent in philosophy.
I do understand why you think it’s an important way to avoid philosophical errors in the short term; in that case I just don’t see why you think that such problems are important relative to other factors that affect the quality of the future.
This seems to come up a lot in our discussions. It would be useful if you could make a clear statement of why you think this problem (which I understand as: “ensure early AI is highly philosophically competent” or perhaps “differential philosophical progress,” setting aside the application of philosophical competence to what-I’m-calling-alignment) is important, ideally with some kind of quantitative picture of how important you think it is. If you expect to write that up at some point then I’ll just pause until then.
I don’t understand how you would get significant conclusions out of this without big multipliers. Yes, there are some participants in your parliament who care more about worlds other than this one. Those worlds appear to be significantly harder to influence (by means other than trade), so this doesn’t seem to have a huge effect on what you ought to do in this world. (Assuming that we are able to make trades that we obviously would have wanted to make behind the veil of ignorance.)
Wait, you are assuming a baseline/default outcome where acausal trade takes place, and comparing other interventions to that? My baseline for comparison is instead (as stated in the OP) “what can be gained by just creating worthwhile lives in this universe”. My reasons for this are (1) I (and likely others who might read this) don’t think acausal trade is much more likely to work than the other items on my list, and (2) the main intended audience for this post is people who have realized the importance of influencing the far future but are not aware of (or have not seriously considered) the possibility of influencing other universes through things like acausal trade and the other items on my list. Even the most sophisticated thinkers in EA seem to fall into this category, e.g., people like Will MacAskill, Toby Ord, and Nick Beckstead, unless they’ve privately considered the possibility and chosen not to talk about it in public, in which case it still seems safe to assume that most people in EA think “creating worthwhile lives in this universe” is the most good that can be accomplished.
In particular, if your ratio between the value of big and small universes was only 5x, then that would only have a 5x multiplier on the value of the interventions you list in the OP. Given that many of them look very tiny, I assumed you were imagining a much larger multiplier. (Something that looks very tiny may end up being a huge deal, but once we are already wrong by many orders of magnitude it doesn’t feel like the last 5x has a huge impact.)
I don’t understand where “5x” comes from or why that’s the relevant multiplier instead of 100x.
It would be useful if you could make a clear statement of why you think this problem is important
I’ll think about this, but I think I’d be more motivated to attempt this (and maybe also have a better idea of what I need to do) if other people also spoke up and told me that they couldn’t understand my past attempts to explain this (including what I wrote in the OP and previous comments in this thread).