One may not share Dawrst’s intuition that pain would outweigh happiness in such universes, but regardless, the hypothetical of lab universes raises the possibility that all utility-maximizing philanthropy should focus on creating, or preventing the creation of, infinitely many lab universes (according to whether one views the expected value of such a universe as positive or negative).
I haven’t even finished reading this post yet, but it’s worth making explicit (because of the obvious connections to existential risk strategies in general) that the philanthropy in this case should arguably go toward research that searches for and identifies things like lab universe scenarios, research into how to search for or study such things (e.g. policies for dealing with basilisks at the individual and group levels), research into how to structure brains so that those brains won’t completely fail at said research, or at research generally, et cetera ad infinitum. Can someone please start a non-profit dedicated to the research and publication of “going meta”? Please?
ETA: I’m happy to see you talk about similar things under counterargument 3, but perhaps you could fuse an FAI (not necessarily CEV) argument with the considerations I mentioned above, e.g. put all of your money into building a passable oracle AI to help you think about how to be an optimal utilitarian (perhaps given some amount of information about what you think “your” “utility function(s)” might be, or what you think morality is), or do something even more meta than that.
Research into bootstrapping current research to ideal research, research into cognitive comparative advantage, research into how to convince people to research such things or support the research of such things, research into what to do given that practically no one can research any of these things and, even if they could, no one would pay them to...