Is the suggestion that one’s utilitarian efforts should be primarily focused on the possibility of lab universes an example of “explicit reasoning gone nuts”?
I think so, for side reasons I go into in another comment reply: basically, in a situation with a ton of uncertainty and some evidence for the existence of a class of currently unknown but potentially extremely important things, one should “go meta” and put effort/resources into finding out how to track down such things, how to reason about them, and how to reason about the known unknowns and unknown unknowns of that class. There are a variety of ways to go meta here that it would be absurd not to seriously consider.
I feel like there’s a way to LCPW (least-convenient-possible-world) your question, but I can’t, off the top of my head, think of an analogue sufficiently close to the heart of your question that still seems meaningfully connected to the real world in the right ways. Anyone have suggestions, even if they’re probably bad?
If so, is the argument advanced by Person 1 above also an example of “explicit reasoning gone nuts”? If the two cases are different, then why?
My default expectation is for models like the one presented to be off by many orders of magnitude, even when handed to me in well-written posts by people whose thinking I esteem, in a context where perceived irrationality is received very harshly. If someone were speaking passable Bayesian at me and said there was a non-negligible chance that some strategy was significantly better than my current one, then I would Google the relevant aspects and find out for myself. Arguments like those advanced by Person 1 are indicators of things to look at. If the model uncertainty is so great, then put effort into reducing your model uncertainty.
Sometimes this is of course very hard to do given a few hours and an internet connection, and the lab universe is a good example of this. (In the lab universe case, some minutes should perhaps be spent politely asking various cosmologists how they’d bet on the creation of a lab universe occurring at various future points, perhaps even bringing up AGI in this context.) But if so, then surely there are more effective and more psychologically realistic courses of action than just treating your model uncertainty as a given and then optimizing from ignorance.
I’d like to LCPW question three as well, but I remain rather unconvinced that we have to deal with model uncertainty in this way. Even if there aren’t easy ways to improve one’s map or some group’s map, which I doubt, I would sooner read many textbooks on cognitive biases, probability theory, maybe complex systems, and anything I could find about the risk/strategy in question, than engage in a policy of acting on so little reason, especially when there are personal/psychological/social costs.
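To make the earlier “off by many orders of magnitude” worry a bit more concrete, here is a minimal sketch in Python. Everything in it is my own toy assumption (the log-normal error model, the function name, and all of the numbers); it just shows how much the implied expected value swings around when the probability estimate itself might be off by a few orders of magnitude.

```python
import math
import random

def expected_value_samples(point_estimate, error_in_orders_of_magnitude, payoff, n=100_000):
    """Sample expected values when the probability estimate itself is uncertain.

    The probability is treated as log-normally distributed around the point
    estimate, with the given standard deviation measured in orders of magnitude.
    (A toy error model, chosen only for illustration.)
    """
    samples = []
    for _ in range(n):
        log10_p = math.log10(point_estimate) + random.gauss(0, error_in_orders_of_magnitude)
        p = min(10 ** log10_p, 1.0)  # a probability can't exceed 1
        samples.append(p * payoff)
    return samples

# Hypothetical numbers: a 1-in-a-million point estimate, a payoff of 1e12
# "units of value", and a model that might be off by 3 orders of magnitude.
samples = sorted(expected_value_samples(1e-6, 3.0, 1e12))
print("median EV:", samples[len(samples) // 2])
print("mean EV:  ", sum(samples) / len(samples))
print("95th pct: ", samples[int(0.95 * len(samples))])
```

The median, mean, and tail answers differ from one another by orders of magnitude, and all of them move dramatically if you change the assumed error bars; that spread, rather than the point estimate, is what ends up driving the decision, which is why shrinking it (asking the cosmologists, reading the textbooks) looks like the higher-value move.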