Oh yes, besides ciphergoth, I was thinking in this vein in part because of some work for Luke: ‘value of information’ has obvious implications for anyone who is picking research topics or research funding based on consequentialist reasons.
I think it’s pretty obvious why it’s relevant, but to give an example, some of the topics in Bostrom’s paper have very unclear values of information, which in our case can be defined as ‘what could we do about this problem?’ For example, he starts with the Doomsday Problem, which is very interesting and all, but suppose we had perfect information which settled the Doomsday Problem, saying that, yes, humanity will indeed end by 5000 AD or whenever with 95% probability. What is the value of this information? Well, as far as I can see, it’s close to zero: the Doomsday Problem doesn’t specify why humanity ends, just that that class of observers runs out. If we were to get more information, it might turn out that the ‘end’ is simply our becoming posthumans, which is not an outcome we need to do anything about.
Or to take a more pointed example: asteroids have a high value of information because once we learn about them, we can send up spacecraft to do something about them. Hence, we ought to be willing to pay an oracle billions in exchange for perfect information about anything aimed at Earth.
A collapse of the vacuum (another classic existential risk) is not worth researching in the slightest, except insofar as particle accelerators might cause one, because there is absolutely nothing we could do about it.
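To make the asteroid/vacuum contrast concrete, here is a minimal sketch of the expected-value-of-perfect-information calculation I have in mind; the function, its parameters, and every number in it are illustrative assumptions made up for this sketch, not estimates from anywhere.

```python
# Minimal sketch of the expected-value-of-perfect-information (EVPI) logic above.
# All probabilities, losses, and mitigation numbers are illustrative assumptions.

def evpi(p_risk, loss, mitigation_cost, mitigation_efficacy):
    """Expected loss when deciding on the prior alone, minus expected loss when
    an oracle first reveals whether the threat is real; the difference is the
    most we should be willing to pay the oracle."""
    # Acting on the prior: either do nothing, or pay for mitigation up front.
    do_nothing = p_risk * loss
    mitigate = mitigation_cost + p_risk * loss * (1 - mitigation_efficacy)
    act_on_prior = min(do_nothing, mitigate)
    # With perfect information: mitigate only if the oracle says the threat is real.
    act_informed = p_risk * min(loss, mitigation_cost + loss * (1 - mitigation_efficacy))
    return act_on_prior - act_informed

# Asteroid strike: a deflection mission can change the outcome, so the oracle's
# answer is worth on the order of the mission cost (billions).
print(evpi(p_risk=1e-4, loss=1e15, mitigation_cost=1e10, mitigation_efficacy=0.9))
# Vacuum collapse: no intervention works (efficacy 0), so perfect information is worth 0.
print(evpi(p_risk=1e-10, loss=1e20, mitigation_cost=1e10, mitigation_efficacy=0.0))
```

The point the sketch makes is just that the oracle is only worth paying insofar as its answer could change which action we take, which is exactly why the vacuum-collapse case comes out to zero.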
I’m not sure I’ve seen this point made anywhere in the literature, but it definitely should either be mentioned in any paper on efficient philosophy or be rendered redundant/implied by the analysis.