The prior probability of us being in a position to impact a googolplex people is on the order of one over googolplex, so your equations must be wrong
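For concreteness, the penalty being paraphrased here can be written out. This is a sketch of the post’s leverage-penalty idea as I read it, not a quote:

```latex
% Sketch of the leverage penalty (paraphrase, not verbatim from the post):
% the prior that you, specifically, occupy a position whose choices settle
% the fate of N distinct people scales like 1/N.
\[
  P(\text{leverage over } N \text{ lives}) \lesssim \frac{1}{N},
  \qquad
  N = 10^{10^{100}} \;\Rightarrow\; P \lesssim 10^{-10^{100}}.
\]
```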
That’s not at all how validity of physical theories is evaluated. Not even a little bit.
By that logic, you would have to reject most current theories. For example, Relativity restricted the maximum speed of travel, thus revealing that countless future generations will not be able to reach the stars. Archimedes’s discovery of the buoyancy laws enabled future naval battles and seafaring, impacting billions so far (which is not a googolplex, but the day is still young). The discovery of fission and fusion still has the potential to destroy all those potential future lives. Same with computer research.
The only thing that matters in physics is the old, mundane “fits current data, makes valid predictions”, or at least the potential to make testable predictions some time down the road. The only time you might want to bleed (mis)anthropic considerations into physics is when you have no way of evaluating the predictive power of various models and need to decide which one is worth pursuing. But that is not physics, it’s decision theory.
Once you have a testable working theory, your anthropic considerations are irrelevant for evaluating its validity.
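As a numerical illustration of that claim (the 10^-10 prior penalty and the 20:1 likelihood ratio below are made-up figures, not anything from the thread), here is how a fixed prior penalty fares against accumulating predictive successes:

```python
import math

# Illustration only: posterior log-odds between two theories equal the prior
# log-odds plus the accumulated log likelihood ratios from observations.
# A fixed prior penalty (anthropic or otherwise) is one constant term.
prior_log_odds = math.log(1e-10)  # assumed penalty: prior odds of 10^-10
llr_per_obs = math.log(20)        # assumed: each confirmed prediction favors the theory 20:1

for n in (0, 5, 10, 20):
    posterior = prior_log_odds + n * llr_per_obs
    print(f"{n:2d} confirmed predictions -> posterior log-odds {posterior:+.1f}")
# After ~8 confirmations the data term alone outweighs the 10^-10 penalty.
```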
That’s perfectly credible since it implies a lack of leverage.
Oh, I assumed that negative leverage is still leverage. Given that it might amount to the equivalent of killing a googolplex of people, assuming you equate never being born with killing.
10^10 is not a significant factor compared to the sensory experience of seeing something float in a bathtub.
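To unpack that reply: a factor of 10^10 is only about 33 bits of prior odds, while a clear, repeatable sensory observation can carry far more evidence than that. A minimal sketch, where the 10^30 likelihood ratio is an illustrative assumption (the 10^10 is the thread’s “billions impacted” figure):

```python
import math

# The thread's figures in bits: a 10^10 leverage penalty vs. the evidence
# from directly, repeatably watching something float. The 10^30 likelihood
# ratio is an assumption for illustration; the 10^10 comes from the thread.
penalty_bits = math.log2(1e10)   # ~33.2 bits of prior odds against
evidence_bits = math.log2(1e30)  # ~99.7 bits if the observation is 10^30:1 evidence

print(f"penalty {penalty_bits:.1f} bits, evidence {evidence_bits:.1f} bits, "
      f"net {evidence_bits - penalty_bits:.1f} bits in favor")
```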
To build an AI one must be a tad more formal than this, and once you start trying to be formal, you will soon find that you need a prior.
I see. I cannot comment on anything AI-related with any confidence. I thought we were talking about evaluating the likelihood of a certain model in physics to be accurate. In that latter case anthropic considerations seem irrelevant.
It’s likely that anything around today has a huge impact on the state of the future universe. As I understood the article, the leverage penalty also requires considering how unique your opportunity to have that impact is: Archimedes had a massive impact, but a massive number of people throughout history would have had the chance to come up with the same theories had they not already been discovered, so you have to offset Archimedes’s leverage penalty by the fact that he wasn’t uniquely capable of having that leverage.
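One way to read that offset numerically (both counts below are illustrative assumptions, not figures from the article):

```python
# If a discovery touched ~10^10 lives, but ~10^8 people across history had a
# comparable opportunity to make it first, the penalty attaching to any one
# of them shrinks accordingly.
people_impacted = 1e10         # "billions so far", per the thread
potential_discoverers = 1e8    # assumed number who could have found buoyancy first

raw_penalty = 1 / people_impacted                         # naive leverage prior: 10^-10
offset_penalty = potential_discoverers / people_impacted  # 10^-2 once non-uniqueness is credited

print(f"raw penalty {raw_penalty:.0e}, offset for non-uniqueness {offset_penalty:.0e}")
```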
“…so you have to offset Archimedes’s leverage penalty by the fact that he wasn’t uniquely capable of having that leverage.”
Neither was any other scientist in history, including the one in Eliezer’s dark energy example. Personally, I take a very dim view of applying anthropics to calculating probabilities of future events, and this is what Eliezer is doing.