I read this as saying that a utility function which is directly dependent on another’s utility is not a utility function.
No; I meant that each agent has a utility function, and tries to maximize that utility function.
If we can find an evolutionarily stable cognitive makeup that allows an agent to have a utility function weighing consequences in the distant future equally with consequences in the present, then we may be saved. In other words, we need to eliminate time-discounting.
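To see why discounting is the crux, here is a minimal sketch (the 1% annual rate is mine, purely for illustration) of how little total weight an exponential discounter puts on everything past a given horizon:

```python
# Total weight an exponential time-discounter places on the entire future
# beyond a given horizon, relative to a weight of 1 on the present year.
# The 1% annual discount rate is purely illustrative.

def weight_beyond(horizon_years, annual_rate=0.01):
    """Geometric tail: sum of gamma**t for t > horizon, with gamma = 1 - rate."""
    gamma = 1.0 - annual_rate
    return gamma ** (horizon_years + 1) / (1.0 - gamma)

for horizon in (100, 1_000, 10_000):
    print(f"total weight on everything after year {horizon:>6,}: "
          f"{weight_beyond(horizon):.1e}")
```

With even that mild rate, everything beyond year 10,000 gets a combined weight around 10^-42 of a single present year, so any finite present gain dominates the entire distant future.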
One thing I didn’t explain clearly is that uncertainty alone may provide enough time-discounting to make universe-death inevitable. Because you’re more and more uncertain what the impact of a decision will be the farther you look into the future, you weigh that impact less and less the farther ahead you go.
But maybe this is not inevitably the right thing to do, if you can find a way to predict future impacts that is uncertain, but also unbiased!
EDIT: No. Unbiased doesn’t cut it.
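Here is the uncertainty point in miniature (a sketch under my own simplifying assumption that each year carries some small, independent chance that unforeseen events erase a decision’s intended effect):

```python
# If each year there is an independent probability p_wiped that unforeseen
# events erase a decision's intended long-run effect, then the expected
# effect t years out is value * (1 - p_wiped)**t -- formally identical to
# exponential time-discounting, even though no discount rate was ever chosen.
# The 0.1% annual figure is purely illustrative.

def expected_effect(value, years_ahead, p_wiped=0.001):
    """Expected surviving effect of a decision aimed `years_ahead` years out."""
    return value * (1.0 - p_wiped) ** years_ahead

for t in (100, 1_000, 10_000, 100_000):
    print(f"expected fraction of the effect surviving {t:>7,} years: "
          f"{expected_effect(1.0, t):.1e}")
```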
On life-and-death conflicts, I did give such an argument, but very briefly:
Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.
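To sketch those dynamics, here is a toy hawk–dove replicator model (the payoffs are the standard ones from evolutionary game theory; the numbers are mine and purely illustrative):

```python
# Hawk-dove replicator dynamics: "doves" avoid conflict, "hawks" fight over
# the contested resource. Starting from an almost entirely dove population,
# hawks invade: pure conflict-avoidance is never evolutionarily stable, and
# the hawk share settles near min(1, V/C). Numbers are illustrative.
V = 1.0  # value of the contested resource
C = 3.0  # cost of losing a fight

def payoffs(hawk_share):
    """Expected payoff of each strategy against the current population mix."""
    dove_share = 1.0 - hawk_share
    hawk = hawk_share * (V - C) / 2 + dove_share * V
    dove = dove_share * V / 2          # doves get nothing against hawks
    return hawk, dove

x = 0.001  # a tiny mutant minority of hawks among conflict-avoiders
for _ in range(3000):
    hawk, dove = payoffs(x)
    mean = x * hawk + (1 - x) * dove
    x = min(max(x + 0.01 * x * (hawk - mean), 0.0), 1.0)  # replicator step

print(f"hawk share after invasion: {x:.2f}   (V/C = {V/C:.2f})")
```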
I realize this isn’t enough for someone who isn’t already familiar with the full argument, but it’s after midnight and I’m going to bed.
On the LHC, why are the next hundred thousand years a special case? And again, under what conditions should the LHC run?
The next 100,000 years are a special case because we may learn most of what we will learn over the next billion years in the next 100,000 years. During this period, the risks of running something like the LHC are probably outweighed by how much the resulting knowledge will help us estimate future risks and figure out a solution to the problem.
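As a rough sketch of that trade-off (every number below is hypothetical, chosen only to show the shape of the argument, not to estimate any actual risk):

```python
# Compare survival probabilities with and without risky-but-informative
# experiments during a short "learning era" (~100,000 years), followed by a
# long era (~1 billion years) whose background risk the knowledge reduces.
# All per-century risk figures are hypothetical.

def p_survive(per_century_risk, centuries):
    return (1.0 - per_century_risk) ** centuries

learning_centuries = 1_000        # ~100,000 years
future_centuries = 10_000_000     # ~1,000,000,000 years

background_risk = 1e-7   # assumed existential risk per century, status quo
experiment_risk = 1e-6   # assumed extra risk per century while experimenting
informed_risk = 5e-8     # assumed risk per century once we understand more

cautious = p_survive(background_risk, learning_centuries + future_centuries)
bold = (p_survive(background_risk + experiment_risk, learning_centuries)
        * p_survive(informed_risk, future_centuries))

print(f"P(survive a billion years) without the risky experiments: {cautious:.3f}")
print(f"P(survive a billion years) with the risky experiments:    {bold:.3f}")
```

Under those made-up numbers, accepting a tenfold-higher risk during the learning era still roughly doubles the chance of surviving the full billion years, because the knowledge lowers the risk over the much longer era that follows.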