You gave the example that throwing plastic into a forest could be worse than throwing a stick into it. Do you have any other clear examples where this heuristic applies? (Where the example itself is not speculative.)
In practice, I think this heuristic tells us that we should be very slow to adopt new medical treatments, reject GMOs, reject nuclear energy, and stop building new infrastructure on unsettled land. Do you think these recommendations should be followed? If not, presumably you think we have good reasons not to trust the heuristic in these cases; but if that’s so, then it seems like the heuristic can be relatively easily overturned. I’m not convinced it should be our guiding consideration for something like AI alignment.
Thanks for the feedback! The most useful arena I can think of for this heuristic is how well your life aligns with the life your mind and body were adapted to live. There are probably a lot of places where bringing the two into closer alignment would benefit you.