Second, it seems wrong because the underlying issue that you’re unhappy with—a constraint or objective that we consider important is left out—is not solved by sampling from the solution space instead of deterministically finding optimal points. There’s no morality from randomness!
“There is no morality from randomness” is not exactly dealing with the point under contest. I am effectively claiming that one should treat selection of social policies as a constraint-satisfaction problem, precisely because treating it as an optimization problem throws out subconscious constraints by default, which makes optimization methods mostly useless when we can’t directly write down precisely the one and only objective function we care about.
I am effectively claiming that one should treat selection of social policies as a constraint-satisfaction problem
So, there’s a class of problems where the hard part is finding a solution that satisfies all the constraints (i.e. a feasible solution). “Is it possible to pack the boxes on this list into a truck following these rules?” Even there, it’s generally better to use optimization methods than generic sampling or satisfiability methods, because they can provide near-feasible solutions (“this is the plan that gets the most boxes on the truck”) and can be much faster.
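The contrast can be sketched with a toy packing instance (box sizes and capacity are made up for illustration): a pure feasibility check only answers yes or no, while even a simple greedy optimization pass also hands back the near-feasible plan when the answer is no.

```python
# Toy illustration, not a real packing solver: all numbers are made up.
# A feasibility check answers only yes/no; an optimizing pass also
# yields a best partial plan when full packing is infeasible.

def pack_greedy(boxes, capacity):
    """Greedy heuristic: pack smallest boxes first to maximize count."""
    packed = []
    used = 0
    for size in sorted(boxes):
        if used + size <= capacity:
            packed.append(size)
            used += size
    return packed

boxes = [4, 8, 1, 4, 2, 1]   # hypothetical box sizes
capacity = 10                # hypothetical truck capacity

plan = pack_greedy(boxes, capacity)
feasible = len(plan) == len(boxes)

print(feasible)  # not every box fits, so the instance is infeasible
print(plan)      # but we still get the plan that packs the most boxes
```

A satisfiability-style method applied to the same instance would report “infeasible” and stop; the optimizer’s partial plan is the extra information the paragraph above is pointing at.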
But I don’t think that’s the problem class under discussion, which is some mixture of “what social policies should we support / what should we do with our charitable energy.” If someone says, “I want to reduce the damage done by blindness, please advise,” they’re talking about a maximization problem with many feasible solutions, not a feasibility problem, because it’s easy to come up with a very broad range of things they could do to reduce the damage done by blindness.
The approach you’re recommending seems like it would cash out as “well, I got a list of charities with ‘blind’ in their name, and randomly sampled five from that list. Maybe you should donate to one of them!”
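To make the caricature concrete, here is what that sampling procedure amounts to (the charity names are placeholders, not real organizations):

```python
import random

# Hypothetical list of charities matching the keyword search; the
# names are invented purely for illustration.
charities = [f"blind_charity_{i}" for i in range(100)]

random.seed(0)  # fixed seed so the sketch is reproducible
picks = random.sample(charities, 5)
print(picks)  # five uniform random picks, carrying no effectiveness signal
```

Every charity on the list is equally likely to be recommended, which is exactly the property being objected to: the sample is blind to any measure of how much good each option does.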
Another way to look at this is ‘absolute constraints’ vs. ‘relative constraints.’ The absolute constraints are the same regardless of what solutions exist (or don’t); the relative constraints are defined only in terms of other solutions. The core insight of EA is that it makes sense to take relative constraints into account when doing charitable donations—it does more good to donate to more effective charities. If we discover that one health charity generates a QALY for a thousand dollars, then we can implicitly add the constraint that all health charities have to generate at least one QALY for every thousand dollars we give them.
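The distinction can be sketched as follows (the cost-effectiveness figures are hypothetical, not real charity data): an absolute constraint is a fixed bar, while a relative constraint is induced by the best option found so far.

```python
# Hypothetical dollars-per-QALY figures; invented for illustration only.
charities = {
    "charity_a": 1_000,    # the best observed option
    "charity_b": 5_000,
    "charity_c": 40_000,
}

# Absolute constraint: a bar fixed in advance, independent of what
# other options exist (here, an arbitrary made-up threshold).
absolute_ok = {name: cost <= 100_000 for name, cost in charities.items()}

# Relative constraint: induced by the best solution found so far --
# every option must match the best observed dollars-per-QALY.
best = min(charities.values())
relative_ok = {name: cost <= best for name, cost in charities.items()}

print(absolute_ok)  # every option clears the fixed bar
print(relative_ok)  # only the best option survives the relative bar
```

Note that discovering a better charity tightens the relative constraint automatically, without anyone rewriting the absolute one; that is the mechanism the paragraph above describes.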
I agree that there’s reason to be suspicious of automatically generated relative constraints, but I think that there are better approaches to take to resolving that suspicion than moving to pure sampling.
Sorry for any apparent rudeness.