Suppose the AI finds a plan with 10^50 impact and 10^1000 utility. I don’t want that plan to be run; it’s probably a plan that involves taking over the universe and then doing something really high-utility. I think a constraint is better than a scaling factor.
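To spell out what I mean (the notation here is mine, just for illustration): with a scaling factor the agent ends up maximizing something like

$$U(p) - \lambda \cdot \mathrm{Impact}(p)$$

over all plans $p$, so a plan whose utility is large enough can always pay for an arbitrarily large impact penalty. With a hard constraint it maximizes

$$U(p) \quad \text{subject to} \quad \mathrm{Impact}(p) \le c,$$

and the 10^50-impact plan is simply never eligible, no matter how high its utility.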
Utility is bounded in [0, 1], so there is no plan with 10^1000 utility.
If the conditions of theorem 11 are met, we’re fine. There are some good theoretical reasons not to use constraints (beyond the computational ones).
(It’s true that the buffering criterion is nice and simple for constrained partitions: the first non-dominated catastrophe has (1+α) times the impact of the first non-dominated reasonable plan.)
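(Writing $I(\cdot)$ for impact — my notation, not anything official — I read that criterion as

$$I(\text{first non-dominated catastrophe}) \ge (1+\alpha)\, I(\text{first non-dominated reasonable plan}),$$

i.e. the buffer α is measured relative to the lowest-impact non-dominated reasonable plan.)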