Thanks, I think this is an important area and having an overview of your thinking is useful.
My impression is that it would be more useful still if it were written to make plainer the differing degrees of support available for its different claims. You make a lot of claims, which vary from uncontroversial theorems to common beliefs in the AI safety community to things that seem like they’re probably false (not necessarily for deep reasons, but at least false-as-stated). And the language of support doesn’t seem to be stronger for the first category than the last. If you went further in flagging the distinction between things that are accepted and things that you guess are true, I’d be happier trusting the paper and pointing other people to it.
I’ll give examples, though these are meant to be representative of the pattern rather than a claim that you must change these specific details.
On page 2, you say “In linear programming, the maximum of an objective function tends to occur on a vertex of the space.” Here “tends to” seems unnecessary hedging—I think this is just a theorem! Perhaps there’s an interpretation where it fails, but you hedge far less on other much more controversial things.
On the other hand the very next sentence: “Similarly, the optimal solution to a goal tends to occur on an edge (hyperface) of the possibility space.” appears to have a similar amount of hedging for what is a much weaker sense of “tends”, and what’s a much weaker conclusion (being in a hyperface is much weaker than being at a vertex).
Another example: the top paragraph of the right column of page 3 uses “must” but seems to presuppose an internal representation with utility functions.
Thanks. I’ve re-worded these particular places, and addressed a few other things that pattern-matched on a quick skim. I don’t have time to go back over this paper with a fine comb, but if you find other examples, I’m happy to tweak the wording :-)
Thanks for the quick update! Perhaps this will be most useful when writing new things, as I agree that it may not be worth your time to rewrite carefully (and should have said that).
On page 2, you say “In linear programming, the maximum of an objective function tends to occur on a vertex of the space.” Here “tends to” seems unnecessary hedging—I think this is just a theorem!
It is. If there exists an optimal solution, at least one vertex will be optimal, and as RyanCarey points out, if a hyperface is optimal it will have at least one vertex.
A stronger statement is that the Simplex algorithm will always return an optimal vertex (interior point algorithms will return the center of the hyperface, which is only a vertex if that’s the only optimal point).
“In linear programming, the maximum of an objective function tends to occur on a vertex of the space.” Here “tends to” seems unnecessary hedging—I think this is just a theorem!
… Even if the optimum occurs along an edge, it’ll at least include vertices.
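A quick numerical sanity check of this claim (a toy example of my own, not from the paper): take an LP whose optimum is attained along an entire edge, and confirm that no feasible point beats the best vertex.

```python
# Toy LP: maximize f(x, y) = x + y over the unit triangle
# {(x, y) : x >= 0, y >= 0, x + y <= 1}.
# The optimum (value 1.0) is attained along the whole edge x + y = 1,
# but that edge includes the vertices (1, 0) and (0, 1), so the best
# vertex still attains the optimal value.

import random

def f(p):
    x, y = p
    return x + y

vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
best_vertex_value = max(f(v) for v in vertices)

# Sample random feasible points (including edge interiors) and confirm
# none exceeds the best vertex value.
random.seed(0)
for _ in range(10_000):
    u, v = random.random(), random.random()
    if u + v > 1:          # reflect to stay inside the triangle
        u, v = 1 - u, 1 - v
    assert f((u, v)) <= best_vertex_value + 1e-12

print(best_vertex_value)  # 1.0
```

Of course this only checks one instance; the general statement is the fundamental theorem of linear programming (an LP with an optimal solution has an optimal vertex).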