Oh ok lol. On a quick read I didn’t see too much in this comment to disagree with.
(One possible point of disagreement is that I think you plausibly couldn’t gather any set of people alive today and solve the technical problem; plausibly you’d need many, like many hundreds, of the people you call geniuses. Obviously “hundreds” is made up, but I mean that the problem, “come to understand minds, the most subtle and complex thing ever, at a pretty deep and comprehensive level”, is IMO extremely difficult: harder than anything humanity has done so far by a lot, not just an ordinary big science project. Possibly contra Soares, IDK.)
(Another disagreement would be with this claim:
[Scott] has unarguably done a large amount of the most valuable work in the area in the past decade
I don’t actually think logical induction is that valuable for the AGI alignment problem, to the point where random philosophy is on par in terms of value to alignment, though I expect most people to disagree with this. It’s just a genius technical insight in general.)
I admitted that it’s possible the problem is practically unsolvable, or worse; you could have put the entire world on Russell and Whitehead’s goal of systematizing math, and you might have gotten to Gödel faster, but you’d probably just have wasted more time.
And on Scott’s contributions, I think they are solving, or contributing toward solving, parts of the problems that were initially posited as critical to alignment, and I haven’t seen anyone do more. (With the possible exception of Paul Christiano, who hasn’t been focusing on research for solving alignment as much recently.) I agree that the work doesn’t do much other than establish better foundations, but that’s kind of the point. (And it’s not just logical induction; there’s his collaboration on Embedded Agency, and his work on finite factored sets.) But objecting that the work done to establish those foundations is more philosophical and doesn’t itself align AGI seems like moving the goalposts, even if I agree it’s true.