> considerably-better-than-average work on trying to solve the problem from scratch
It’s considerably better than average but is a drop in the bucket and is probably mostly wasted motion. And it’s a pretty noncentral example of trying to solve the problem from scratch. I think most people reading this comment just don’t even know what that would look like.
> even for someone interested in this agenda
At a glance, this comment seems like it might be part of a pretty strong case that [the concrete ML-related implications of NAH] are much better investigated by the ML community than by LW alignment people. But I doubt that the philosophically more interesting aspects of Wentworth’s perspective on NAH are better served by looking at ML stuff than by trying from scratch or by reading Wentworth’s and related LW-ish writing. (I’m unsure about the mathematically interesting aspects; there the relevant alternative community wouldn’t be ML but mathematics.)
And most importantly, “someone interested in this agenda” is already a somewhat nonsensical or question-begging conditional. You brought up “AI safety research” specifically, and by that term you are morally obliged to mean [the field of study aimed at figuring out how to make cognitive systems that are more capable than humanity and also serve human values]. That pursuit is better served by trying from scratch. (Yes, I still haven’t presented an affirmative case. That’s because we haven’t even communicated about the proposition yet.)