On the scientific/technological side, you can also use scientific/engineering papers (which I’m guessing have to be at least an order of magnitude greater in volume than philosophy writing).
This still seems like it is continuing the status quo (where we put more effort into technology relative to philosophy) rather than differentially benefitting technology.
My main point is that it seems a lot harder for technological progress to go “off the rails”, because we have access to ground truth (even if that data is sparse), so we can push it much harder with ML.
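To make that intuition concrete, here’s a minimal sketch (entirely my own construction, with made-up numbers and names, not anything from this thread): a model is trained on an abundant but biased proxy signal, with a small ground-truth set used as a gate on updates. Even ~2% labeled data keeps the learned parameter anchored near the truth, while the same loop with no check drifts to wherever the proxy points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": the quantity we want to predict follows y = 2x.
true_w = 2.0
X = rng.uniform(-1.0, 1.0, size=1000)

# Ground truth is sparse: only ~2% of points come with real measurements.
labeled = rng.random(X.shape) < 0.02
y_obs = true_w * X + 0.05 * rng.normal(size=X.shape)

# A biased proxy signal available everywhere (think heuristics or
# self-generated labels); trusting it alone pulls the model off the rails.
y_proxy = 3.0 * X + 0.3

def gt_loss(w):
    """Error on the sparse ground-truth measurements only."""
    return np.mean((w * X[labeled] - y_obs[labeled]) ** 2)

w, lr = 0.0, 0.05
for _ in range(500):
    # Gradient step on the abundant but biased proxy objective.
    grad = np.mean(2.0 * (w * X - y_proxy) * X)
    candidate = w - lr * grad
    # The sparse ground truth acts as a check: only keep updates that
    # don't make us worse on the real measurements.
    if gt_loss(candidate) <= gt_loss(w):
        w = candidate

# For contrast: the same training loop with no ground-truth check.
w_free = 0.0
for _ in range(500):
    w_free -= lr * np.mean(2.0 * (w_free * X - y_proxy) * X)

print(f"with sparse ground-truth check: w = {w:.2f} (true w = {true_w})")
print(f"without any check:             w = {w_free:.2f}")
```

The analogue of the worry about philosophy would be that there is no `y_obs` at all to gate on.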
Yeah, that seems right, to the extent that we want to use ML to “directly” work on technological / philosophical progress. To the extent that it has to factor through some more indirect method (e.g. through human reasoning, as in iterated amplification), I think this becomes an argument for pessimism about solving metaphilosophy, but not an argument that ML will differentially benefit technological progress (or at least, that conclusion depends on hard-to-agree-on intuitions).
I think there’s a strong argument to be made that you will have to go through some indirect method because there isn’t enough data to attack the problem directly.
(Fwiw, I’m also worried about the semi-supervised RL part of iterated amplification for the same reason.)
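(To illustrate the setting I mean, here’s a toy sketch, again my own construction rather than anything from the iterated amplification writeups: an epsilon-greedy bandit that only observes its true reward on a small fraction of pulls, and otherwise optimizes a reward model fit to those sparse observations. The label counts at the end show how little ground truth the policy’s choices actually rest on.)

```python
import numpy as np

rng = np.random.default_rng(1)

n_arms = 10
true_reward = rng.uniform(0.0, 1.0, size=n_arms)
label_prob = 0.05  # fraction of pulls where the true reward is revealed

reward_model = np.full(n_arms, 0.5)  # prior guess for never-labeled arms
counts = np.zeros(n_arms, dtype=int)

eps = 0.1
for t in range(5000):
    # Epsilon-greedy against the *modeled* reward, not the true one.
    if rng.random() < eps:
        arm = int(rng.integers(n_arms))
    else:
        arm = int(np.argmax(reward_model))
    # The "semi-supervised" part: ground-truth reward arrives only rarely.
    if rng.random() < label_prob:
        r = true_reward[arm] + 0.1 * rng.normal()
        counts[arm] += 1
        # Incremental mean; the first observation replaces the prior.
        reward_model[arm] += (r - reward_model[arm]) / counts[arm]

print("arm the policy settled on:", int(np.argmax(reward_model)))
print("actual best arm:          ", int(np.argmax(true_reward)))
print("ground-truth labels per arm:", counts)
```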
The way I would put it is that humans developed philosophical abilities for some reason we don’t understand, so we can’t rule out AI developing philosophical abilities for the same mysterious reason. It feels pretty risky to rely on this, though.
Yeah, I agree that this is a strong argument for your position.