(Replying without the context I imagine to be present here)
I agree with a version of this which goes “just knowing how to make SGD go brrr does not at all mean you have expertise for predicting what happens with effective AI.”
I disagree with a version of this comment which goes, “Having a lot of ML expertise doesn’t mean you have expertise for thinking about effective AIs.” Eliezer could have started off his train of thought by imagining systems which are not the kind of system that gets trained by SGD, and there’s no guarantee that thought experiments nominally about “effective AIs” are at all relevant to the real-world effective AIs we actually get by training with SGD. (For example: specific critique A of claims about minds-in-general; specific critique B of attempts to use AIXI as a model of effective intelligence.)