I rank the credibility of my own informed guesses far above those of Eliezer.
Apologies if there is a clear answer to this, since I don’t know your name and you might well be super-famous in the field: Why do you rate yourself “far above” someone who has spent decades working in this field? Appealing to experts like MIRI makes for a strong argument. Appealing to your own guesses instead seems like the sort of thought process that leads to anti-vaxxers.
I think it’s a positive if alignment researchers feel like it’s an allowed option to trust their own technical intuitions over the technical intuitions of this or that more-senior researcher.
Overly dismissing old-guard researchers is obviously a way the field can fail as well. But the field won’t advance much at all if most people don’t at least try to build their own models.
Koen also leans more on deference in his comment than I’d like, so I upvoted your ‘deferential but in the opposite direction’ comment as a corrective, handoflixue. :P But I think it would be a much better comment if it didn’t conflate epistemic authority with “fame” (I don’t think fame is at all a reliable guide to epistemic ability here), and if it didn’t equate “appealing to your own guesses” with “anti-vaxxers”.
Alignment is a young field; “anti-vaxxer” is a term you throw at people after vaccines have existed for 200 years, not a term you throw at the very first skeptical researchers arguing about vaccines in 1800. Even if the skeptics are obviously and decisively wrong at an early date (which indeed not-infrequently happens in science!), it’s not the right way to establish the culture for those first scientific debates.
Why do you rate yourself “far above” someone who has spent decades working in this field?
Well put, valid question. By the way, did you notice how careful I was in avoiding any direct mention of my own credentials above?
I see that Rob has already written a reply to your comments, making some of the broader points that I could have made too. So I’ll cover some other things.
To answer your valid question: If you hover over my LW/AF username, you can see that I self-code as the kind of alignment researcher who is also a card-carrying member of the academic/industrial establishment. In both age and academic credentials, I am in fact a more-senior researcher than Eliezer. So the epistemology gets rather complicated if you are outside of this field and want to decide which one of us is probably more right.
Though we have disagreements, I should also point out some similarities between Eliezer and me.
Like Eliezer, I spend a lot of time reflecting on the problem of crafting tools that other people might use to improve their own ability to think about alignment. Specifically, these are not tools for triangulating between self-declared experts; they are tools people can use to develop their own well-founded opinions independently. You may have noticed that this is somewhat of a theme in section C of the original post above.
The tools I have crafted so far are somewhat different from those that Eliezer is most famous for. I also tend to target my tools more at the mainstream than at Rationalists and EAs reading this forum.
Like Eliezer, on some bad days I cannot escape having certain feelings of disappointment about how well this entire global tool-crafting project has been going so far. Eliezer seems to be having quite a lot of these bad days recently, which makes me feel sorry, but there you go.
Thanks for taking my question seriously—I am still a bit confused why you would have been so careful to avoid mentioning your credentials up front, though, given that they’re fairly relevant to whether I should take your opinion seriously.
Also, neat, I had not realized hovering over a username gave so much information!
You are welcome. I carefully avoided mentioning my credentials as a rhetorical device, to highlight the essence of how many of the arguments on this site work.