FWIW my “one guy’s opinion” is (1) I’m expecting people to build goal-seeking AGIs, and I think by default their goals will be opaque and unstable and full of unpredictable distortions compared to whatever was intended, and solving this problem is necessary for a good future (details), (2) Figuring out how AGIs will be deployed and what they’ll be used for in a complicated competitive human world is also a problem that needs to be solved to get a good future. I don’t think either of these problems is close to being solved, or that they’re likely to be solved “by default”.
(I’m familiar with your argument that companies are incentivized to solve single-single alignment, and therefore it will be solved “by default”, but I remain pessimistic, at least in the development scenario I’m thinking about, again see here.)
So I think (1) and (2) are both very important things that people should be working on right now. However, I think I might have some intelligent things to say about (1), whereas I have nothing intelligent to say about (2). So that’s the main reason I’m working on (1). :-P I do wish you & others luck—and I’ve said that before, see e.g. section 10 here. :-)