There is some middle ground between thinking about it enough to have a real inside view and thinking about it just enough to have a better-than-average opinion. I think that better-than-average opinion would be something like “there’s a pretty good chance of AI becoming really dangerous in the not too distant future. We’re putting very little effort into making it safe, so it would probably be smarter to spend a lot more effort on that”.
I think that’s where you’d land after a little research, because that’s where I’m at after a whole bunch of research. The top minds on safety (that is, people who’ve actually thought about it, not just experts in other domains who run their mouths) disagree on a lot, but they almost universally agree on that much.
Edit: my point there is that you might do a modest amount of good with a very small investment of time. And I don’t think remaining willfully ignorant is going to make you happier about AI risk. Society at large is increasingly concerned, and we’ll only become more concerned as AI has a larger impact year by year. So you’re going to be stuck in those conversations anyway, with people pressuring you to be concerned. You might as well know something as be completely ignorant, particularly since it sounds like your current loose belief is that the risk might be very, very high.
Yes, my current personal default is just deferring to what mainstream non-EA/rat AI experts seem to be saying, which seems to be trending toward greater concern. I just prefer not to talk about it most of the time. :)
This makes sense, and it’s an unusual conclusion.