With the advent of Sydney and now this, I’m becoming more inclined to believe that AI Safety and policies related to it are very close to being in the Overton window of most intellectuals (I wouldn’t say the general public, yet). Like, maybe within a year, more than 60% of academic researchers will have heard of AI Safety. I don’t feel confident whatsoever about the claim, but it now seems more than ~20% likely. Does this seem like a reach?
I was watching an interview with that NYT reporter who had the newsworthy Bing chat interaction, and he used some language that made me think he’d searched for people talking about Bing chat and read Evan’s post or a direct derivative of it.
Basically yes, I’d say that AI safety is in fact in the Overton window. What I see as the problem is more that a bunch of other stupid stuff is also in the Overton window.
One can hope, although I see very little evidence for it.
Most of the evidence I see is an educated and very intelligent person writing about AI (not their field), and when reading it I could easily have been a chemist reading about how the four basic elements make it abundantly clear that bla bla, you get the point.
And I don’t even know how to respond to that; the ontology on display is just fundamentally wrong, and tackling it feels like trying to explain differential equations to my 8-year-old daughter (to the point where she groks it).
There is also the problem of engaging such a person: it’s very easy to end up alienating them and just cementing their thinking.
That doesn’t mean I think it is not worth doing, but it’s not some casual off-the-cuff thing.
This is a pretty common problem. If anyone ever needs to explain AI safety to someone, with minimal risk of messing up, I think that giving them pages 137-149 from Toby Ord’s The Precipice is the best approach. It’s simple, one shot, and does everything right.