outside that bubble people still don’t know or have confused ideas about how it’s dangerous, even among the group of people weird enough to work on AGI instead of more academically respectable, narrow AI.
I agree. I run a local AI Safety meetup, and it's frustrating to see that the people who best understand the concepts we discuss consider Safety far less interesting/important than AGI capabilities research. I remember someone saying something like: "Ok, this Safety thing is kind of interesting, but who would be interested in working on real AGI problems?" and the others nodding. Things they say:
“I’ll start an AGI research lab. When I feel we’re close enough to AGI I’ll consider Safety.”
“It’s difficult to do significant research on Safety without knowing a lot about AI in general.”