You’re not going to get useful results if you lump together people working on AGI and people working on AGI safety. Mixing AI work and AI safety in with AGI work and AGI safety will be similarly baffling.
These categories have very different implications.
Those working on AI and AGI think they’re building the most useful technology the world has ever seen.
Those working on AI safety think we have a new technology with dangers and opportunities, like every previous one. They’re probably pleased to be doing important work.
Those working on AGI safety think there’s a very good chance the world ends soon because of unpreventable human idiocy. That can really get them down.
I can’t tell which category you’re interested in, or whether you can distinguish these very different viewpoints in your data.