I like this dichotomy. I’ve been saying for a while that I don’t think “companies that only commercialise existing models and don’t do anything that pushes forward the frontier” meaningfully increase x-risk. That’s a long and unwieldy statement—I prefer “AI product companies” as a shorthand.
For a concrete example, I think that working on AI capabilities as an upskilling method for alignment is a bad idea, but working on AI products as an upskilling method for alignment would be fine.