AI safety also covers unintended consequences of non-sentient systems, and that ambiguity creates confusion in the discussion. I've been using "AGI x-risk" as a clumsy way to point at what I'm trying to research. "Artificial intention research" points at the same thing, but without broadcasting conclusions as part of the name.
Dropping the "artificial intelligence" part seems questionable, though, as does adopting the same abbreviation, "AI", for both. So I'd suggest AI intention research, AII. Wait, never mind :). Other ideas?