I’ve had thoughts along similar lines, but I worry that there is no clear line between safer, narrower, less-useful, less-profitable AI and riskier, more-general, more-profitable AI. It seems like a really slippery slope, with a lot of motivation for the relevant actors to engage in motivated reasoning to rationalize their actions.