Which argument, if you accept it, inevitably implies that the safest and most moral pivotal act is to solve the alignment problem, solve AI, and then post both solutions to GitHub within the same short span of a few days?