One of the most common objections I’ve seen is that we’re too far from getting AGI to know what AGI will be like, so we can’t productively work on the problem without making a lot of conjunctive assumptions—e.g. see this post.