It’s probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere is trying to build an AI without solving friendliness, that person will probably finish before someone who’s trying to build a friendly AI.
I can only infer what you were saying here, but it seems likely that, roughly speaking, I approve of it. It is the sort of thing people don't consider rationally; instead they go with the default reaction that fits a broad class of related ideas.