In AI software, we have to define an output type, e.g. a chatbot can generate text but not videos. Doesn't this limit the danger of AIs? For example, if we build a classifier that estimates the probability of a given X-ray being abnormal, we know it can only provide numbers for doctors to take into consideration; it still doesn't have the authority to decide the patient's treatment. Does this mean we can continue working on such software safely?
A narrow system like this is indeed relatively safe on its own, but even if you only work on an AI that tells doctors whether someone has cancer, other people will still build an AGI.