I agree that establishing a cooperative mindset in the AI / ML community is very important. I’m less sure whether economic incentives or government policy are a realistic way to get there. Can you think of a precedent or example of such external incentives in other areas?
Also, collaboration between the researchers who develop AI may be just one piece of the puzzle. You could still get military arms races between nations even if most researchers are collaborative. And if there are several AI systems, we also need to ensure cooperation between those AIs, which isn’t necessarily the same as cooperation between the researchers who build them.
> Can you think of a precedent or example of such external incentives in other areas?
Good question. I was somewhat inspired by civil engineering, where my understanding is that there is a rather strong culture of safety, driven in part by various historical accidents that killed a lot of people and caught the attention of regulators / insurers / etc. I don’t actually know how many of the resulting reforms came from external pressure vs. people simply shaping up and not wanting to kill more people. But given how easily good intentions can be neglected in the face of bad incentives (AFAIK, several historical accidents [e.g.] were known to be disasters waiting to happen well ahead of time), I would guess that external incentives and consequences played a major role in those reforms.
Neat paper, congrats!
(Btw I think you may have switched your notation from theta to x in section 5.)