I think you’re right that the incentive structure around AI safety matters for getting those doing the work to do it as well as they can. There might be something to be said for the suggestion of moving to cash payment over equity, but I think that needs a lot more development.
For instance, if everyone is paid up front for the work of protecting the world from some future AI takeover, then their current financial position is no longer tied to that future. That might not produce any better results than equity that could decline in value down the road.
Ultimately the goal has to be that those able to do the work, and actually doing it, have a pretty tight personal stake in the future state of AI. It might even be the case that such alignment of the research effort is only loosely connected to compensation.
Additionally, as you note, it’s not only about the incentives of those working to limit a bad outcome for humanity from AGI, but also about the incentives for the companies as a whole. Here I think the discussion regarding AI liabilities and insurance might matter more, which also opens up a whole question about corporate law. Years ago, before the 1930s, banking law held bank shareholders to double liability for losses from bank failures, to make them better at managing risk with other people’s money. That seems to have been a special case that didn’t apply to other businesses, even those largely owned by outsiders. Perhaps making those who control AI development and deployment, or are largely the ones financing the efforts, personally responsible might be a better incentive structure.
All of these are difficult to work through to get a good, and fair, structure in place. I don’t think any one approach will ultimately be the solution, but some combination of them might be. But I also think it’s a given that some risk will always remain, so figuring out just what level of risk is acceptable is also needed, and problematic in its own way.