Agreed code as coordination mechanism
Code nowadays can do lots of things, from buying items to controlling machines. This makes code a possible coordination mechanism: if you can get multiple people to agree on what code should run in particular scenarios, that code can take actions on behalf of those people that might otherwise need to be coordinated.
This would require moving away from the “one person commits code and another person reviews it” model.
This could start with many people reviewing the code; people could write their own test suites against it, or AI agents could be deputised to review it (when that becomes feasible). Only when an agreed-upon number of people approve the code should it be merged into the main system.
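As a rough sketch of that quorum rule (the `Review` type, its fields, and the `merge_allowed` function are hypothetical, just to make the rule concrete):

```python
# Minimal sketch: a change is only mergeable once an agreed-upon number of
# distinct reviewers approve it, each with their own test set passing.
from dataclasses import dataclass

@dataclass(frozen=True)
class Review:
    reviewer: str
    approved: bool       # True if the reviewer signed off on the change
    tests_passed: bool   # True if the reviewer's own test suite passed

def merge_allowed(reviews: list[Review], quorum: int) -> bool:
    """Return True when enough distinct reviewers approve with passing tests."""
    approvers = {r.reviewer for r in reviews if r.approved and r.tests_passed}
    return len(approvers) >= quorum

if __name__ == "__main__":
    reviews = [
        Review("alice", approved=True, tests_passed=True),
        Review("bob", approved=True, tests_passed=True),
        Review("carol", approved=False, tests_passed=True),
    ]
    print(merge_allowed(reviews, quorum=2))  # True: two distinct approvals
```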
Code would be deployed automatically using GitOps, and the people administering the servers would be audited to make sure they couldn’t interfere with the running system without anyone noticing.
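A minimal sketch of what that audit might check: the servers should only ever be running the commit at the head of the agreed-upon branch, and any drift is flagged. This assumes the deployed commit hash can be read from somewhere on the servers; the file path below is made up.

```python
# Sketch of a drift audit: compare the commit the servers are running against
# the head of the agreed-upon branch. Mismatches are drift that administrators
# would have to explain.
import subprocess

def agreed_commit(branch: str = "main") -> str:
    """Commit hash at the head of the agreed-upon branch in the shared repo."""
    out = subprocess.run(
        ["git", "rev-parse", f"origin/{branch}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def deployed_commit() -> str:
    """Placeholder: read the commit hash the servers actually run (hypothetical path)."""
    with open("/var/run/agreed-system/deployed_commit") as f:
        return f.read().strip()

def audit() -> bool:
    """Return True if the running system matches the agreed-upon code."""
    expected, actual = agreed_commit(), deployed_commit()
    if expected != actual:
        print(f"DRIFT: deployed {actual} != agreed {expected}")
        return False
    return True
```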
Code could replace regulation in fast-moving areas like AI. There might have to be legal contracts saying that you can’t deploy the agreed-upon code, or use it on its own, outside of the coordination mechanism.
Can you give a concrete example of a situation where you’d expect this sort of agreed-upon-by-multiple-parties code to be run, and what that code would be responsible for doing? I’m imagining something along the lines of “given a geographic boundary, determine which jurisdictions that boundary intersects for the purposes of various types of tax (sales, property, etc)”. But I don’t know if that’s wildly off from what you’re imagining.
Looks like someone has worked on this kind of thing for different reasons: https://www.worlddriven.org/
I was thinking that evals controlling the deployment of LLMs could be something that needs multiple stakeholders to agree upon (a rough sketch is below).
But really it is a general-purpose pattern.
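A rough sketch of the eval-gating case, assuming each stakeholder supplies their own eval suite and a minimum passing score (all names here are hypothetical, not from any existing eval framework):

```python
# Multi-stakeholder eval gating: a model is only cleared for deployment when
# every stakeholder's eval suite scores at or above that stakeholder's threshold.
from typing import Callable, Dict

EvalSuite = Callable[[str], float]   # model_id -> score in [0, 1]

def clear_for_deployment(
    model_id: str,
    stakeholder_evals: Dict[str, EvalSuite],
    thresholds: Dict[str, float],
) -> bool:
    """Deploy only if every stakeholder's eval score meets their threshold."""
    for stakeholder, run_eval in stakeholder_evals.items():
        score = run_eval(model_id)
        if score < thresholds[stakeholder]:
            print(f"{stakeholder}: {score:.2f} < {thresholds[stakeholder]:.2f}, blocked")
            return False
    return True

if __name__ == "__main__":
    # Dummy stakeholder evals standing in for real eval suites.
    evals = {"safety_team": lambda m: 0.93, "legal": lambda m: 0.88}
    thresholds = {"safety_team": 0.90, "legal": 0.85}
    print(clear_for_deployment("model-x", evals, thresholds))  # True
```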