We might develop schemes for auditable computation, where one party can come in at any time and check the other party’s logs. These logs should conform both to the source code that the second party is supposed to be running, and to any observable behavior that the second party has displayed. It’s probably possible to have logging and behavioral signalling be sufficiently rich that the first party can be convinced that that code is indeed being run (without it being too hard to check—maybe with some kind of probabilistically checkable proof).
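A minimal sketch of what a tamper-evident log like this might look like, assuming a simple hash-chained append-only structure (all function names and log entries here are hypothetical, for illustration only):

```python
import hashlib

def chain_hash(prev_digest: str, entry: str) -> str:
    """Digest committing to both the previous digest and the new entry."""
    return hashlib.sha256((prev_digest + entry).encode()).hexdigest()

def append_entry(log: list, entry: str) -> None:
    """The audited party appends an entry, chaining it to the current log head."""
    prev = log[-1][1] if log else ""
    log.append((entry, chain_hash(prev, entry)))

def audit(log: list) -> bool:
    """An auditor recomputes the chain; tampering with any entry breaks
    every digest after it."""
    prev = ""
    for entry, digest in log:
        if chain_hash(prev, entry) != digest:
            return False
        prev = digest
    return True

log = []
for step in ["load model v3", "run batch 0", "run batch 1"]:
    append_entry(log, step)
assert audit(log)

# Retroactively rewriting history is detectable.
log[1] = ("run secret batch", log[1][1])
assert not audit(log)
```

This only makes the log tamper-evident after the fact; convincing the auditor that the logged code is what actually ran is the harder problem the paragraph gestures at with probabilistically checkable proofs.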
However, this only provides a positive proof that certain code is being run, not a negative proof that no other code is being run at the same time. This part, I think, inherently requires knowing something about the other party’s computational resources. But if you can know about those, then it might be possible. For a perhaps dystopian example, if you know your counterparty has compute A, and the program you want them to run takes compute B, then you could demand they do something difficult but easily checkable, like inverting hash functions, that will soak up around A−B of their compute, so they have nothing left over to do anything secret with.
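The “difficult but easily checkable” property is the same asymmetry that proof-of-work schemes rely on. A toy sketch, assuming a partial-hash-preimage puzzle (the difficulty parameter and names are illustrative, not a real protocol):

```python
import hashlib

def solve(challenge: bytes, difficulty: int) -> int:
    """Expensive for the prover: brute-force a nonce whose SHA-256 hash
    starts with `difficulty` zero hex digits (~16**difficulty hashes)."""
    nonce = 0
    while not hashlib.sha256(challenge + nonce.to_bytes(8, "big")) \
            .hexdigest().startswith("0" * difficulty):
        nonce += 1
    return nonce

def verify(challenge: bytes, difficulty: int, nonce: int) -> bool:
    """Cheap for the verifier: a single hash evaluation."""
    return hashlib.sha256(challenge + nonce.to_bytes(8, "big")) \
        .hexdigest().startswith("0" * difficulty)

nonce = solve(b"counterparty-epoch-17", 4)
assert verify(b"counterparty-epoch-17", 4, nonce)
```

In the dystopian scheme above, the first party would tune the difficulty so that solving puzzles consumes roughly A−B of the counterparty’s compute, then spot-check the cheap verifications.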
Can the agent just mute their capabilities when they do this computation? There are very slick ways to speed up computation and likewise slick ways to slow it down. The agent could, say, mess up cache coherency in its hardware, store data types differently, or ignore the outputs of some of its compute; or it could run faster than the other agent expects by devising a faster algorithm, or by using hardware-level optimizations that exploit physics the other agent hasn’t thought of.
Secondly, how would an agent convince another to run expensive code that takes up their entire compute? If you were some nation in medieval Europe, and an adjacent nation demanded that every able-bodied person enter a triathlon to measure the nation’s net strength, would any sane leader agree to that?
Yup, all that would certainly make it more complicated. In a regime where this kind of tightly-controlled delegation were really important, we might also demand our counterparties standardize their hardware so they can’t play tricks like this.
I was picturing a more power-asymmetric situation, more like a feudal lord giving his vassals lots of busywork so they don’t have time to plot anything.
Wouldn’t that also leave them pretty vulnerable?
In the soaking-up-extra-compute case? Yeah, for sure. I can only really picture it (a) on a very short-term basis, for example maybe while linking up tightly for important negotiations (but even here, not very likely). Or (b) in a situation with high power asymmetry. For example maybe there’s a story where ‘lords’ delegate work to their ‘vassals’, but the workload intensity is variable, so the vassals have leftover compute, and the lords demand that they spend it on something like blockchain mining. To compensate for the vulnerability this induces, the lords would also provide protection.