Unfortunately for this scheme, I would expect rendering time for AI videos to eventually be faster than real time. So, as the post implies, even if we had a reasonably good way to prove posteriority, this may not suffice to certify videos as “non-AI” for long.
On the other hand, as long as rendering AI videos is slower than real time, proof of priority alone might go a long way. You can often argue that prior to some point in time you couldn’t reasonably have known what kind of video you should fake.
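For concreteness, one standard way to establish priority is a hash commitment plus a public timestamp: publish a digest of the video before time T, and anyone holding the file later can verify it existed by T. A minimal sketch (my own illustration, not anything from the post; the file name is just an example):

```python
import hashlib

def commitment(video_path: str) -> str:
    """Return a SHA-256 digest of the video file. Publishing this digest
    at time T via any widely witnessed channel (timestamping service,
    newspaper, blockchain) later proves the exact file existed by T,
    without revealing the video itself."""
    h = hashlib.sha256()
    with open(video_path, "rb") as f:
        # Hash in 1 MiB chunks so large video files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: publish commitment("clip.mp4") somewhere public and timestamped;
# a verifier later recomputes the digest from the released video and checks it matches.
```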
The “analog requirement” reminds me of physical unclonable functions, which might have some cross-pollination with this issue. I couldn’t think of a way to make use of them here, but maybe someone else will.