Another possible problem is that the differently colored parts of the background would have to be large enough, and distinct enough in color, to preserve the information even after the screenshot is uploaded to different sites that run images through various lossy compression algorithms.
Well, this is a problem for my approach.
Let’s estimate useful screen size as 1200x1080 with 6 messages visible—that gives around 216K pixels per message. Then, according to [Remarks 1-18 on GPT](https://www.lesswrong.com/posts/7qSHKYRnqyrumEfbt/remarks-1-18-on-gpt-compressed), the input state takes at least log2(50257)*2048 ≈ 32K bits. If we use 16 distinct background colors (I believe there is a way to make a 16-color palette look nice), we get 4 bits of information per pixel, so the 32K bits split into about 8K 4-bit chunks, and each chunk can only be spread over about 216K * 4 / 32K ≈ 27 pixels. That is very little redundancy, so after compression it wouldn’t be easy to restore the original bits.
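The back-of-the-envelope arithmetic above can be checked directly (the screen size, message count, and palette size are my assumed numbers, not anything OpenAI has specified):

```python
import math

# Assumed parameters: GPT-3-style tokenizer and context window
VOCAB_SIZE = 50257
CONTEXT_TOKENS = 2048
# Assumed screen layout: 1200x1080 useful area, 6 messages visible
PIXELS_PER_MESSAGE = (1200 * 1080) // 6
# Assumed 16-color background palette -> 4 bits per pixel
BITS_PER_PIXEL = 4

# Minimum bits to encode the full input state
bits_needed = math.ceil(CONTEXT_TOKENS * math.log2(VOCAB_SIZE))

# Number of 4-bit chunks the state splits into
chunks = bits_needed / BITS_PER_PIXEL

# Pixels available per chunk (the redundancy budget)
pixels_per_chunk = PIXELS_PER_MESSAGE / chunks

print(f"{bits_needed} bits needed, ~{pixels_per_chunk:.1f} pixels per chunk")
```

So each 4-bit chunk gets roughly a 5x5-pixel patch to survive recompression, which is indeed tight.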
So, probably OpenAI could encode a hash of GPT’s input instead, which would require much less data. Though this would make it hard to prove that the prompt matches the screenshot...
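To illustrate how much the hash idea shrinks the payload (the token list below is a made-up placeholder, not a real GPT input):

```python
import hashlib

# Hypothetical: in reality this would be GPT's actual input token IDs
token_ids = [50256, 15496, 11, 995, 0]

# Serialize tokens and hash them; SHA-256 gives a fixed 256-bit digest
serialized = b"".join(t.to_bytes(4, "little") for t in token_ids)
digest = hashlib.sha256(serialized).digest()

# 256 bits vs ~32K bits for the full state: ~125x less data to embed
print(f"digest is {len(digest) * 8} bits")
```

The catch is exactly the one noted above: a hash only lets you verify a prompt you already have; you can't recover the prompt from the screenshot alone.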