I think you’re mostly just denying the premise here.
I’m arguing that the premise is itself unrealistic, yes. If you assume something false, then you can ‘prove’ anything.
It’s like saying “assume you had a Halting oracle”. That can be a useful tool in some cases, e.g. to show that even with something unrealistic you still can’t do X, but that’s about it.
On the other hand, we can’t use any encoding we please, since we need the 1850 physicists to be able to decode it.
Fair. That being said, you can still do significantly better than 520 bits.
So what kind of encoding could we write today that would let a hypothesized superintelligent AI communicate as much value to us as possible in as few bits as possible?
I mean, we already kind of have a general answer here from Kolmogorov complexity / Minimum Description Length / friends: “Give me a Binary Lambda Calculus[1] program that, when run, gives us as much value as possible.” (The encoding itself is sketched below.)
Don’t split into multiple separate queries. Figure out your risk level in # of bits, and ask for a single program of that length.
This requires no more than O(1) additional bits beyond the optimum for any possible output[2].
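(For reference, this is just the invariance theorem from Kolmogorov complexity: for any two universal machines $U$ and $V$ there is a constant $c_{U,V}$, depending only on the machines and not on the output, such that $K_U(x) \le K_V(x) + c_{U,V}$ for all $x$. Fixing BLC as the reference machine therefore costs at most a constant number of extra bits compared to any other encoding we could have chosen.)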
Of course, said O(1) potentially hides a large constant, but empirically BLC is fairly compact. There’s still a few hundred bits of overhead, however, which isn’t great.
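For concreteness, here’s a minimal sketch of the term encoding itself (Python; the type names are my own invention, but the bit-level scheme is the standard one from [1]): an abstraction costs 2 bits, an application costs 2 bits, and a variable with 1-based de Bruijn index i costs i+1 bits.

```python
# A minimal sketch of the standard Binary Lambda Calculus encoding
# (Tromp's scheme). Lambda terms in de Bruijn form serialize as:
#   abstraction  ->  00 <body>
#   application  ->  01 <function> <argument>
#   variable i   ->  i ones followed by a zero (1-based de Bruijn index)
from dataclasses import dataclass
from typing import Union

@dataclass
class Var:           # de Bruijn index; 1 = innermost enclosing binder
    index: int

@dataclass
class Lam:           # abstraction (binds one variable)
    body: "Term"

@dataclass
class App:           # application of fn to arg
    fn: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]

def encode(t: Term) -> str:
    """Serialize a de Bruijn lambda term to its BLC bit string."""
    if isinstance(t, Var):
        return "1" * t.index + "0"
    if isinstance(t, Lam):
        return "00" + encode(t.body)
    return "01" + encode(t.fn) + encode(t.arg)

# The identity function λx.x is just 4 bits:
assert encode(Lam(Var(1))) == "0010"
# Church-encoded "true", λx.λy.x, is 7 bits:
assert encode(Lam(Lam(Var(2)))) == "0000110"
```

(Tromp’s self-interpreter for this format is, IIRC, around 200 bits, which is roughly the overhead mentioned above.)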
[1] https://web.archive.org/web/20161019165606/https://en.wikipedia.org/wiki/Binary_lambda_calculus
The deletionists have won :-(