Programs you can examine without running are measure zero.
If you know of a proof of that, then I believe it, but it has no relevance to my argument, because programmers do not choose programs at random from the space of possible programs: they very tightly limit their attention to those prospective programs that make their job (of ensuring that the program has the properties they want it to have) as easy as possible.
I am not a mathematician, but a sketch of a proof would go like this: a program can be mapped to a string of symbols, and almost all random strings are known to be incompressible (in the Kolmogorov sense). A syntactically valid program in a given language ought to be mappable to a string, one valid syntactic statement at a time. Thus a random syntactically valid program maps to a random string and so is incompressible.
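The intuition can be made concrete with an ordinary compressor standing in for Kolmogorov complexity (a rough proxy, not the real thing): program-like text, being highly regular, compresses far better than random bytes of the same length. A minimal sketch:

```python
import os
import zlib

# Program-like text: highly regular, so a general-purpose
# compressor (zlib here, as a crude stand-in for Kolmogorov
# complexity) shrinks it dramatically.
structured = b"def f(x):\n    return x + 1\n" * 100

# Random bytes of the same length: essentially incompressible,
# so the compressed size stays close to the original.
random_bytes = os.urandom(len(structured))

ratio_structured = len(zlib.compress(structured)) / len(structured)
ratio_random = len(zlib.compress(random_bytes)) / len(random_bytes)

print(f"structured: {ratio_structured:.3f}")  # well below 1.0
print(f"random:     {ratio_random:.3f}")      # near (or above) 1.0
```

Of course zlib only bounds Kolmogorov complexity from above, so this shows the gap exists, not its exact size.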
> programmers do not choose programs at random from the space of possible programs: they very tightly limit their attention to those prospective programs that make their job (of ensuring that the program has the properties they want it to have) as easy as possible.
Indeed we do. However, hitting a measure-zero set is not easy, and any deviation from it lands you back in the poorly compressible or incompressible space. Hence the bugs that pervade all code without exception, bugs you can only find by actually running the code. The ambitious program of writing only correct code (e.g. https://dl.acm.org/doi/10.1145/800027.808459) remains an elusive goal, probably because the aim is not achievable, though good software development techniques can certainly reduce the odds of a program taking off in unintended and incompressible directions.
Often a comment thread will wander to a topic that has no bearing on the OP. Has that happened here?
Does your most recent comment have any relevance to how much hope we humans should put in the fact that an AI cannot know for sure whether its sensory data has been faked?