The global meaning of 1027, then, is given by those axioms.
What counts as an axiom? You might as well burn the written axioms instead of appraising their correctness. There are many ways of representing knowledge of an abstract fact, but those representations won’t themselves embody the fact. There is always an additional step where you have an interpretation in mind, so that the representation only matters as a reference to the fact through your interpretation, or as a reflection of that fact in a different form.
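As a concrete (and admittedly artificial) illustration, here is a minimal sketch in Lean 4, assuming its built-in Nat type: even inside a proof assistant, 1027 only appears as a formal term, and any fact about it is settled by derivation from the axioms; the proof object is itself just another representation.

```lean
-- A minimal sketch, assuming Lean 4's built-in Nat. The numeral 1027 is
-- notation for a natural-number term; the kernel can check that it is the
-- successor of 1026. The resulting proof term is a representation of the
-- fact, meaningful only under our interpretation of the axioms.
example : (1027 : Nat) = Nat.succ 1026 := rfl
```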
It might be useful to have a concrete representation, since it can serve as an element of a plan and be acted upon, while an abstract fact isn’t readily available for that. For example, if your calculator (or brain) declares that “12*12<150” is true, that verdict can be turned into action. 1027 items could be lined up in a field, so that you can appreciate the amount visually (or by running from one side to the other). Alternatively, a representation of a reasoning process can be checked for errors, yielding a more reliable conclusion. But you never reach the fact itself, with the rare exception of physical facts that are interesting in themselves rather than as tools for inferring or representing some other facts, physical or not (and then a moment passes, and you can only hold on to a memory).
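As a rough sketch of the calculator example (plain Python; the surrounding “plan” and its messages are invented for illustration): the machine produces a boolean, a physical state that refers to the arithmetic fact only through our interpretation, and it is this representation, not the fact, that enters the plan.

```python
# A minimal sketch of the calculator example above. The boolean computed
# here is a representation (a state of the machine), not the abstract
# fact "144 < 150"; it refers to that fact only under our interpretation.
verdict = 12 * 12 < 150  # the calculator's declaration: True

# The representation can serve as an element of a plan and be acted upon,
# which the abstract fact by itself cannot. (The actions here are invented
# for illustration.)
if verdict:
    print("Proceed: the product stays under the limit.")
else:
    print("Stop: the product exceeds the limit.")
```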
You can’t get 1027 itself out of an extrapolated volition procedure, or any other procedure. All you can get (or hold in your brain) is a representation, one that is only meaningful to the extent you expect it to be related to the answer.
Similarly, if you want to get information about morality, all you can get is an answer that would need to be further interpreted. As a special exception (one particularly relevant for morality and FAI), you can get the actual right actions performed, so that no further interpretation is necessary, but you still won’t have produced the idea of morality itself.