If you read further, you can see how this is also passing the recursive buck.
You: “There is no clear separation between objects; I only use this to increase my utility function”
Me: “How are you deciding where to stop dividing reality?”
You: “Well, I calculate my marginal utility from creating an additional concept and then Compare it to zer… ah, yeah, there is the recursive buck. It even got capitalized as I said it.”
So yeah, while this is a desirable point to stop, this method still relies on your ability to Differentiate between the usefulness of two models, and as far as I can tell, in the end, we can only feel it.
If you read the post I linked, it probably explains it better than I do—I’m just going off of my memory of the natural abstractions agenda. I think another aspect of it is that all sophisticated-enough minds will come up with the same natural abstractions, insofar as they’re natural.
In your example, you could get evidence that 0 and 1 voltages are natural abstractions in a toy setting (sketched in code after this list) by:
Training 100 neural networks to take the input voltages to a program and return the resulting output
Doing some mechanistic interpretability on them
Demonstrating that in every network, values below 2.5V are separated from values above 2.5V in some sense
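At toy scale, that experiment could look something like the sketch below. The NAND “program”, the 2.5V logic threshold, the held-high second input, the sklearn setup, and the crude separation metric are all assumptions of mine, not details from the comment:

```python
# Toy-scale sketch of the experiment above (my stand-ins, not the commenter's setup).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def program(v):
    """Toy 'program': NAND of two input voltages, using a 2.5V logic threshold."""
    bits = (v >= 2.5).astype(int)
    return np.where(bits[:, 0] & bits[:, 1], 0.0, 5.0)  # output voltage

def hidden_activations(net, X):
    """Recompute first-hidden-layer activations (matches activation='relu')."""
    return np.maximum(0.0, X @ net.coefs_[0] + net.intercepts_[0])

n_nets = 10                      # stand-in for the 100 networks in the comment
X_train = rng.uniform(0.0, 5.0, size=(2000, 2))
y_train = program(X_train)

# Probe inputs: sweep the first voltage, hold the second one "high" at 4.0V.
v = np.linspace(0.0, 5.0, 200)
X_probe = np.column_stack([v, np.full_like(v, 4.0)])
low, high = v < 2.5, v >= 2.5

separated = 0
for seed in range(n_nets):
    net = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                       max_iter=5000, random_state=seed).fit(X_train, y_train)
    H = hidden_activations(net, X_probe)
    # Crude "separation" check: the mean hidden states of the two voltage groups
    # sit far apart relative to the within-group spread.
    gap = np.linalg.norm(H[low].mean(axis=0) - H[high].mean(axis=0))
    spread = H[low].std() + H[high].std() + 1e-9
    separated += int(gap / spread > 1.0)

print(f"{separated}/{n_nets} networks separate <2.5V from >=2.5V internally")
```

A real version of the interpretability step would use proper tools rather than this mean-distance check; the point is only that the below-2.5V / above-2.5V split should show up in the learned internals across all the random seeds.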
Thanks for your answer, I will read the linked post. I said in the text that I’m going to try to convey the “process” in the comments, and I’ll try to do it now.
“all sophisticated-enough minds”
I think that the recursive buck is passed to the word “enough”. You need a stratification of minds by sophistication, and a cutoff for when they reach an acceptable level of sophistication.
I don’t think so; just say that as prediction accuracy approaches 100%, the likelihood that the mind will use the natural abstraction increases, or something like that.
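One way to make that continuous version of the claim precise (my formalization, not wording from the thread), writing $\epsilon$ for the mind’s prediction error and $A$ for a candidate natural abstraction:

$$\epsilon \mapsto \Pr\big(M \text{ uses } A \,\big|\, \operatorname{error}(M) \le \epsilon\big) \ \text{is non-increasing,}$$

so there is only a monotone trend, and no privileged “sophisticated enough” cutoff to locate.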