I can’t imagine any application where taking the sine of a dimensionful quantity would be useful.
Machine learning methods will go right ahead and apply whatever collection of functions they’re given in whatever way works to get empirically accurate predictions from the data. E.g. add the patient’s temperature to their pulse rate and divide by the cotangent of their age in decades, or whatever.
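A minimal sketch of that point, using synthetic data and a scikit-learn linear model (both illustrative choices of mine, not anything from the original discussion): if a dimensionally meaningless engineered feature happens to correlate with the target, the model will exploit it without complaint.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic patients: temperature (deg C), pulse (bpm), age (years).
temp = rng.normal(37.0, 0.7, n)
pulse = rng.normal(75.0, 12.0, n)
age = rng.uniform(20.0, 90.0, n)

# The dimensionally meaningless feature from the comment:
# (temperature + pulse) divided by the cotangent of age in decades,
# i.e. multiplied by tan(age / 10).
nonsense = (temp + pulse) * np.tan(age / 10.0)

# Pretend the target secretly depends on this nonsense combination
# (purely to illustrate that the fit goes through regardless of units).
target = 0.5 * nonsense + rng.normal(0.0, 1.0, n)

X = nonsense.reshape(-1, 1)
model = LinearRegression().fit(X, target)
print("R^2 on the nonsense feature:", model.score(X, target))
```

Nothing in the fitting procedure knows or cares that degrees Celsius were added to beats per minute; all it sees is a column of numbers that predicts the target.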
So it can certainly be useful. Whether it is meaningful is another matter, and touches on this conundrum again. What and whence is “understanding” in an AGI?
Eliezer wrote somewhere about hypothetically being able to deduce general relativity from seeing an apple fall. What sort of mechanism could do that? Where might it get the idea that adding temperature to pulse may be useful for making empirical predictions, but useless for “understanding what is happening”, and what does that quoted phrase mean, in terms that one could program into an AGI?