I am not sure you are taking into account the possibility that an intelligence may yield optimal performance only within a specific resource range. Would a human mind given a 10x increase in memory (and memories) operate even marginally better? Or would it be overwhelmed by an amount of information it was not prepared for? Similarly, would a human mind even be able to operate given half its computational resources? Comparing mind A at 40 bits / 1 trillion FPO with mind B at 80 bits / 2 trillion FPO may come down to how many resources happen to be available, since we don't have any data points about how much each would yield given the other's resources.
So perhaps the trendy term "scalability" might be one dimension of the intelligence metric you seek: can a mind take advantage of additional resources if they are made available? I suspect that an intelligence A whose output scales roughly linearly up and down (down to some minimum) could be considered superior to an intelligence B that yields a higher optimization output at one specific resource level but is unable to scale up or down.
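To make the comparison concrete, here is a minimal toy sketch of what I mean. The two curves below are entirely made up for illustration (the function names, the minimum-viable budget, the design point, and the slopes are all invented numbers, not measurements of any real system); they just show how a mind with a lower peak but linear scaling can dominate a higher-peak, non-scaling mind everywhere except at the latter's design point.

```python
# Hypothetical "optimization output" as a function of resource budget.
# All numbers are illustrative assumptions, not data.

def output_a(resources: float) -> float:
    """Mind A: scales roughly linearly above a minimum viable budget."""
    minimum = 0.5  # assumed: below this budget, A cannot operate at all
    return 0.0 if resources < minimum else 1.0 * resources

def output_b(resources: float) -> float:
    """Mind B: higher peak output, but only near its design point."""
    design_point = 1.0  # assumed resource level B was built for
    peak = 1.5          # assumed peak output at that design point
    # Output falls off sharply when B has more or fewer resources than designed for.
    return max(0.0, peak - 3.0 * abs(resources - design_point))

if __name__ == "__main__":
    budgets = [0.5, 1.0, 2.0, 4.0]  # resource budgets, arbitrary units
    for r in budgets:
        print(f"resources={r:>4}:  A={output_a(r):5.2f}  B={output_b(r):5.2f}")
    # At r=1.0, B wins (1.5 vs 1.0); at every other budget A wins,
    # which is the sense in which A's scalability might make it "superior".
```

Whether A really is superior then depends on how much weight the metric gives to performance across the whole resource range versus performance at the single point B happens to be tuned for.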