I fear that measuring modifications is like measuring a moving target. I suspect it will be very hard to account for all possible modifications, and many AIs may blend into each other under large modifications. It's also not clear how hard some modifications will be without actually carrying them out.
Why not fix a target instead, and measure the inputs needed (e.g. FLOPs, memory, time) to achieve it?
I'm working on this topic too; I'll PM you.
Also, feel free to reach out if the topic is of interest.
Yes, it's still unclear how to measure modification magnitude in general (or whether that's even possible in a principled way). But for modifications limited to text, you could use the entropy of the text, which seems to me like a fairly reasonable and somewhat fundamental measure (in the information-theoretic sense). Thank you for the references in your other comment; I'll make sure to give them a read!
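To make the entropy idea concrete, here's a minimal sketch (the function names are just illustrative) of one crude way to score a text-only modification: its total information content in bits under a character-level empirical distribution. In practice, cross-entropy under a language model would be a more principled estimate, but the empirical version shows the shape of the idea:

```python
import math
from collections import Counter

def empirical_entropy_bits(text: str) -> float:
    """Shannon entropy (bits per character) of text under its own
    character-level empirical distribution."""
    n = len(text)
    if n == 0:
        return 0.0
    counts = Counter(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def modification_magnitude(added_text: str) -> float:
    """Crude proxy for the 'size' of a text-only modification:
    entropy rate times length, i.e. total bits of information."""
    return empirical_entropy_bits(added_text) * len(added_text)

# A repetitive edit carries almost no information; a varied one carries more.
print(modification_magnitude("aaaa"))            # 0.0 bits
print(modification_magnitude("always be kind"))  # > 0 bits
```

Under this kind of measure, a long, low-redundancy insertion counts as a larger modification than a short or repetitive one, which matches the intuition that magnitude should track how much information the change injects.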