This is a very promising start on some thesis (that could go further into the theory of computation/sid mani content/https://lifeiscomputation.com/), but the “intelligence growth curves” are not very intuitive. I wager that dimensionality is more important than number of elements in determining intelligence growth curves and especially number of discontinuous jumps.
Why does F^4_65's intelligence peak out at such a low value at time 2040? Why does F_65537's intelligence peak out at a lower value than equal-dimensional fields with fewer elements in them?
At some point it may have to incorporate quality/diversity/taste, not just size.
Hm, I had kind of hoped that this post would be memory-holed/I could do a major rewrite/re-analysis, since I don’t really endorse it that much anymore.
I agree it's not very intuitive, and I'd want to re-run this with many, many more samples over different dimensionalities (not just ~10), and then calculate some statistics on the resulting data. (A little problem is that the version of the diamond-square algorithm I've written is O(2^d) in the number of dimensions, which means I can't collect that many samples of different spaces for high dimensions.)
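A minimal sketch of where that O(2^d) comes from (an assumption on my part, since the actual code isn't shown): a d-dimensional generalization of diamond-square has to look at the 2^d corners of each hypercube cell when it averages and displaces midpoints, so the per-cell work alone grows exponentially in d.

```python
# Hypothetical illustration: the corner count of a d-dimensional
# hypercube cell, which a d-dimensional diamond-square step would
# have to average over, grows as 2**d.
from itertools import product

def corner_count(d):
    """Number of corners of a d-dimensional hypercube cell."""
    return len(list(product((0, 1), repeat=d)))

for d in range(1, 11):
    print(d, corner_count(d))  # 1 2, 2 4, ..., 10 1024
```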
But this is also not ideal, because the values produced by the diamond-square algorithm are way too normally distributed: it's not the case that most algorithms are okay and some are better; most learning architectures/possible minds are absolute garbage, with a few great ones thrown in. The thing that IQ is measuring is probably lognormal, and the values looked at here aren't.
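A toy comparison of the two shapes (my illustration, not the post's code): under a lognormal, most samples sit near the median while a few land far out in the right tail, which matches the "most architectures are garbage, a few are great" picture much better than a normal does.

```python
# Toy illustration: a lognormal "mind quality" distribution has a much
# heavier right tail than a normal one, measured crudely by the ratio
# of the sample maximum to the sample median.
import random
import statistics

random.seed(0)
n = 100_000
normal = [random.gauss(0.0, 1.0) for _ in range(n)]
lognormal = [random.lognormvariate(0.0, 1.0) for _ in range(n)]

def tail_ratio(xs):
    """Ratio of the maximum to the median: a crude heavy-tail indicator."""
    return max(xs) / statistics.median(xs)

# Shift the normal samples so the median is positive and the ratio is meaningful.
print(tail_ratio([x + 10.0 for x in normal]))
print(tail_ratio(lognormal))
```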
But even that is not good, because using a metric-ish space instead of a tree-like structure is the wrong way of going about it. That would be a bit cumbersome to do, though, and I don't have steam for it, so I don't.
¯\_(ツ)_/¯
Why is the thing IQ is measuring mostly lognormal?