Robin, for some odd reason, it seems that a lot of fields just analyze the abstractions they need for their own business, rather than the ones you would need to analyze a self-improving AI.
I don’t know if anyone has previously asked whether natural selection runs into a law of diminishing returns. But I observe that the human brain is only four times as large as a chimp brain, not a thousand times as large. And that most of the architecture seems to be the same; but I’m not deep enough into that field to know whether someone has tried to determine whether there are a lot more genes involved. I do know that brain-related genes were under stronger positive selection in the hominid line, but not so much stronger as to imply that e.g. a thousand times as much selection pressure went into producing human brains from chimp brains as went into producing chimp brains in the first place. This is good enough to carry my point.
I’m not picking on endogenous growth, just using it as an example. I wouldn’t be at all surprised to find that it’s a fine theory. It’s just that, so far as I can tell, there’s some math tacked on that isn’t actually used for anything, but provides a causal “good story” that doesn’t actually sound all that good if you happen to study idea generation on a more direct basis. I’m just using it to make the point—it’s not enough for an abstraction to fit the data, to be “verified”. One should actually be aware of how the data is constraining the abstraction. The recombinant growth notion is an example of an abstraction that fits, but isn’t constrained. And this is a general problem in futurism.
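(To make that concrete with an illustration of my own, not necessarily the specific model Robin has in mind: a standard Romer/Jones-style ideas production function can be written as

\[
  % Illustrative Romer/Jones-style form; the symbols A, H_A, delta, lambda, phi
  % are my labels for this sketch, not something taken from Robin's post.
  \dot{A} \;=\; \delta\, H_A^{\lambda}\, A^{\phi}
\]

where A is the stock of ideas, H_A is research effort, and delta, lambda, phi are free parameters. Nearly all of the "how do ideas scale" content lives in the exponent phi; and so far as I can tell, the historical aggregate data leaves a wide range of values for phi looking about equally plausible. That is the sense in which the abstraction fits without being much constrained.)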
If you’re going to start criticizing the strength of abstractions, you should criticize your own abstractions as well. How constrained are they by the data, really? Is there more than one reasonable abstraction that fits the same data?
Talking about what a field uses as “standard” doesn’t seem like a satisfying response. Leaving aside that this is also the plea of those whose financial models don’t permit real estate prices to go down—“it’s industry standard, everyone is doing it”—what’s standard in one field may not be standard in another, and you should be careful when turning an old standard to a new purpose. Sticking with standard endogenous growth models would be one matter if you wanted to just look at a human economy investing a usual fraction of money in R&D; and another matter entirely if your real interest and major concern was how ideas scale in principle, for the sake of doing new calculations on what happens when you can buy research more cheaply.
There’s no free lunch in futurism—no simple rule you can follow to make sure that your own preferred abstractions will automatically come out on top.