Robin, I just read through that paper. Unless I missed something, you do not discuss, or even mention as a possibility, the effect of having minds around that are faster than human. You just have the supply of em labor getting cheaper over time, driven by Moore’s Law treated as an exogenous growth factor. Do you see why I might not think that this model was even remotely on the right track?
So… to what degree would you call the abstractions in your model standard and vetted?
How many new assumptions, exactly, are fatal? How many new terms are you allowed to introduce into an old equation before it becomes “unvetted”, a “new abstraction”?
And if I devised a model that was no more different from the standard, one that departed by no more additional assumptions than this one does, but that described the effect of faster researchers, would it be just as good in your eyes?
Because there’s a very simple and obvious model of what happens when your researchers obey Moore’s Law, which makes even fewer new assumptions, and adds fewer terms to the equations...
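A minimal sketch of that contrast, under assumed functional forms rather than anything spelled out in the paper: let C(t) be hardware capability and k a growth constant. With Moore’s Law exogenous, C grows exponentially; if the researchers driving Moore’s Law themselves run on that hardware, so that progress per unit time also scales with C, the same equation turns hyperbolic.

\[
\frac{dC}{dt} = kC \;\Longrightarrow\; C(t) = C_0\, e^{kt} \qquad \text{(exogenous Moore's Law)}
\]
\[
\frac{dC}{dt} = kC \cdot C = kC^2 \;\Longrightarrow\; C(t) = \frac{C_0}{1 - kC_0 t} \qquad \text{(researchers speed up with the hardware)}
\]

The second solution diverges at t = 1/(kC_0): a single extra feedback term is the whole difference between steady exponential growth and a finite-time blow-up.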
You understand that if we’re to have a standard that excludes some new ideas as being too easy to make up, then even if we grant this standard, it’s very important to ensure it is applied evenhandedly, and not just selectively to exclude models that arrive at the wrong conclusions. Only in that second case does it seem “obvious” that the new model is “unvetted”. Do you know the criterion—can you say it aloud for all to hear—that you use to determine whether a model is based on vetted abstractions?