Is this something you’ve changed your mind on recently, or have I just misunderstood your previous stance? I don’t know if it would be polite to dig up old Discord quotes, but my understanding was that you expected most architectural differences to vanish in the limit, and that convolutions, RNNs, and the like would have held up fine with only minor tweaks to remove scaling bottlenecks.
I bring this up because the stance I thought you held seems to agree with Paul’s, whereas now you seem to disagree with him.