Perhaps you can comment on this opinion that “simpler models are always more likely” is false: http://www2.denizyuret.com/ref/domingos/www.cs.washington.edu/homes/pedrod/papers/dmkd99.pdf
That paper doesn’t seem to be arguing against Occam’s razor. Rather, it makes the more specific point that greater model complexity on the training data doesn’t necessarily mean worse generalization error. I didn’t read the whole article, so I can’t say whether its arguments hold up, but if you follow the procedure of updating your posteriors as new data arrive, the point seems moot. Besides, the complexity prior framework doesn’t make that claim at all.
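To spell out why updating makes the point moot, here is a rough sketch of how a complexity prior interacts with Bayesian updating; the particular description-length form of the prior is my own illustrative assumption, not something from the paper or the thread:

P(M \mid D) \propto P(D \mid M)\, P(M), \qquad P(M) \propto 2^{-L(M)},

where L(M) is the description length of model M. The prior initially favors simpler models, but as the data D accumulate the likelihood term P(D \mid M) dominates, so a more complex model that genuinely generalizes better will eventually overtake a simpler one. The framework only says simpler models are more probable a priori; it never claims they generalize better regardless of the evidence.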