How is the 93% calculated? Does it correct for multiple comparisons?
Given some outside knowledge—that these 6 choices are not unrelated, but come from an ordered space of choices—the result that one value is special while all the others produce identical results is implausible. I predict that it is a fluke.
No, but it can probably be dug out of Google Analytics. I’ll let the experiment finish first.
I’m not sure how exactly it is calculated. On what is apparently an official blog, the author says in a comment: “We do correct for multiple comparisons using the Bonferroni adjustment. We’ve looked into others, but they don’t offer that much more improvement over this conservative approach.”
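For what it's worth, the mechanics of a Bonferroni adjustment are simple: with 5 variant-vs-control comparisons, you divide the significance threshold by 5. A minimal sketch, using two-proportion z-tests and made-up conversion counts (not the experiment's actual numbers, and not necessarily the exact test the tool uses):

```python
from math import erf, sqrt

# Hypothetical (conversions, visitors) for 6 max-width variants --
# illustrative numbers only, not the real experiment's data.
variants = {
    "900px":  (520, 10000),
    "1000px": (500, 10000),
    "1100px": (495, 10000),
    "1200px": (498, 10000),
    "1300px": (580, 10000),
    "1400px": (505, 10000),
}

def z_test(c1, n1, c2, n2):
    """Two-proportion z-test; returns the two-sided p-value."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se if se else 0.0
    # Two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

control = "1000px"
others = [k for k in variants if k != control]
alpha = 0.05 / len(others)  # Bonferroni: split alpha over 5 comparisons

for name in others:
    p = z_test(*variants[control], *variants[name])
    verdict = "significant" if p < alpha else "not significant"
    print(f"{name}: p = {p:.4f} ({verdict} at alpha = {alpha:.3f})")
```

The point of the "conservative" label in the quote: Bonferroni controls the family-wise error rate by brute force, so with many variants it can easily fail to flag real differences.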
Yes, I’m finding the result odd. I really did expect some sort of inverted-V result where a medium-sized max-width was “just right”. Unfortunately, with a doubling of the sample size, the ordering remains pretty much the same: 1300px beats everything, with 900px passing 1200px and 1100px. I’m starting to wonder whether there are two distinct populations of users: maybe desktop users with wide screens on one hand and smartphone users on the other? That doesn’t quite make sense, since the phones should be setting their own width, but...
A bimodal distribution wouldn’t surprise me. What I don’t believe is a spike in the middle of a plain. If you had chosen increments of 200, the 1300 spike would have been completely invisible!
Do you know the size of your readers’ windows?