I likewise thought the post would consist of people trying increasingly sophisticated approaches and always failing because of messy implementation details.
In e.g. lab experiments, you get to control the experimental setup and painstakingly optimize it to conform to whatever idealized conditions your equations are adapted for. Something similar is often done in industry: we try to screen away the messiness, either by transforming the environments our technologies are deployed in (roads for cars), or by making the technology’s performance ignore rather than adapt to the messiness (planes ignoring the ground conditions entirely). I expected the point of the exercise to be showing what it looks like when you’re exposed to reality’s raw messiness unprotected, even in an experimental setup as conceptually simple and well-understood as that.
And with “do it on the first try” on top...
But it sounds like there was a non-negligible success rate? That’s a positive surprise for me.
(Although I guess Robert’s trick is kind of “screening away the messiness”, in that he gets to ignore the ramp’s complicated mechanics and just grab the only bit of data he needs. Kinda interested what the actual success rate on this workshop was and what strategies the winners tried. @johnswentworth?)
Kinda interested what the actual success rate on this workshop was and what strategies the winners tried.
Success rate is ~5-15%. Half of that is people who basically get lucky—the most notable such occasion was someone who did the simplest possible calculation, but dropped a factor of 2 at one point, and that just happened to work perfectly with that day’s ramp setup.
Estimating the ball’s speed from video is the main predictor of success; people who’ve done that have something like a 50% success rate (n=4 IIRC). So people do still fail using that approach—for instance, I had one group take the speed they estimated from the video, and the speed they estimated from the energy calculation, and average them together, basically as a compromise between two people within the group. Another had the general right idea but just didn’t execute very well.
Notably, the ball does consistently land in the same spot, so if one executes the right strategy well then basically-zero luck is required.
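(For concreteness, here is one way a dropped factor of 2 can “happen to work”. The thread doesn’t say where the factor was lost; this sketch assumes it was dropped in the energy-conservation step, with h standing for the ramp’s vertical drop. Both of those are my assumptions, not details from the workshop.)

```latex
% Illustrative sketch only (assumed setup, not from the thread):
% the "simplest possible calculation" read as energy conservation,
% with h = the ramp's vertical drop.
\[
  mgh = \tfrac{1}{2}mv^2 \;\Rightarrow\; v_{\text{naive}} = \sqrt{2gh},
  \qquad v_{\text{slip}} = \sqrt{gh} \approx 0.71\,\sqrt{2gh}.
\]
% A solid ball rolling without slipping also carries rotational kinetic energy:
\[
  mgh = \tfrac{1}{2}mv^2
        + \tfrac{1}{2}\bigl(\tfrac{2}{5}mr^2\bigr)\bigl(\tfrac{v}{r}\bigr)^2
  \;\Rightarrow\; v_{\text{roll}} = \sqrt{\tfrac{10}{7}gh} \approx 0.85\,\sqrt{2gh}.
\]
```

On that reading, the algebra slip lands below even the ideal rolling-ball speed, so if the flexible ramp dissipates a further chunk of the energy, the mistaken number can coincidentally match the real exit speed.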
I expected the point of the exercise to be showing what it looks like when you’re exposed to reality’s raw messiness unprotected, even in an experimental setup as conceptually simple and well-understood as that.
Yup, that is indeed the point.
for instance, I had one group take the speed they estimated from the video, and the speed they estimated from the energy calculation, and average them together, basically as a compromise between two people within the group
… Which is a whole different lesson: if you only care about betting odds, then feel free to average together mutually incompatible distributions reflecting mutually exclusive world-models. If you care about planning, then you actually have to decide which model is right, or else plan carefully for either outcome.
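(A toy illustration of that last point, with made-up numbers rather than anything from the workshop: suppose the two estimation methods imply two different landing spots, and you have one cup to place.)

```python
# Toy numbers, not from the workshop: two mutually exclusive models of where
# the ball lands, in metres from the table edge.
LANDING_VIDEO = 0.9    # implied by the speed estimated from the video
LANDING_ENERGY = 1.5   # implied by the idealized energy calculation
P_VIDEO, P_ENERGY = 0.5, 0.5   # credence split between the two models
CUP_RADIUS = 0.05              # assume the cup catches anything within 5 cm

def catch_prob(cup_position: float) -> float:
    """Chance the cup catches the ball under the 50/50 mixture of models."""
    hits_video = abs(cup_position - LANDING_VIDEO) <= CUP_RADIUS
    hits_energy = abs(cup_position - LANDING_ENERGY) <= CUP_RADIUS
    return P_VIDEO * hits_video + P_ENERGY * hits_energy

print(catch_prob(1.2))            # 0.0  (the "averaged" plan fails under both models)
print(catch_prob(LANDING_VIDEO))  # 0.5  (committing to one model gives a real shot)
```

The 50/50 mixture is a perfectly good betting distribution, but the single cup placement that maximizes it sits at one of the two modes, never at their average.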
Kinda. Part of the lesson here is that only the velocity vector on ramp exit matters. At these speeds air resistance is negligible. The problem subdivides (sketch below).
But the other part is that you had to measure it separately from the complex part: the actual flexible plastic ramp someone built. Forget doing it on paper, or having a 30-year accelerator ramp moratorium.
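(A minimal sketch of that subdivision, with made-up numbers: once you have the exit velocity vector and the height of the ramp’s lip, however messy the ramp itself is, the landing spot is plain drag-free projectile motion.)

```python
import math

G = 9.81  # m/s^2

def landing_distance(exit_speed: float, exit_angle_deg: float, lip_height: float) -> float:
    """Horizontal distance from the ramp's lip to where the ball hits the floor,
    treating the flight as drag-free projectile motion (air resistance neglected)."""
    angle = math.radians(exit_angle_deg)
    vx = exit_speed * math.cos(angle)
    vy = exit_speed * math.sin(angle)  # positive = angled upward at the lip
    # Time to reach the floor: solve lip_height + vy*t - 0.5*G*t^2 = 0, positive root.
    t = (vy + math.sqrt(vy ** 2 + 2 * G * lip_height)) / G
    return vx * t

# Made-up inputs: 2.0 m/s exit speed (e.g. read off the video), horizontal launch,
# 0.8 m from the lip to the floor.
print(round(landing_distance(2.0, 0.0, 0.8), 3))  # ~0.808 m
```

All the ramp’s flex and friction matter only through what they do to that exit velocity, which is exactly what reading the speed off the video captures directly.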