(completely ignores the point in favour of nitpicking)
Why is Robert’s final gut-instinct adjustment to move the cup further away, rather than closer? One of the potential gotchas (= crucial mechanical details one may overlook) here seems to be the following:
Pure ballistic calculations give you the distance the ball would travel before hitting the ground, under the assumption of an unobstructed fall. But for certain (trajectory, cup height) pairs, the ball may instead hit the outer edge of the cup on the way down, knocking the cup over or bouncing off it. I assume that wasn’t the problem in the given setup, but I can see the potential for an instinctive last-minute correction based on intuiting that (and so wanting the cup a bit closer).
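A minimal sketch of that gotcha, assuming a horizontal launch and made-up numbers (nothing below is taken from the post): the cup’s mouth sits at rim height rather than at floor level, so the naive floor-landing distance puts the cup slightly past the point where the ball has descended to the rim.

```python
import math

G = 9.81  # m/s^2

def distance_at_height(v0, h_launch, y):
    """Horizontal distance at which a ball launched horizontally at speed v0
    from height h_launch has descended to height y (y = 0 gives the
    unobstructed floor-landing distance of the pure ballistic calculation)."""
    t = math.sqrt(2 * (h_launch - y) / G)
    return v0 * t

# Illustrative numbers only: 2 m/s launch off a 1 m table into a 12 cm tall cup.
v0, h_launch, cup_height = 2.0, 1.0, 0.12

d_floor = distance_at_height(v0, h_launch, 0.0)        # naive cup placement
d_rim = distance_at_height(v0, h_launch, cup_height)   # where it crosses rim height

print(f"floor-landing distance: {d_floor:.3f} m")
print(f"crosses rim height at:  {d_rim:.3f} m")
# The second number is smaller: the cup's mouth should go there, i.e. a bit
# closer, or the ball clips the cup's outer edge on the way down.
```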
What’s the intuition for moving the cup further away, though? I don’t… really see what intuitable detail you could miss here that would lead to overshooting.
(Also, Robert’s trick with reading launch speed off the video seems to ignore the “anything too close to an end-to-end run is discouraged” condition, which makes it feel not that impressive. If we map the whole thing to AGI Ruin, it’s like “we should let the AGI FOOM, then cut the power just before it starts killing everyone”.)
Very engaging overall. I think this —
Why does this feel wrong… I guess I just don’t believe that the physics problems online are definitely talking about the thing I’m looking at.
— is a particularly important bit. When problem-solving in some novel domain/regime you don’t have experience in, it’s crucial to ensure you’re modeling the specific problem you’re dealing with, in as much explicit detail as possible, rather than sneaking in heuristics/assumptions/cognitive shortcuts. After all, the latter, by definition, could only have been formed from experience in familiar domains — and therefore there’s no reason at all to think they still apply.
In AI Risk, it applies very broadly:
To figuring out whether AI Risk is real by considering the actual arguments, versus evaluating it based on vibes/credentials of those arguing for it/fancy yet invalid Outside View arguments/etc.
To predicting the future, and humanity’s future behavior in unprecedented situations (see Zvi’s recent post).
To modeling AIs. E.g., preferring black-boxy thinking to building mechanistic models of training-under-SGD is what leads to pitfalls like mistaking reward for the optimization target, or taking the “simulators” framing so literally that you assume AGI-level generative world-models won’t plot to kill you.
I was in one of these workshops! I should’ve adjusted the cup to be closer because
our kinetic energy equations added terms to adjust for stuff, which generally meant lowering the amount of KE by the time the ball hit the ground, thus requiring a closer cup.
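A back-of-the-envelope sketch of why loss terms like that pull the cup closer, with made-up numbers; the comment doesn’t say what the added terms actually were, so the `loss_fraction` below is a placeholder (a rolling ball’s rotational energy is used as one plausible example), not the workshop’s actual equations.

```python
import math

G = 9.81  # m/s^2

def launch_speed(drop_height, loss_fraction=0.0):
    """Speed at the end of the ramp from energy conservation:
    (1 - loss_fraction) * m*g*h = (1/2) * m * v^2.
    loss_fraction lumps together whatever the correction terms account for
    (rotation, friction, ...) -- purely a placeholder."""
    return math.sqrt(2 * G * drop_height * (1.0 - loss_fraction))

def landing_distance(v0, table_height):
    """Horizontal distance for a horizontal launch from table_height."""
    return v0 * math.sqrt(2 * table_height / G)

# Illustrative numbers only: a 20 cm ramp drop, launching off a 1 m high table.
drop, table = 0.20, 1.0

d_ideal = landing_distance(launch_speed(drop), table)
# A rolling solid sphere carries 2/7 of its energy as rotation, so only 5/7
# of m*g*h ends up as translational KE at launch.
d_lossy = landing_distance(launch_speed(drop, loss_fraction=2 / 7), table)

print(f"no-loss distance:   {d_ideal:.3f} m")
print(f"with-loss distance: {d_lossy:.3f} m  -> cup goes a bit closer")
```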