I’ve curated this post. Here are the reasons that were salient to me:
The post clearly communicated (to me) some subtle intuitions about what useful research looks like when you’re deeply confused about basic principles.
The question “How would I invent calculus if I didn’t know what calculus was and I was trying to build a plane?”, together with the search for very specific, basic problems that cut to the heart of what we’d be confused about in that situation (like how to fire a cannonball such that it forever orbits the Earth), points to a particular type of thinking that I can see being applied to understanding intelligence, potentially yielding incredibly valuable insights.
I had previously been much more confused about why folks around here have been asking many questions about logical uncertainty and such, and this makes it much clearer to me.
As usual, well-written dialogues like this are very easy and fun to read (which, for me, largely compensates for the length of the post).
Quick thoughts on further work that could be useful:
The post gives me a category of ‘is missing some fundamental math like calculus’ that applies to rocket alignment and AI alignment. I would be interested in some examples of how to look at a problem and write down a simple story of how it works, including:
some positive and negative examples—situations where it works, situations where it doesn’t
what steps in particular seem to fail on the latter
what signs say that the math should exist (e.g. some things are just complicated, with few simple models predicting them; I expect a bunch of microbiology looks more like this than, say, physics).
I would also be interested to read historical case studies of this kind of thing happening and what work led to progress: Newton, Shannon, and others.