The big three:
Scientific progress across a wide variety of fields is primarily bottlenecked on the lack of a general theory of adaptive systems (i.e. embedded agency)
Economic progress across a wide variety of industries is primarily bottlenecked on coordination problems, so large economic profits primarily flow to people/companies who solve coordination problems at scale
Personally, my own relative advantage in solving technical problems increases with the difficulty of the problem, across a wide variety of domains
A few sub-big-ideas:
When we don’t understand something, it doesn’t look confusing—it looks like random noise
If our main goal is to build gears-level models, then the central problem of probability/statistics is not predicting future data but rather model testing/comparison (a minimal sketch of what this looks like in code follows this list)
Everyday problem-solving is extremely high-dimensional and subject to the 80/20 rule, so we need methods that outperform babble-and-prune: constraints and slackness, chunking/abstraction, asymptotics and symmetry, etc.
Cognitive tools, their relation to declarative mathematics/programming, and the importance of increasing the range of things we consider “easy”
Everything is causality plus symmetry (everything is computation and computation is causality plus symmetry)
Cox’s Theorem as a unifying explanation for different interpretations of probability, and a tool for recognizing situations where probability-isomorphic computation is likely to show up (a compressed statement follows the list). Also, by extension, the idea that there are many interpretations of probability, most of which have not yet been discovered.
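To make the model-comparison point concrete, here is a minimal sketch. The coin-flip data and the two models are hypothetical, chosen only because the marginal likelihood has a closed form here; this is one standard way to compare hypotheses rather than predict the next data point, not the only way:

```python
import math

# Hypothetical data: one specific sequence of 100 flips with 63 heads.
n, k = 100, 63

# Model 1: fair coin. Probability of this exact sequence is 0.5^n.
log_like_fair = n * math.log(0.5)

# Model 2: unknown bias p with a uniform prior. Marginal likelihood of
# the exact sequence integrates p out:
#   int_0^1 p^k (1-p)^(n-k) dp = Beta(k+1, n-k+1)
# (the binomial coefficient cancels because both models score the same
# specific sequence).
log_marg_biased = (
    math.lgamma(k + 1) + math.lgamma(n - k + 1) - math.lgamma(n + 2)
)

# A positive log Bayes factor favors the biased-coin model; it comes
# out mildly positive on this data. The point is that this quantity
# compares the hypotheses themselves, not predictions of flip 101.
log_bf = log_marg_biased - log_like_fair
print(f"log Bayes factor (biased vs fair): {log_bf:.2f}")
```

The quantity computed is a statement about which gears-level hypothesis the data supports, which is a different question from (and for model-building, a more useful one than) forecasting the next flip.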
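For reference, a compressed statement of the theorem in its usual Cox/Jaynes form (the exact regularity conditions vary by presentation):

```latex
% Cox's desiderata, informally: plausibility (A | C) is a real number,
% and composite plausibilities are consistent functions of their parts:
%   (A \wedge B \mid C) = F\big((A \mid C),\ (B \mid A \wedge C)\big)
%   (\neg A \mid C)     = S\big((A \mid C)\big)
% Under mild regularity conditions, any such system can be rescaled
% into a function p obeying the usual product and sum rules:
p(A \wedge B \mid C) = p(A \mid C)\, p(B \mid A \wedge C),
\qquad
p(A \mid C) + p(\neg A \mid C) = 1.
```

So anything computing with real-valued degrees of plausibility under those consistency constraints is doing probability, whatever interpretation it started from; that is what makes the theorem a detector for probability-isomorphic computation.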
Regarding economic progress:
Solving the coordination problem at scale seems related to my musing (though not new; there is a large literature) about firms, and particularly large corporations. Many big corporations seem better modeled as markets themselves rather than as market participants. That seems like it would have significant implications for both standard economic modeling and policy analysis. It goes back to Coase’s old article “The Nature of the Firm.”
Given the availability of technology, and how that technology should have (and has) reduced costs, why are so many developing countries still “developing”? How much of that might be driven more by culture than by cost, access to trade partners, investment, financing, or any number of other standard economic explanations?
What we don’t understand looks like random noise: perfect encryption should also look exactly like random noise. Is that perhaps why the universe seems so empty of other intelligent life? Clearly there are other explanations for why we might fail to identify such signals (e.g., syntax, grammar, and encoding so alien that we are unable to see a pattern, or signal pollution and interference from all the electromagnetic sources in the universe), but how could we differentiate?
I would say that perfect encryption is a great example of something we don’t understand which looks like noise: at first it looks totally random, but if someone hands you a short key, suddenly it becomes obvious that the “noise” is highly systematic. That’s understanding. The problem is that achieving understanding is not always computationally tractable.
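Here is a toy version of exactly this point, sketched under the assumption that a keyed hash-chain stream cipher is good enough for illustration (it is not real cryptography, and every name in it is made up): the ciphertext looks statistically flat until you have the key.

```python
import hashlib
from collections import Counter
from math import log2

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: expand the key by hashing key + counter.
    # Illustrative only -- not secure.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def entropy_per_byte(data: bytes) -> float:
    # Empirical Shannon entropy; ~8 bits/byte means "looks like noise".
    counts = Counter(data)
    return -sum(c / len(data) * log2(c / len(data)) for c in counts.values())

plaintext = b"a highly systematic message, repeated. " * 200
key = b"short key"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

print(f"plaintext entropy:  {entropy_per_byte(plaintext):.2f} bits/byte")
print(f"ciphertext entropy: {entropy_per_byte(ciphertext):.2f} bits/byte")

# Without the key, the ciphertext passes this crude noise test.
# With the key, the structure is immediately recoverable:
assert xor(ciphertext, keystream(key, len(ciphertext))) == plaintext
```

The ciphertext lands near the 8-bits/byte ceiling on this crude test while the plaintext sits far below it; the same bytes flip from “noise” to “obviously systematic” the instant the key is in hand, which is the tractability point exactly.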
> Economic progress across a wide variety of industries is primarily bottlenecked on coordination problems, so large economic profits primarily flow to people/companies who solve coordination problems at scale
Upstream of that: setting the ontology that allows interoperability (a.k.a. computer interface design) = the largest companies in the world. Hell, you can throw a GUI on IRC and get billions of dollars. That’s how early in the game things are.
Have you read any of Cosma Shalizi’s stuff on computational mechanics? Seems very related to your interests.
I had not seen that, thank you.