You mean like a literature review, but aimed at people entirely new to the field? If so, Yes. If not, probably also yes, but I’ll hold off on committing until I understand what I’m committing to.
instrumental rationality
No. Just kidding, of course it’s a Yes.
Personally, I think that changing the world is a multi-armed bandit problem, and that EA has been too narrow on the explore side of the explore/exploit tradeoff, in part due to the importance/tractability/neglectedness heuristic. (And I can translate that sentence into English if the jargon is a bit much.)
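To make the bandit framing concrete, here is a minimal, purely illustrative Python sketch of an epsilon-greedy strategy: most pulls go to the arm (read: cause area) with the best observed average, while a fixed fraction go to exploring the others. The function name and the made-up Gaussian reward distributions are hypothetical, just there to show the explore/exploit dial.

```python
import random

def epsilon_greedy_bandit(arms, pulls=1000, epsilon=0.1):
    """Toy epsilon-greedy multi-armed bandit.

    `arms` is a list of callables returning a stochastic reward.
    With probability `epsilon` (or while any arm is untried) we explore
    a random arm; otherwise we exploit the best observed average.
    """
    counts = [0] * len(arms)
    totals = [0.0] * len(arms)
    for _ in range(pulls):
        if random.random() < epsilon or 0 in counts:
            i = random.randrange(len(arms))      # explore
        else:
            averages = [t / c for t, c in zip(totals, counts)]
            i = averages.index(max(averages))    # exploit
        counts[i] += 1
        totals[i] += arms[i]()
    return counts, totals

# Three hypothetical "causes" with unknown payoff distributions.
arms = [lambda: random.gauss(1.0, 1.0),
        lambda: random.gauss(1.2, 3.0),
        lambda: random.gauss(0.8, 0.5)]
counts, _ = epsilon_greedy_bandit(arms)
print(counts)  # how often each arm ended up being pulled
```

Turning `epsilon` up corresponds to spending more effort exploring new cause areas; my claim above is roughly that the community's effective epsilon has been too low.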
I would like to see LW explore science, philosophy, and the world with an eye toward uncovering new things which are potentially big and important. (Hence, I’m a fan of the Future of Humanity Institute, the Foundational Research Institute, Principia Qualia, etc.) I suspect that in the next couple of decades we are likely to uncover multiple things as important as, or more important than, AI takeoff scenarios, and the more we uncover the better.
Within the topics you mentioned, I’m particularly curious about:
Mathematics: mathematical infinities and whether infinite utility might be attainable in our universe.
Physics: condensed matter physics and entropy, with an eye toward engineering materials which will survive long into the heat death of the universe.
Computer science: Everything seems to be built on simple binary Boolean logic, but DNA obviously uses base 4. (There are 4 bases: A, C, G, T.) So I’m particularly interested in base-3 logic, many-valued logic, fuzzy logic, etc. I suspect these may have applications to quantum computers or novel architectures, where the physics doesn’t like to give you simple Boolean operators, but more complex operators are easier to implement, if not to understand.
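As one standard example of what many-valued logic looks like in practice, here is a short Python sketch of Kleene's strong three-valued logic, where a third value (Unknown, represented as None) propagates through the connectives. The helper names are my own, and nothing here is specific to DNA, quantum hardware, or any particular architecture; it's just an illustration of truth values beyond True/False.

```python
# Kleene's strong three-valued logic: True, False, and Unknown (None).

def k3_not(a):
    return None if a is None else (not a)

def k3_and(a, b):
    if a is False or b is False:   # a definite False dominates
        return False
    if a is None or b is None:     # otherwise Unknown propagates
        return None
    return True

def k3_or(a, b):
    if a is True or b is True:     # a definite True dominates
        return True
    if a is None or b is None:
        return None
    return False

# Print the AND truth table over the three values.
values = [True, None, False]
for a in values:
    for b in values:
        print(f"{a!s:>5} AND {b!s:>5} = {k3_and(a, b)}")
```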
Yes, yes, and yes.