we urgently need to distill huge amounts of educational content. I don’t know with what weapons sequences 2 will be fought, but sequences 3 will be fought with knowledge tracing, machine teaching, online courses like Brilliant, inline exercises, play-money prediction markets, etc.
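(for the curious: the core of classic knowledge tracing is a small Bayesian update. here's a minimal sketch of Bayesian Knowledge Tracing for a single skill — the parameter values are made up for illustration, not fit to any data:)

```python
# Bayesian Knowledge Tracing, single-skill toy example.
# p_know: current estimate that the learner has mastered the skill.
# p_slip: chance of a wrong answer despite mastery; p_guess: chance of a
# correct answer without mastery; p_learn: chance of acquiring the skill
# at each practice opportunity. all values here are illustrative guesses.
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    if correct:
        obs = p_know * (1 - p_slip)
        posterior = obs / (obs + (1 - p_know) * p_guess)
    else:
        obs = p_know * p_slip
        posterior = obs / (obs + (1 - p_know) * (1 - p_guess))
    # learning transition: the learner may have picked up the skill this step
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior mastery estimate
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

the system then shows exercises for whichever skills have the lowest mastery estimate — that's the whole trick behind "the course knows what you don't."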
the first time around, it was limited to eliezer’s knowledge—and he made severe mistakes because he didn’t see neural networks coming. now it almost seems like we need to write an intro to epistemics for a wide variety of audiences, including AIs—it’s time to actually write clearly enough to raise the sanity waterline.
I think current ai-aided clarification tools are approaching the maturity needed to do this. compressing human knowledge into coherent, concise descriptions that a wide variety of people will intuitively understand is a very tall order, and inevitably some of the material will need to be presented in a different order; it’s very hard to learn the shapes of mathematical objects without grounding them in the physical reality they’re derived from with geometry, then showing how everything turns into linear algebra, causal epistemics, predictive grounding, category theory, etc. the task of building a brain in your brain requires a manual for how to be the best kind of intelligence we know how to build, and it has been my view for years that we need to distill these difficult topics into tools for interactively exploring the mathematical space of implications.
Could lesswrong have online MOOC participation groups? I’d certainly join one if it was designed to be adhd goofball friendly.
I don’t know what you should build next, but I know we should be looking toward a future where the process is managed by ai.
AI is eating the world, fast. many of us think the physical limit on the rate of idea generation will be hit any year now. I don’t think it makes sense to focus less on ai, but it would help a lot to build tools for clarifying thoughts. what if the editor had an abstractive-summarizer button that helped you write more concisely? a common complaint I hear about this site is that people write too much, and as a person who writes too much, I sure would love a machine’s help deciding which words are unnecessary.
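(a real "make this shorter" button would sit on top of an abstractive language model, but even a dumb extractive pass shows the shape of the feature. here's a toy sketch that scores sentences by word frequency and keeps the top few — every name and parameter here is made up for illustration:)

```python
# Toy extractive trimmer: rank sentences by average word frequency,
# keep the highest-scoring ones in their original order.
import re
from collections import Counter

def trim(text, keep=2):
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(s):
        toks = re.findall(r'\w+', s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    ranked = set(sorted(sentences, key=score, reverse=True)[:keep])
    # re-emit survivors in the order they originally appeared
    return ' '.join(s for s in sentences if s in ranked)
```

an actual editor feature would want a model that rewrites rather than deletes, plus a diff view so the author keeps final say — but the interaction loop is the same.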
Ultimately, there is no path to rationality that does not walk through the structure of intelligent systems.
https://www.lesswrong.com/posts/ryx4WseB5bEm65DWB/six-months-of-rose is essentially something like that.