This framework reminded me of this quote from Bret Victor’s talk “The Humane Representation of Thought” (timestamp included in link).
I’ve transcribed it approximately here (with some styling and small corrections to make it easier to read).
“There are many things, especially kind of modern things that we need to talk about nowadays which are not well-represented in spoken language.
One of those is systems. We live in an era of systems:
Natural systems:
Environment
Ecosystems
Biological systems
Pathological systems
etc
Systems that we make:
Political
Economic
Infrastructural systems
Things we make out of concrete, metal, electronics.
etc
The wrong way to understand a system is to talk about it, to describe it.
The right way to understand a system is to get in there, model it and explore it. You can’t do that in words.
What we have is that people are using very old tools, explaining and convincing through reasoning and rhetoric instead of these newer tools of evidence and explorable models. We want a medium that supports that.”
A quick sketch of how the RAIN Framework approximately maps to Bret’s model of a good communication medium
Rigor: Quick sketch, not exhaustive. To start a conversation.
Epistemic Status: Moderate. I think Bret Victor has many good insights, but I haven’t done extensive research to check whether the cognitive science literature supports his claims.
Explorable Models
Accessibility
Availability
No big difference from text: most texts are accessible these days, and explorable models can be too, on the internet. However, making native, high-performance explorable models that run on all platforms is trickier. WebAssembly will hopefully improve this in the future.
Understandability
Well-designed explorable models of systems could deliver some insights much faster than reading about those systems could. For example, the invention of mathematical notation was a big step forward in expressing some things more effectively than words could, although with a steep learning curve for many people. However, I can also imagine badly designed explorable models that fail to guide you toward specific insights, so it’s important to compare high-quality texts with high-quality explorable models.
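To make “explorable” concrete, here is a minimal sketch of what exploring a system model could feel like in code: a toy SIR-style epidemic model (my own illustrative example, not from Victor’s talk) where sweeping a single parameter immediately shows how outcomes change.

```python
def simulate_sir(beta, gamma=0.1, days=160, s0=0.99, i0=0.01):
    """Toy SIR epidemic model: returns the peak infected fraction.

    beta  -- transmission rate (the parameter we "explore")
    gamma -- recovery rate
    """
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# "Exploring" the system: vary the transmission rate and watch the outcome.
for beta in (0.15, 0.3, 0.6):
    print(f"beta={beta}: peak infected fraction = {simulate_sir(beta):.2f}")
```

In a real explorable model the loop would be a slider and the print a live chart, but even this script conveys the system’s nonlinearity faster than a paragraph describing it could.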
Compactness
Explorable models can be dynamic, so they could, for example, be personalized to specific audiences. Instead of writing multiple texts for multiple audiences, one model could be made that adapts its parameters to fit each audience. Personalization could tie insights more closely to your current needs and motivations, making you more likely to pursue challenging but valuable information.
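As a toy illustration of this point (the function names and audience labels are my own invention), one underlying model can render differently depending on an audience parameter:

```python
def compound_growth(principal, rate, years):
    """One underlying model: compound interest."""
    return principal * (1 + rate) ** years

def explain(principal, rate, years, audience="general"):
    """Render the same model for different audiences."""
    amount = compound_growth(principal, rate, years)
    if audience == "expert":
        # Experts get the formula and full precision.
        return f"A = P(1+r)^t = {principal}*(1+{rate})^{years} = {amount:.2f}"
    # A general audience gets a plain-language summary.
    return (f"{principal:.0f} invested at {rate:.0%} grows to "
            f"about {amount:.0f} in {years} years.")

print(explain(1000, 0.05, 10))
print(explain(1000, 0.05, 10, audience="expert"))
```

Rather than writing two texts, both readings come from one model, so improving the model updates every audience’s version at once.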
Enjoyability
Explorable models could engage more senses than the symbolic-visual channel alone. Personalization probably increases enjoyability too.
Evidence
Robustness
Robustness could be supported by system features that provide verifiability and more openness about bias and noise, e.g. making models open source for scrutiny. Explorable models could also let you explore the underlying data and source code, not only the output program.
Importance
If we had explorable models of moral uncertainty that brought together a diversity of intrinsic values representing the cultures of the world, that would (to my knowledge) be the closest we could come to finding “evidence” of what is most important.
Effective Altruism organizations, such as the Global Priorities Institute, could use explorable models to make their information and evidence more accessible and easier to give feedback on, and thereby further improve their judgment of what is most important.
I’m curious what you think about Bret Victor’s claims. How big could the effect be if people used explorable models more? The technology to make authoring these kinds of models cheap isn’t here yet, so if we consider tradeoffs, writing might still be the better option. I’m personally more excited about building skills to make explorable models than about perfecting my writing, but maybe I’m overestimating their value. I used to read a lot on LessWrong, but these days I often find it hard to choose which articles are worth my reading time, perhaps largely because of the enjoyability and compactness factors. Then again, maybe I’m letting my vision of how good information intake could be irrationally demotivate me from reading and writing texts in the standard LessWrong format.