This comment consists solely of a different take* on the material of the OP, and contains no errors or corrections.
[*Difference not guaranteed, all footnotes are embedded, this comment is very long, ‘future additions, warnings and alterations to attributes such as epistemic status may or may not occur’, all...]
Contents:
Take 1
Take 2
Take 3
(The response to (parts of) each take is in three parts: a, b, and c. [This is the best part, so stop after there if you’re bored.])
Exercise
Questions that may overlap with ‘How to build an exo-brain?’
[I am not answering these questions. Don’t get your hopes down, bury them in the Himalayas. (This is an idiom variant, literal burial of physical objects in the Himalayas may be illegal.)]
Take 1:
a.
Sometimes “just checking” is infeasible to do at such a small scale.
Or what is feasible at small scale isn’t particularly usable, though large scale coordination could enable cheap experiments.
b.
When you find science insufficient for the task, applied rationality can help you make good decisions using information you already have.
I feel like this is re-defining what science is, to not include things that seem like they fall under it.
c.
Compressed into a single sentence, applied rationality fills the gaps of science in the pursuit of truth.
I might have called science [a] pursuit of truth, though distinguishing between different implementations/manifestations of it may be useful, like a group pursuing knowledge, versus an individual. (Though if they’re using similar/compatible formats, then possibly:
The individual can apply the current knowledge from the group, and the results of the group’s experiments.
A bunch of individuals performing experiments and publishing can be the same as a group, only missing aggregation.
An individual can merge data/knowledge from a group with their own. (Similar to how, with the right licence, open source programs may be borrowed from, and improved upon by companies internally, but without improving the original source or returning these versions to the ‘open’ pool.))
Take 2:
a.
Crucially, you have situationally bad advisors. When there is a tiger running at you at full speed, it is vital that you don’t consult your explicit reasoning advisor.
Crucially, you have ‘slow’ advisors, who can’t be consulted quickly. (And presumably fast advisors as well.)
While you may remember part of a book, or a skill you’ve gained, things/skills you don’t remember can’t be used with speed, even if you know where to find them given time.
While it may be quick to determine whether a car is going to hit you while crossing a street, it may take longer to determine whether such a collision would kill you—longer than it would take the car to collide, or not collide, with you.
b.
I claim that similarly to the imaginary monarch making decisions, most of the work that goes into making good decisions is choosing
which sources of information to listen to. This problem is complicated by the fact that some sources of information are easier to query than others, but this problem is surmountable.
Most of the work that goes into making good decisions is choosing:
How long to take making decisions, and when to revisit them.*
Which advisors to consult in that time.
Managing the council. This can include:
Managing disagreements between council members
Changing the composition—firing councilors, hiring new ones (and seeing existing members grow, etc.)
*Including how long to take to make a decision. A problem which takes less time to resolve (to the desired degree) than expected is no issue, but a problem that takes longer may require revisiting how long should be spent on it (if it is important/revisiting how important it is).
c.
Compressed into a single sentence, applied rationality is the skill of being able to select the proper sources of information during decision-making.
As phrased this addresses 2.b (above), though I’d stress both the short term and the long term.
Take 3:
You look at all the possible options
There are a lot of options. This is why 2.b focused on time. Unfortunately the phrase “optimal stopping” already seems to be taken, and refers to a very different (apparent) framing of the hiring problem. Even if you have all the information on all applicants, you still have to decide who to hire, and hire them before someone else does! (Which is what motivates deciding immediately after seeing each applicant, in the more common framing. A hybrid approach might be better—have a favorite food, look at a few options, or create a record so results aren’t a waste.)
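For reference, the more common framing mentioned above (sometimes called the secretary problem) has a well-known rule: pass on roughly the first 37% of applicants, then take the first one better than everything seen so far. A minimal simulation sketch (function and parameter names are my own, not from the OP):

```python
import random

def secretary_sim(n=100, trials=10_000, explore_frac=0.37):
    """Estimate how often the 'look at ~n/e, then leap' rule
    hires the single best of n applicants."""
    cutoff = int(n * explore_frac)
    best_hired = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)  # n-1 is the best applicant
        best_seen = max(ranks[:cutoff])     # observe-only phase
        hired = ranks[-1]                   # forced to take the last one
        for r in ranks[cutoff:]:
            if r > best_seen:               # first applicant beating the sample
                hired = r
                break
        if hired == n - 1:
            best_hired += 1
    return best_hired / trials
```

Running this gives a success rate of roughly 37%, matching the classic 1/e result—which is exactly why the no-record, single-pass framing differs from the “you can see everyone first” framing discussed here.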
You decide that “rationality” is bunk and you should go with your intuition in the future.
This example might seem a bit contrived (and it is), but the general principle still holds.
So someone samples (tries) a thing once to determine if a method is good, but in applying the method doesn’t sample at all. Perhaps extracting general methods from existing advisors/across old and new potential advisors is the way to go.
If you think that being [X] doesn’t work because [Y], then [try Z : X taking Y into account].
Compressed into a single sentence, applied rationality is a system of heuristics/techniques/tricks/tools that helps you increase your values, with no particular restriction on what the heuristics/techniques/tricks/tools are allowed to be.
That is very different from how I thought this was going to go. Try anything*, see what works, while keeping constraints in mind. This seems like good advice (though long term and short term might be important to ‘balance’). The continuity assumption is interesting:
Don’t consider points (the system as it is), but adapt it to your needs/etc**.
*The approach from the example/story seems to revolve around having a council and trying out adding one new councilor at a time.
**The amount of time till the restaurant closes may be less than the time till you’ll be painfully hungry.
Exercise:
An exercise for the engaged reader is to find a friend and explain to them what applied rationality is to you.
I didn’t see this coming. I do see writing as something to practice, and examining others’ ideas “critically” is a start on that.
But I think what I’ve written above is a start for explaining what it means to me. Beyond that...
I might have a better explanation at the end of this “month”, these 30 days or so.
This topic also relates to a number of things:
A) A blog/book that’s being written about “meta-rationality”(/the practice/s of rationality/science (and studying it)): https://meaningness.com/eggplant
B) Questions that may overlap with ‘How to build an exo-brain?’
How to store information (paper is one answer. But what works best?)
How to process information*
How to organize information (ontology)
How to use information (like finding new applications)
*a) You learn that not all organisms are mortal. You learn that sharks are mortal.
How do you ensure that facts like these that are related to each other, are tracked with/linked to each other?
b) You “know” that everything is/sharks are mortal. Someone says “sharks are immortal”.
How do you ensure that contradictions are noticed, rather than both held, and how do you resolve them?
(Example based on one from the replacing guilt series/sequence, that illustrated a more general, and useful, point.)
Thinking about the above, except with “information” replaced with other words like “questions” and “skills”:
Q:
Storing questions may be similar to storing information.
But while information may keep, questions are clearly incomplete. (They’re looking for answers.)
Overlaps with above.
Which questions are important, and how can one ensure that the answers survive?*
Skills:
Practice (and growth)
It’s not clear that this is a thing, or if it is, how it works. (See posts on Unlocking the Emotional Brain.)
Seems like a question about neuroscience, or ‘how can you ‘store’ a skill you have now, so it’s easier to re-learn/get back to where you are now (on some part of it, or the whole)?’*
This seems more applicable to skills you don’t have, and deciding which new ones to acquire/focus on.
*This question is also important for after one’s lifetime. [Both in relation to other people “after (your) death”, and possible future de-cryo scenarios.]