It doesn’t take a formal probability trance to chart a path through everyday life—it was in following the results
Couldn’t agree more. Execution is crucial.
I can come out of a probability trance with a perfect plan, an ideal path of least resistance through the space of possible worlds, but now I have to trick, bribe or force my messy, kludgy, evolved brain into actually executing the plan.
A recent story from my experience. I had (and still have) a plan involving a relatively large chunk of work, around a full-time month. Nothing challenging, just a ‘sit down and do it’ sort of thing. But for some reason my brain is unable to see how this chunk of work will benefit my genes, so it switches into procrastination mode when exposed to it. I tried to force myself to do it, but now I get an absolutely real feeling of ‘mental nausea’ every time I approach the task – yes, I literally want to hurl when I think about it.
For a non-evolved being, say an intelligently-designed robot, the execution part would be a non-issue – it gets a plan, it executes it as perfectly as it can, give or take some engineering inefficiencies. But for an evolved being trying to be rational, it’s an entirely different story.
If one had public metrics of success at rationality, the usual status seeking and embarrassment avoidance could encourage people to actually apply their skills.
Shouldn’t common-sense ‘success at life’ (money, status, free time, whatever) be the real metric of success at rationality? Shouldn’t a rationalist, as a General Intelligence, succeed over a non-rationalist in any chosen orderly environment, according to any chosen metric of success—including common metrics of that environment?
No.
1. If “general intelligence” is a binary classification, almost everyone is one. If it’s continuous, rationalist and non-rationalist humans are indistinguishable next to AIXI.
2. You don’t know what the rationalist is optimizing for. Rationalists may even be less likely to value common-sense success metrics.
3. Even if those are someone’s goals, growth in rationality involves tradeoffs (an investment of time, if nothing else) that are paid in the short term, and “the short term” may still be a long time.
4. Heck, if “rationality” is defined as anything other than “winning”, it might just not win for common-sense goals in some realistic environments.
5. People with the disposition to become rationalists may also tend not to be as naturally good at some things, like gaining status.
Point-by-point:
1. Agreed. Let’s throw away the phrase about General Intelligence—it’s not needed there.
2. Obviously, if we’re measuring one’s reality-steering performance we must know the target region (and perhaps other parameters, like planned time expenditure) in advance.
3. The metric should measure a rationalist’s performance at his/her current level, without counting the time and resources he/she spent to level up. Measuring ‘the speed or efficiency of leveling up in rationality’ is a different measurement.
4. The definitions at the beginning of the original post will do.
5. On one hand, the reality-mapping and reality-steering abilities should work for any activity, no matter whether the performer is hardware-accelerated for it or not. On the other hand, we should somehow take this into account—after all, excelling at things one is not hardware-accelerated for is a good indicator. (If only we could reliably determine who is hardware-accelerated for what.)
(Edit: cool, it does numeric lists automatically!)
Public metrics aren’t enough—society must also care about them. Without that, there’s no status attached and no embarrassment risked.
To get this going, you’d also need a way to keep society’s standards on track; otherwise, even a small amount of noise could set off a positive feedback loop that distorts its conception of rationality.
Everyone has at least a little bit of rationality. Why not simply apply yourself to increasing it, and finding ways to make yourself implement its conclusions?
Just sit under the bodhi tree and decide not to move away until you’re better at implementing.
An idea on how to make the execution part trivial: a rational planner should treat his own execution module as part of the external environment, not as part of ‘himself’. This approach produces plans that account for the execution module’s inefficiencies and work around them.
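To make that concrete, here is a toy sketch of my own (not from the comment above; the plans, payoffs, and follow-through probabilities are invented): the planner scores each candidate plan by payoff times the estimated probability that the execution module will actually carry it out, and maximizes that expected value instead of the payoff on paper.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    payoff: float          # value if the plan actually gets executed
    follow_through: float  # estimated probability the execution module complies

def best_plan(plans):
    """Pick the highest expected-payoff plan, treating the execution
    module as an unreliable part of the environment."""
    return max(plans, key=lambda p: p.payoff * p.follow_through)

plans = [
    Plan("month-long slog, tackled head-on", payoff=100.0, follow_through=0.1),
    Plan("same work, split into small daily chunks", payoff=95.0, follow_through=0.6),
    Plan("outsource most of it", payoff=60.0, follow_through=0.9),
]
print(best_plan(plans).name)  # the 'perfect' plan loses once follow-through is priced in
```

With these made-up numbers, the nominally best plan loses to the plan the execution module is actually likely to follow.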
I hope you realize this is potentially recursive, if this ‘execution module’ happens to be instrumental to rationality. Not that that’s necessarily a bad thing.
No, I don’t (yet) -- could you please elaborate on this?
Funny how this got rerun on the same day as EY posted about progress on Löb’s problem.
What if, first, you just calculate the most beneficial actions you can take (like Scott did), and after that assess each of those using something like Piers Steel’s procrastination equation? Then you know which one you’re most likely to achieve, and can choose more wisely.
Also, doing the easiest one first can sometimes be a good strategy for achieving all of them. Steel calls this a ‘success spiral’: you succeed time after time, and each success increases your motivation.
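For illustration, a minimal sketch of what that scoring could look like, assuming the commonly cited form of Steel’s equation, Motivation = (Expectancy × Value) / (Impulsiveness × Delay); the tasks and numbers below are made up:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    expectancy: float     # 0..1: how likely you think you are to succeed
    value: float          # how much the outcome matters to you
    impulsiveness: float  # your susceptibility to distraction
    delay: float          # time until the reward arrives (e.g. weeks)

def motivation(t: Task) -> float:
    """Steel-style score; higher means you're more likely to actually do it."""
    return (t.expectancy * t.value) / (t.impulsiveness * t.delay)

tasks = [
    Task("month-long project", expectancy=0.9, value=8.0, impulsiveness=1.0, delay=4.0),
    Task("small, easy task", expectancy=0.95, value=2.0, impulsiveness=1.0, delay=0.5),
]
for t in sorted(tasks, key=motivation, reverse=True):
    print(f"{t.name}: {motivation(t):.2f}")
```

Under these invented numbers the small task scores higher, which is the ‘do the easiest first’ success-spiral point above.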
Well, ideally one considers the whole of oneself when doing the calculations, but that does make the calculations tricky.
And that still doesn’t answer exactly how to take it into account. I.e., “Okay, I need to take into account the properties of my execution module and find ways to actually get it to do stuff. How?”
However, treating the execution module as external and fixed may demotivate attempts to improve it.
(Related: Chaotic Inversion)