[Note: I probably like explicit reasoning a lot more than most people, so keep that in mind.]
“our naive intuitions about what a large group of humans can do become less and less applicable, so we need to use explicit reasoning and/or develop new intuitions about what HBO-based and especially LBO-based IDA is ultimately capable of.”
One great thing about explicit reasoning is that it seems easier to reason about and develop intuitions for than human decisions made in more intuitive ways (to me). It’s different, but I assume far more predictable. At some point it could be worth outlining the most obvious forms of explicit reasoning that will occur (I’ve been thinking about this).
To me it seems like there are two scenarios:
1. Group explicit reasoning cannot perform as well as longer individual intuitive reasoning, even with incredibly large budgets.
2. Group explicit reasoning outperforms other reasoning.
I believe that which of these scenarios holds is one of the main cruxes for the competitiveness and general viability of IDA. If #1 is the case, then the whole scheme has some serious problems.
But in the case of #2, it seems like things are overall in a much better place. The system may cost a lot (until it can be distilled enough), but it could be easier to reason about, easier to predict, and able to produce superior results on essentially all axes.
[Note: I probably like explicit reasoning a lot more than most people, so keep that in mind.]
Great! We need more people like you to help drive this forward. For example, I think we desperately need explicit, worked-out examples of meta-execution (see my request to Paul here), but Paul seems too busy to do that (the reply he gave wasn’t really at the level of detail and completeness I was hoping for). I find it hard to motivate myself to do it because I expect I’ll get stuck at some point, and it will be hard to tell whether that’s because I didn’t try hard enough, or don’t have the requisite skills, or made a wrong decision several branches up, or because it’s just impossible.
One great thing about explicit reasoning is that it seems easier to reason about and develop intuitions for than human decisions made in more intuitive ways (to me).
That’s an interesting perspective. I hope you’re right. :)
After some more reflection on this, I’m now more in the middle. I’ve come to think more and more that these tasks will be absolutely tiny; and if so, they will need insane amounts of structure. For instance, the system may need to effectively create incredibly advanced theories of expected value, and many of these may be near incomprehensible to us humans, perhaps even with a very large amount of explanation.
I intend to spend some time writing more of how I would expect a system like this to work in practice.