While I generally agree with you, I don’t think growth mindset actually works, or at least I think it’s wildly misleading about what it can do.
Re the point that “what is the probability of success?” is the wrong question to ask, I think the key question here is how underdetermined the probability of success is, and how much it’s conditional on your own actions.
That’s a great point, but aren’t you saying the same thing in disguise?
To me, you’re both saying “the map is not the territory.”
If I map a question about nutrition using a frame known to be useful for thermodynamics, I’ll make mistakes even if I’m rational (because I’d fail to ask the right questions). But “asking the right question” is something I count under “my own actions, potentially”, so you could totally reword that as “success is conditional on my own decision to stop reaching for the thermodynamic frame when thinking about nutrition”.
Also, I’d say that what you could call “growth mindset” (like anything for mental health, especially placebos) can sometimes help you. And by “you” I mean “me”, of course. 😉
In general, I think that a wrong frame can only slow you down, not stop you. The catch is that the slowdown can be arbitrarily bad, which is the biggest problem here.
You can use a logic/math frame to answer a whole lot of questions, but the general case gets exponentially slower the more variables you add (Boolean satisfiability is already NP-complete), and that’s in a bounded logic/math frame with relatively simple logics. Any more expressive logic and we run into ever worse intractability, up to outright undecidability.
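To make the blowup concrete, here is a minimal sketch (my own illustrative encoding, not anything from this thread): the naive decision procedure for propositional satisfiability enumerates all 2^n assignments of the n variables, so every variable you add doubles the work.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide a CNF formula by enumerating all 2**n_vars assignments.
    A clause is a list of ints: +i means variable i, -i means its negation."""
    for assignment in product([False, True], repeat=n_vars):  # 2**n_vars candidates
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # found a satisfying assignment
    return None  # unsatisfiable

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], n_vars=3))
```

Three variables means 8 candidates; thirty variables means about a billion. That’s the “arbitrarily bad slowdown” in miniature.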
I’d say a more appropriate way to put it is that the map is equivalent to the territory, but computing the map is completely hopeless in the general case, so in practice our maps will fail to match the logical/mathematical territory.
I agree that for many problems in practice the probability of success depends somewhat on you, and that the probability of success is underdetermined, so it’s not very useful to ask the question “what is the probability of success?”.
On growth mindset, I mostly agree with it if we are considering any possible change, but in practice it usually refers to the belief that one can reliably improve oneself through sheer willpower, and I’m way more skeptical that this actually works, for a combination of reasons. It would be nice if that were true, and a little bit of it might survive, but unfortunately I don’t think it actually works for most problems. Of course, most is not all, and the fact that the world is probably heavy-tailed goes some way toward restoring control, but I still don’t think growth mindset as defined is very right.
I’d say the largest issue I have with rationality is that it generally ignores bounds on agents: it gives little thought to what happens when agents are bounded in their ability to think or act. It’s perhaps the biggest reason why I suspect a lot of paradoxes of rationality come about.
> in practice our maps will fail to match the logical/mathematical territory.

> It’s perhaps the biggest reason why I suspect a lot of paradoxes of rationality come about.

That’s an interesting hypothesis. Let’s see if it works for [this problem](https://en.m.wikipedia.org/wiki/Bertrand_paradox_(probability)). Would you say Jaynes is the only one who manages to match the logical/mathematical territory? Or would you say he completely misses the point because his frame puts too much weight on “there must be one unique answer that is better than any other answer”? How would you reason with two Bayesians who take opposite positions on this mathematical question?
> a wrong frame can only slow you down, not stop you. The catch is that the slowdown can be arbitrarily bad

This vocabulary feels misleading, like saying: “We can break RSA with a fast algorithm. The catch is that it’s slow for some instances.”
This proves too much, like the [no free lunch theorem](https://www.researchgate.net/publication/228671734_Toward_a_justification_of_meta-learning_Is_the_no_free_lunch_theorem_a_show-stopper). The catch is exactly the same: we don’t care about the general case. All we care about is the very small number of cases that can arise in practice.
> the general case gets exponentially slower the more variables you add

(as a concrete application to permutation testing: if you randomize condition labels, fine; if you randomize pixels, not fine… because the latter is the general case while the former is the special case)
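For instance, a minimal sketch of the “fine” case, with made-up numbers: under the null hypothesis that the condition label doesn’t matter, the labels are exchangeable, so shuffling them (and only them) gives a valid reference distribution.

```python
import random
from statistics import mean

def permutation_test(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sample permutation test on the difference of means.
    Shuffles *condition labels* only (which value belongs to which group),
    never the internal structure of the measurements themselves."""
    rng = random.Random(seed)
    observed = mean(group_a) - mean(group_b)
    pooled, n_a = group_a + group_b, len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-randomize the condition assignment
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= abs(observed):
            hits += 1
    return hits / n_perm  # p-value under the label-exchangeability null

# Hypothetical measurements under two conditions:
control = [4.1, 3.8, 4.5, 4.0, 3.9]
treatment = [4.9, 5.2, 4.7, 5.0, 4.8]
print(permutation_test(control, treatment))  # small p-value: the label matters
```

Shuffling pixels inside the measurements instead would test a far stronger null than the one anybody cares about, which is the general-case trap in miniature.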
> though it can’t be too strong, or else we’d be able to do anything we couldn’t do today.

I don’t get this sentence.

> I don’t think growth mindset [as the belief that one can reliably improve oneself through sheer willpower] is very right.

That sounds reasonable, conditional on interpreting “sheer willpower” as magical thinking rather than as [cultivating agency](https://www.lesswrong.com/posts/vL8A62CNK6hLMRp74/agency-begets-agency).
> That’s an interesting hypothesis. Let’s see if it works for [this problem](https://en.m.wikipedia.org/wiki/Bertrand_paradox_(probability)). Would you say Jaynes is the only one who manages to match the logical/mathematical territory? Or would you say he completely misses the point because his frame puts too much weight on “there must be one unique answer that is better than any other answer”? How would you reason with two Bayesians who take opposite positions on this mathematical question?

Basically, you sort of mentioned it yourself: There is no unique answer to the question, so the question as given underdetermines the answer. There is more than one solution, and that’s fine. This means the mapping from question to answer is not one-to-one, so some choices must be made here.
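To make that concrete, here is a minimal simulation sketch of Bertrand’s three classic chord constructions (the standard textbook ones, not anything specific from this thread). Each is a perfectly reasonable reading of “a random chord”, and they converge to three different answers.

```python
import math
import random

R = 1.0
TRIANGLE_SIDE = math.sqrt(3) * R  # side of the inscribed equilateral triangle

def chord_endpoints():
    """Method 1: pick two uniform points on the circle."""
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * R * abs(math.sin((a - b) / 2))

def chord_radial():
    """Method 2: chord perpendicular to a random radius, midpoint uniform on it."""
    d = random.uniform(0, R)  # distance of the chord's midpoint from the center
    return 2 * math.sqrt(R * R - d * d)

def chord_midpoint():
    """Method 3: chord midpoint uniform over the disk."""
    d = R * math.sqrt(random.random())  # radial law of a uniform point in a disk
    return 2 * math.sqrt(R * R - d * d)

N = 100_000
for name, gen in [("endpoints", chord_endpoints),
                  ("radial midpoint", chord_radial),
                  ("disk midpoint", chord_midpoint)]:
    p = sum(gen() > TRIANGLE_SIDE for _ in range(N)) / N
    print(f"{name}: P(chord longer than triangle side) ≈ {p:.3f}")
# Prints roughly 1/3, 1/2, and 1/4: three answers to one underdetermined question.
```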
> This vocabulary feels misleading, like saying: “We can break RSA with a fast algorithm. The catch is that it’s slow for some instances.”

This is indeed the problem. I never claimed it would take a reasonable amount of time, and that’s arguably the biggest issue here: Bounded rationality is important, much more important than we realize, because there is limited time and memory/resources to dedicate to problems.
> (as a concrete application to permutation testing: if you randomize condition labels, fine; if you randomize pixels, not fine… because the latter is the general case while the former is the special case)

The point here is that I was trying to answer the question of why there’s no universal frame, or at least why logic/mathematics isn’t a useful universal frame, and the complexity results are important in this context.
> I don’t get this sentence.

I didn’t phrase that well, and I want to either edit or remove this sentence.
Okay, my biggest disagreement with stuff like growth mindset is that I believe a lot of your outcomes are due to luck/chance events swinging in your favor. Heavy tails restore some control, since a single action can have a large impact, so even a little control multiplies; but a key claim I’m making is that a lot of your outcomes are due to luck/chance, that the stuff that isn’t luck probably isn’t stuff you control yet, and that we tell a post-hoc merit/growth story even when in reality luck did a lot of the work.
> The point here is that I was trying to answer the question of why there’s no universal frame, or at least why logic/mathematics isn’t a useful universal frame, and the complexity results are important in this context.

Great point!
> There is no unique answer to the question, so the question as given underdetermines the answer.

That’s how I feel about most interesting questions.
> Bounded rationality is important, much more important than we realize

Do you feel IP=PSPACE is relevant here?
> a key claim I’m making is that a lot of your outcomes are due to luck/chance, that the stuff that isn’t luck probably isn’t stuff you control yet, and that we tell a post-hoc merit/growth story

Sure. And there’s some luck, and things I don’t control, in who I had children with. Should I feel less grateful because someone else could have done the same?
> Sure. And there’s some luck, and things I don’t control, in who I had children with. Should I feel less grateful because someone else could have done the same?

No. It has a lot of other implications, just not this one.
> Do you feel IP=PSPACE is relevant here?

Yes, but in general computational complexity/bounded computation matter a lot more than people think.
> That’s how I feel about most interesting questions.

I definitely sympathize with this view.