It seems generally quite bad for somebody like John to have to justify his research in order to have an income. A mind like this is better spent purely optimizing for exactly what he thinks is best, imo.
When he knows that he must justify himself to others (who may or may not understand his reasoning), his brain’s background-search is biased in favour of what-can-be-explained. For early thinkers, this bias tends to be good, because it prevents them from bullshitting themselves. But there comes a point where you’ve mostly learned not to bullshit yourself, and you’re better off purely aiming your cognition based on what you yourself think you understand.
Paying people for what they do works great if most of their potential impact comes from activities you can verify. But if their most effective activities are things they have a hard time explaining to others (yet have intrinsic motivation to do), you could miss out on a lot of impact by requiring them instead to work on what’s verifiable.
Vingean deference-limits + anti-inductive innovation-frontier

People who are much more competent than you will behave in ways you don’t recognise as more competent. If you were able to tell what the right things to do are, you would just do those things and be at their level. Your “deference limit” is the level of competence above your own at which you stop being able to reliably judge the difference.
Innovation on the frontier is anti-inductive: you can’t reliably extrapolate from what worked before to what will work next. If you select people cautiously, you miss out on hiring people significantly more competent than you.[1]
Costs of compromise
Consider how the cost of compromising between optimisation criteria interacts with which part of the impact distribution you’re aiming for. If you’re searching for a project in the top p fraction for impact and the top p fraction for explainability-to-funders, you can expect only a p^2 fraction of projects to fit both criteria, assuming the two are independent.
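To make the arithmetic concrete (the p = 0.1 value and the k-criteria generalization below are my illustration, not from the original):

```latex
% Conjunctive search under independence: a project must clear
% the top-p bar on each criterion separately.
P(\text{top-}p \text{ impact} \wedge \text{top-}p \text{ explainability}) = p \cdot p = p^2
% e.g. p = 0.1 (top 10% on each criterion): p^2 = 0.01, so only
% 1% of projects clear both bars. With k independent criteria,
% the surviving fraction shrinks to p^k.
```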
But I think it’s an open question how & when the two distributions correlate. One reason to think they could sometimes be anticorrelated is that the projects with the highest explainability-to-funders are also the ones most likely to receive adequate attention from profit-incentives alone.[2]
Consider funding people you are strictly confused by wrt what they prioritize
If someone believes something wild, and your response is strict confusion, that’s high value of information. You can only safely say they’re low-epistemic-value if you have evidence for some alternative story that explains why they believe what they believe.
Alternatively, find something that is surprisingly popular—because if you don’t understand why someone believes something, you cannot exclude that they believe it for good reasons.[3]
The crucial freedom to say “oops!” frequently and immediately
Still, I really hope funders would consider funding the person instead of the project, since I think Johannes’ potential will be severely stifled unless he has the opportunity to go “oops! I guess I ought to be doing something else instead” as soon as he discovers some intractable bottleneck wrt his current project. (...) it would be a real shame if funding gave him an incentive to not notice reasons to pivot.[4]
Related comments:

- Comment explaining why I think it would be good if exceptional researchers had basic income (evaluate candidates by their meta-level process rather than their object-level beliefs)
- Comment explaining what costs of compromise in conjunctive search implies for when you’re “sampling for outliers”
- Comment explaining my approach to finding usefwl information in general
- Comment explaining why I think funding Johannes is an exceptionally good idea
Re the Vingean deference-limit above: it quite aptly analogizes to the Nyquist frequency f_N, which is the highest frequency component a signal can contain before you lose the ability to uniquely infer its components from a given sample rate f_s.
Also, I’m renaming it “Vingean disambiguation-limit”.[1]
P.S. f_N = f_s/2, which means that you can only disambiguate signals whose max frequency components are below half your sample rate. Above that point, you start getting ambiguities (aliases).
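A concrete instance of aliasing, with numbers I picked for illustration (f_s = 10 Hz, so f_N = 5 Hz): a 7 Hz tone sampled at 10 Hz produces exactly the samples of a (sign-flipped) 3 Hz tone, so the two cannot be told apart.

```latex
% Samples are taken at t = n / f_s = n / 10 for integer n.
% Using the 2*pi-periodicity of sine and integer n:
\sin\!\bigl(2\pi \cdot 7 \cdot \tfrac{n}{10}\bigr)
  = \sin\!\bigl(2\pi \cdot 0.7\,n - 2\pi n\bigr)
  = \sin\!\bigl(-2\pi \cdot 0.3\,n\bigr)
  = -\sin\!\bigl(2\pi \cdot 3 \cdot \tfrac{n}{10}\bigr)
% The 7 Hz component (above f_N = 5 Hz) aliases down to 3 Hz.
```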
The “disambiguation limit” class has two members now (the Vingean and the Nyquist cases). Its inverse is the “disambiguation threshold”: the power you require of your sampler/measuring-device in order to disambiguate between things-measured above a given measure.
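In the Nyquist case, the limit and the threshold are arguably just the same inequality read in the two directions; a minimal sketch of the correspondence (my phrasing, not the original’s):

```latex
% Disambiguation limit: fix the sampler power f_s; the highest
% measure you can still disambiguate is
f_N = \frac{f_s}{2}
% Disambiguation threshold: fix the measure f_max you need to
% disambiguate up to; the sampler power you require is
f_s > 2 f_{\max}
```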
...stating things as generally as feasible helps wrt finding metaphors. Hence the word-salad above. ^^'