Perhaps, in a parallel to the kings mentioned earlier, this could be interpreted as Orion having seen the fortunes of continents rise and fall. Orion has seen the prominence of Africa as the source of humanity, and its subjugation by Europe; it has seen the isolation and the global power of the Americas; it has seen the West's mercantile empires and its dark ages.
While such an epistemic technology would be incredibly valuable if successful, I think that the possibility of failure should give us pause. In the worst case, this effectively has the same properties as arbitrary censorship: one side “wins” and afterwards gets to decide what is legitimate and what counts toward changing the consensus, perhaps by manipulating the definitions of success or testability. Unlike in sports, where the thing being evaluated and the thing doing the evaluating are generally separate (the success or failure of athletes doesn’t impede the abilities of statisticians, and vice versa), there is a risk that the system becomes both its own subject and its own controller.
I do think “[a]bility to contribute to the thought process seems under-valued” is very relevant here. A prediction-tracking system captures one...layer[^1], I suppose, of intellectuals: the layer that is concerned with making frequent, specific, testable predictions about imminent events. Those whose theories are vaguer, have more complex outcomes, or yield less frequent predictions[^2][^3], while perhaps instrumental to the frequent, specific, testable predictors, would not be recognized, unless there were some sort of complex system compelling the assignment of credit to the vague contributors (and presumably to their vague contributors, et cetera, across the entire intellectual lineage, or at least to some maximum feasible depth).
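To make the credit-assignment idea concrete, here is a minimal, purely hypothetical sketch: the names, the decay factor, and the depth cap are all my own inventions for illustration, not features of any existing system. It propagates a successful predictor's credit backward through their acknowledged influences, down to some maximum feasible depth.

```python
# Hypothetical sketch: share a prediction's credit backward through an
# intellectual lineage, with a decayed share at each step and a depth cap.

def assign_credit(scores, influences, predictor, credit, depth=3, decay=0.5):
    """Add `credit` to `predictor`, then pass a decayed share to each
    acknowledged influence, recursively, down to `depth` levels."""
    scores[predictor] = scores.get(predictor, 0.0) + credit
    if depth == 0:
        return
    for upstream in influences.get(predictor, []):
        assign_credit(scores, influences, upstream,
                      credit * decay, depth - 1, decay)

# Carol's successful prediction credits her, then her influence George,
# then George's influence Helena, with diminishing shares.
influences = {"Carol": ["George"], "George": ["Helena"]}
scores = {}
assign_credit(scores, influences, "Carol", credit=1.0)
print(scores)  # {'Carol': 1.0, 'George': 0.5, 'Helena': 0.25}
```

Even this toy version hints at the hard parts: who declares the influence edges, and how the decay and depth are chosen, are exactly the sort of contestable judgments the vague contributors would dispute.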
This would be useful to help the lay public understand outcomes of events, but not necessarily useful in helping them learn about the actual models behind them; it leaves them with models like “trust Alice, Bob, and Carol, but not Dan, Eve, or Frank” rather than “Alice, Bob, and Carol all subscribe to George’s microeconomic theory which says that wages are determined by the House of Mars, and Dan, Eve, and Frank’s failure to predict changes in household income using Helena’s theory that wage increases are caused by three-ghost visitations to CEOs’ dreams substantially discredits it”. Intellectuals could declare that their successes or failures, or those of their peers, were due to adherence to a specific theory, or the lay public could try to infer as much, but this is another layer of intellectual analysis that is nontrivial unless everyone wears jerseys declaring what theoretical school of thought they follow (useful if there are a few major schools of thought in a field and the main conflict is between them, in which case we really ought to be ranking those schools instead of individuals; not terribly useful otherwise).
[^1]: I do not mean to imply here that such intellectuals are above or below other sorts. I use layer here in the same way that it is used in neural networks, denoting that its elements are posterior to other layers and closer to a human-readable/human-valued result.
[^2]: For example, someone who predicts the weather will have much more opportunity to be trusted than someone who predicts elections. Perhaps this is how it should be; while elections are less frequent, they likely have a wider spread, and if our overall confidence in election-predicting intellectuals is lower than our confidence in weather-predicting intellectuals, that might just be the right response to a field with relatively fewer data points: less confidence in any specific prediction or source of knowledge.
[^3]: On the other hand, these intellectuals may be less applied not because of the nature of their field, but because of the nature of their specialization; a grand and abstract genius could produce incredibly detailed models of the world, and the several people who run the numbers on those models would be the ones rewarded with a track record of successful predictions.
Why _haven’t_ they already switched? Presumably, these companies are full of people with at least vaguely aligned incentives to maximize efficacy, yet they’re leaving a “clearly superior” product on the table. The answer may be that this is some sort of systemic, widespread failure of decision-making, or a decision-making success under different criteria (a lower tolerance for the risk of change, perhaps, than these same systems have now) rather than a reflection of some inadequacy of RT-LAMP. But “the folks with the expertise and incentive to get it right are all getting it wrong and leaving money on the table” sounds like a more complex explanation than “there are shortcomings to RT-LAMP that I haven’t considered”, and I’d like to see some further evidence in favor of it.
You may be familiar with the term “Technological Singularity” as used to describe what happens in the wake of the development of superintelligent AGI; the term is not merely meant to sound impressive but refers to the belief that what follows such a development would be incredibly and unpredictably transformative, subject to new phenomena and patterns of which we may not yet be able to conceive.
I don’t believe it would be smart to invest with such a scenario in mind; we have little reason to believe that how much pre-Singularity wealth one has would matter post-Singularity in such a way that it would be wise to include such a term in one’s expected value and decision-making. It would be not entirely unlike buying stock based on which companies would most benefit from the announcement of an incoming Earth-shattering asteroid. The development of superintelligent AGI is an existential threat to just about every institution, including the stock market and our current conception of the economy in general. A rational, entirely selfish actor or aggregate thereof does not make plans for what happens after its death.
However, I must admit that I have no data on the subject, and while I would not guess that there is much relevant data available, I imagine there is some—did the U.S. stock market account for what companies might be most successful in the case of a Soviet conquest of the U.S.? Is the potential profitability of a company in a world transformed by a global Communist revolution accounted for in its current stock price? I do not know, but I would be very surprised to learn that the stock market priced scenarios in which it and the institutions on which it depends are unlikely to continue to exist in recognizable forms.
The example of the pile of sand sounds a lot like the Chinese Room thought experiment, because at some point, the function for translating between states of the “computer” and the mental states which it represents must begin to resemble a giant look-up table (subjectively, at least, but also in some information-theoretic sense). Perhaps it would be accurate to say that a pile of sand with an associated translation function is somewhere on a continuum between an unambiguously conscious mind (if anything can be said to be conscious), such as a natural human mind, and a Chinese Room. In such a case, the issue raised by this post is an extension of the Chinese Room problem, and may not require a separate answer, but it does do the notable service of showing that the Chinese Room lies on a continuum rather than on one side of a binary.
I’m not sure if this is a brilliantly ironic example of the lack of absolute applicability of these guidelines or just a happy accident.
Not entirely true; low sperm counts are associated with low male fertility in part because sperm carry enzymes which clear the way for other sperm—so a single sperm isn’t going to get very far.
In addition to enjoying the content, I liked the illustrations, which I did not find necessary for understanding but which did break up the text nicely. I encourage you to continue using them.
1) Historical counter-examples are valid. Counter-examples of the form “if you had followed this premise at that time, with the information available in that circumstance, you would have come to a conclusion we now recognize as incorrect” are valid and, in my opinion, quite good. Alternatively, this other person has a very stupid argument; just ask about other things which tend to be correlated with what we consider “advanced”, such as low infant mortality rates (does that mean human value lies entirely in surviving to age five?) or taller buildings (is the United Arab Emirates objectively the best country?).
2) “Does life have meaning?” is a confused question. Define what “meaning” means in whatever context it is being used before engaging in any further debate; otherwise you will be arguing over definitions indefinitely and never know it. Your argument does sound suspiciously similar to Pascal’s Wager, which I suspect other commenters are more qualified to dissect than I am.
I agree that growth shouldn’t be a major marker of success (at least at this point), but even if it’s not a metric on which we place high terminal value, it can still be a very instrumentally valuable one—for example, if our insight rate per person is very expensive to increase, growth may be our most effective way to increase total insight.
So while growth should be sacrificed when it harms other metrics—for example, if growth has a strong negative impact on insight rate per person—I would say it’s still reasonable to assume it’s valuable until proven otherwise.
Are we in any real danger of growing too quickly? If so, this is relevant advice; if not—if, for example, a doubling of our growth rate would bring no significant additional danger—I think this advice has negative value by making an improbable danger more salient.
Not necessarily; the three sorts of excellent organizations you mention are organizations whose excellence is recognized by the rest of the world in some way, granting its members prestige, opportunities, and money. I suspect this is what attracts people to a large extent, not a general ability to detect organizational goodness. This sort of recognition may be very difficult to get without being very good at whatever it is the organization does, but that does not imply that all good organizations are attractive in this way.
Having recently read _The Craft & The Community: A Post-Mortem & Resurrection_, I think that its advice on recruiting makes a lot of sense: meet people in person, evaluate who you think would be a good fit—especially those who cover skill or viewpoint gaps that we have—and bring them to in-person events.
I would be very interested in reading, say, a blog post (or series thereof) exploring why this happens (and, if remotely possible, directing motivated individuals towards ways to support faster adoption of successful treatments).
First, I think this is an excellent idea, and I wish you the best of luck.
Second, what mechanisms do you have in place for getting feedback about the content you produce? I’m aware that for a broadcast medium using a platform over which you do not have full control, your feasible options may be limited, but I strongly encourage you to consider (possibly when this project has reached a stable state, because this will take a non-trivial amount of resources) some amount of focus group A/B testing for comprehension and internalization. From the beginning, you should probably have one or two individuals close to your target audience (i.e. Italian-speaking, without prior Rationality experience) off of whom to bounce ideas. Yours is an ambitious plan and I would hate for it to lose contact with reality.
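To make “A/B testing for comprehension” slightly more concrete, here is a minimal sketch of one way such a comparison could work, with entirely invented numbers: show two versions of a post to two small groups, measure comprehension with a short quiz, and compare pass rates with a standard two-proportion z-test.

```python
# Minimal sketch of an A/B comprehension comparison (all data invented).
from math import sqrt
from statistics import NormalDist

def two_proportion_z(pass_a, n_a, pass_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    in quiz pass rates between versions A and B."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# e.g. 34 of 50 readers pass the quiz for version A, 22 of 50 for B
print(two_proportion_z(34, 50, 22, 50))  # z ≈ 2.42, p ≈ 0.016
```

With focus-group-sized samples such a test will only detect large differences, which is arguably fine at this stage: you mostly want to catch versions that badly fail to communicate.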
Third, if you are doing this at least in part as a response to irrationality in voter choices, I suggest (based on my awareness of the situation in the US) focusing on:
What statistics are comparable to each other? For example: Politician P says that Group G is responsible for X% of crimes. How does this compare to the national average? How does this compare to the national average when weighted by socioeconomic status to reflect the socioeconomic distribution of Group G? What factors could explain this, and which numbers are the right ones to use as a baseline? (A toy calculation illustrating the weighting question follows after this list.)
Conservation of expected evidence: if a given study/exploration/piece of possible evidence has two outcomes, they can’t both make you more confident in a given position. The examples I’ve seen used in this community are in [this article](http://lesswrong.com/lw/ii/conservation_of_expected_evidence/); a toy numerical check appears in the second sketch below.
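On the baseline question in the first point above, a toy illustration with entirely invented numbers shows how much the choice of comparison point can matter: the raw national average and a national average reweighted to match Group G's socioeconomic distribution can differ substantially.

```python
# Toy weighted-baseline calculation (all rates and shares invented).

# outcome rate per socioeconomic bracket, nationally
national_rate_by_bracket = {"low": 0.06, "mid": 0.03, "high": 0.01}
# share of the national population in each bracket
national_shares = {"low": 0.2, "mid": 0.5, "high": 0.3}
# share of Group G in each bracket
group_shares = {"low": 0.5, "mid": 0.4, "high": 0.1}

raw_baseline = sum(national_rate_by_bracket[b] * national_shares[b]
                   for b in national_shares)
weighted_baseline = sum(national_rate_by_bracket[b] * group_shares[b]
                        for b in group_shares)

print(raw_baseline)       # 0.03  (naive comparison point)
print(weighted_baseline)  # 0.043 (comparison point after reweighting)
```

A group rate of, say, 4% looks alarming against the first baseline and unremarkable against the second; neither number is dishonest, which is exactly why the choice of baseline deserves scrutiny.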
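And on conservation of expected evidence, the underlying identity is that your expected posterior equals your prior, P(H) = P(H|E)P(E) + P(H|¬E)P(¬E), so the two possible outcomes of a study cannot both raise your confidence in H. A quick numerical check, with made-up probabilities:

```python
# Conservation of expected evidence, checked with invented numbers.

p_h = 0.30              # prior probability of hypothesis H
p_e_given_h = 0.80      # P(positive result | H)
p_e_given_not_h = 0.40  # P(positive result | not H)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior under each outcome, via Bayes' theorem
post_if_e = p_e_given_h * p_h / p_e
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# The outcome-weighted average of the posteriors equals the prior:
# if one outcome raises P(H), the other must lower it.
expected_posterior = post_if_e * p_e + post_if_not_e * (1 - p_e)
print(post_if_e, post_if_not_e, expected_posterior)  # ≈0.462, 0.125, 0.30
```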
I think this is a very valuable concept to keep fresh in the public consciousness.
However, I think it is in need of better editing; right now its formatting and organization make it, for me at least, less engaging. This is less of an issue because it’s short; I imagine that a longer piece in the same style would suffer more reader attrition.
It might help to read over your piece and then try to distill it down to the essentials, repeatedly; it reads right now as if it is only a few steps removed from straight stream-of-consciousness. Or it might not; at this point I’m speculating wildly about the creative processes of someone I’ve never met, so take my implementation advice with a grain of salt.
Either way, I look forward to reading more of your insights.
Perhaps part of the desire to avoid conformity is a desire to avoid comparability, for fear of where one might end up in a comparison.
If I am one of one hundred people doing the same thing in the same way—working on a particular part of an important problem, or embracing a very specific style—I run the psychological risk of discovering that I am strictly worse than a large number of other people.
If, instead, I am one of one hundred people doing different things in different ways, things about me—the skills I bring to bear on the problem—cannot easily be compared and found wanting. I am protected from the threat to my self-esteem by the confusion of the variety in approaches, which I can easily blame even if my efforts produce results which are comparable and inferior to others’.
You have the right to have beliefs which you know or could reasonably conclude are probably false, though it is advisable you not exercise it.
You have the right to have beliefs which you have reason to believe are probably true, even if an overwhelming majority of well-informed experts disagrees, though it is advisable you exercise it only when you have a very good reason to believe you are right (i.e. when you have carefully weighed the experts’ majority disagreement as evidence, with a strength that depends on the experts’ capability and on the system of incentives in which they operate, and still have sufficiently strong evidence in the other direction).
You have the right to make a series of bald assertions on variations of these rights, interwoven so as not to imply any distinction between the advisable and the inadvisable, and in such large numbers that disagreement over any specific point can be dismissed as only slightly affecting the conclusion, and that refuting every point would be impractical given the limitations of the forum in which they are posted.
You have the right to claim that anything is a duty, but everyone else has the right to ignore it.
This seems quite similar to the “Gish gallop” rhetorical technique.