Thiel enjoys the spotlight, he’s his own boss, and he could spend all day rolling around in giant piles of money if he wanted to; he’s said plenty of things publicly that are way more NRx-y than the monopoly thing, and he’s obviously fine.
I give more to charity and use spaced repetition systems heavily.
If the demons understand harm and are very clever in figuring out what will lead to it, what happens when we ask them to minimize harm, or maximize utility, or do the opposite of what they would want to do otherwise, or {rigidly specified version of something like this}?
Can we force demons to tell us (for instance) how they’d rank various policy packages in government, what personal choices they’d prefer I make, &c., so we can back-engineer what not to do? They’re not infinitely clever, but how clever are they?
The issue isn’t whether looks are objective (clearly they aren’t), but whether judgments of looks are more correlated among the userbase than judgments of personality.
(Actually, the degree to which personality judgments are correlated is probably the more interesting question here (granting that interestingness isn’t particularly objective either). Robin Hanson has pointed to some studies suggesting that “compatibility” isn’t really a thing and that some people are just easier to get along with than others—the studies in question, IIRC, didn’t take selection effects into account, but it remains an interesting hypothesis.)
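To make the empirical question concrete, here’s a minimal sketch (in Python, with simulated ratings rather than any real dating-site data; the shared-component weights are pure assumptions) of what “more correlated among the userbase” would cash out as: the average pairwise correlation between raters’ score vectors for each trait.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_pairwise_correlation(ratings):
    """ratings: (n_raters, n_targets) array; mean off-diagonal Pearson correlation."""
    corr = np.corrcoef(ratings)
    return corr[~np.eye(corr.shape[0], dtype=bool)].mean()

n_raters, n_targets = 50, 200

# Hypothetical structure: looks ratings share a large common component across raters,
# personality ratings mostly don't. These weights are assumptions, not findings.
shared_looks = rng.normal(size=n_targets)
shared_personality = rng.normal(size=n_targets)
looks = 0.8 * shared_looks + 0.2 * rng.normal(size=(n_raters, n_targets))
personality = 0.3 * shared_personality + 0.7 * rng.normal(size=(n_raters, n_targets))

print("looks agreement:      ", round(mean_pairwise_correlation(looks), 2))
print("personality agreement:", round(mean_pairwise_correlation(personality), 2))
```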
It was a garbled version of Angkorism, sorry.
If your point is that Openness is probably not a thing-in-the-world, I would be inclined to agree, actually.
Big Five Openness correlates with political liberalism, so cet par it would be weak Bayesian evidence for open-mindedness, even if it is not an example of it.
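As a toy illustration of just how weak that evidence would be, here’s a one-step Bayes update with entirely made-up numbers (the prior and likelihoods are placeholders, not estimates from any study):

```python
# All numbers below are placeholders chosen only to show the shape of the update.
prior_open_minded = 0.5
p_liberal_given_open = 0.6       # assumption: Openness nudges P(liberal) up a bit
p_liberal_given_not_open = 0.5   # assumption: baseline rate among everyone else

posterior = (p_liberal_given_open * prior_open_minded) / (
    p_liberal_given_open * prior_open_minded
    + p_liberal_given_not_open * (1 - prior_open_minded)
)
print(round(posterior, 3))  # 0.545: observing "liberal" nudges 0.5 up only slightly
```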
I am completely uninformed on the technical particulars here, so this is idle speculation. But it isn’t totally implausible that ideological factors were at play. By this I don’t mean that arguments were being deployed as soldiers—nothing political, as far as I’m aware, rides on the two theories—but that worldviews may have primed scientists (acting in entirely good faith) to think of, and see as more reasonable, certain hypotheses. Dialectical materialism, for instance, tends to emphasize (or, by default, think in terms of) qualitative transformations that arise from historically specific tensions between different forces, tensions that eventually get resolved in said qualitative transformations.

If I understand you correctly that the difference between the two theories was that the American one isolated a process (1) explicable by the properties of a single substance and (2) acting at all times in Earth’s history, while the Soviet one isolated a process (1) explicable in terms of the interaction of forces and (2) only active until the conditions for it (stores of primordial methane) were exhausted, then it’s easy to construct a just-so story about how a scientist thinking in the categories privileged by diamat might find the second more intuitive than the first. Likewise, if, as a stereotypical reductive mechanist, you tend to think in terms of individual objects rather than relationships, and eternal laws rather than historically specific ones, the former might be more intuitive than the latter.

Further, it seems at least facially plausible that if you had a scientific community working within Aristotelian or German idealist frameworks, you’d have different dominant theories still—even with researchers acting in good faith, with lots of data, and with material incentives to produce a theory that derived correct predictions. (Such frameworks bear some similarities to, but are more vague and general than, Kuhnian paradigms.)
Of course, I could be totally misunderstanding the nature of the two theories at play, and I don’t know anything about the geological communities of the two superpowers specifically, so the just-so stories here are probably complete bullshit. But your concerns are more general than the specific examples as well, so consider their purpose to be illustrative rather than explanatory.
And that willingness to invest such time might correlate with certain factors.
For present purposes, I suppose it covers any domain, including the defense of lying itself.
All this needs the disclaimer that some domains should be lie-free zones. I value the truth and despise those who would corrupt intellectual discourse with lies.
Can anyone point me to a defense of corrupting intellectual discourse with lies (one that doesn’t resolve into a two-tier model of elites or insiders for whom truth is required and masses/outsiders for whom it is not)? Obviously there is at least one really good reason why espousing such a viewpoint would be rare, but I assume that, by the law of large numbers, there’s probably an extant example somewhere.
At LessWrong there’ve been discussions of several different views all described as “radical honesty.” No one I know of, though, has advocated Radical Honesty as defined by psychotherapist Brad Blanton, which (among other things) demands that people share every negative thought they have about other people. (If you haven’t, I recommend reading A. J. Jacobs on Blanton’s movement.) While I’m glad no one here thinks Blanton’s version of radical honesty is a good idea, a strict no-lies policy can sometimes have effects that are just as disastrous.
To point out the obvious, speaking from personal experience, this is indeed a terrible idea.
A couple of months ago I told a lie to someone I cared about. This wasn’t a justified lie; it was a pretty lousy lie (both in its justifiability and in the skill with which I executed it), and I was immediately exposed by facial cues. I felt pretty awful, because a lot of my self-concept up to that point had been based around being a very honest person. From that point on, I decided to treat my “you shouldn’t tell her ___” intuitions as direct orders from my conscience to reveal exactly that thing, and to pay close attention to whether the meaning of what I’ve said deviates from the truth in a direction favorable to me; as a consequence, I now feel rising anxiety whenever I have some embarrassing thought, followed by the need to confess it. I also resolved to search my conscience for any bad deeds I might have forgotten, which actually led to compulsive, fantastical searching for terrible things I might have done and repressed, no matter how absurd (I’ve gotten mostly-successful help with this part). She’s long since forgiven me for the original lie and for what I lied about, but she continues to find this compulsive confessional behavior extremely annoying, and I doubt I could really function if I experienced it around people in general rather than around her specifically.
This, but in a more general sense for the first: Pascal thought there were a bunch of sophisticated philosophical reasons that you should be a Catholic; the Wager was just the one he’s famous for.
I suspect this was written and is being upvoted in very different senses.
See also Hanson’s less than enthusiastic review.
Amusingly enough, the example of TrollBot that came to mind was the God expounded on in many parts of the New Testament, who will punish you iff you do not unconditionally cooperate with others, including your oppressors.
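For concreteness, here’s a toy sketch of such a TrollBot-style agent in an “open source” prisoner’s dilemma where each bot can query the other’s decision rule. The bot names and the inspection trick are my own illustration, not the formal modal-agents setup:

```python
# Toy "open source" prisoner's dilemma: each bot can call the other's decision rule.

def cooperate_bot(opponent):
    return "C"  # unconditional cooperation, no matter who the opponent is

def defect_bot(opponent):
    return "D"  # the "oppressor": defects against everyone

def conditional_bot(opponent):
    # cooperates only with opponents who would cooperate with a pure cooperator
    return "C" if opponent(cooperate_bot) == "C" else "D"

def troll_bot(opponent):
    # "Punish you iff you do not unconditionally cooperate": sample the opponent's
    # behaviour against several partners, including the oppressor, and demand "C" every time.
    unconditional = all(opponent(partner) == "C"
                        for partner in (cooperate_bot, defect_bot, troll_bot))
    return "C" if unconditional else "D"

for bot in (cooperate_bot, defect_bot, conditional_bot):
    print(bot.__name__, "->", troll_bot(bot))
# cooperate_bot -> C   (unconditional cooperation is rewarded)
# defect_bot -> D      (punished)
# conditional_bot -> D (even sensible conditional cooperation is punished)
```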
To provide a concrete example, this seems to suggest that a person who favours the Republicans over the Democrats and expects the Republicans to do well in the midterms should vote for a Libertarian, thereby making the Republicans more dependent on the Tea Party. This is counterintuitive, to say the least.
Is it? Again, I haven’t done the math, but look at the behavior of minor parties in parliamentary systems. They typically demand a price for their support. If the Republicans will get your vote regardless, why should they care about you?
Taking arguments more seriously than you possibly should. I feel like I see people in rationalist communities say stuff like “this argument by A sort of makes sense, you just need to frame it in objective, consequentialist terms like blah blah blah blah blah” all the time, and then follow it with what looks to me like a completely original thought that I’ve never seen before.
Rather than—or at least in addition to—being a bug, this strikes me as one of charity’s features. Most arguments are, indeed, neither original nor very good. Inasmuch as you can substitute them for more original and/or coherent claims, then so much the better, I say.
Another consideration is the effect of your decision criteria on the lesser evil itself. All else being equal, and assuming your politics aren’t so unbelievably unimaginative that you see yourself somewhere between the two mainstream alternatives, you should prefer the lesser evil to be more beholden to its base. The logic of this should be most evident in parliamentary systems, where third-party voters can explicitly coordinate, sometimes backing their nearest mainstream parties and sometimes withdrawing support from them, depending on policy concessions.
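A toy model of that leverage (with entirely made-up numbers; I still haven’t done the real math) makes the point: a party picks how much to concede to its flank, trading centrist votes against flank votes, and the vote-maximizing concession is zero when the flank’s support is unconditional but positive when the flank demands a price.

```python
# Toy model with made-up numbers: a mainstream party picks a concession level c in [0, 1]
# for its flank. Concessions cost centrist votes; the flank's votes may or may not be
# conditional on getting a concession of at least 0.5.

def votes(concession, flank_is_conditional):
    centrist_votes = 40 - 10 * concession            # concessions alienate the centre
    if flank_is_conditional:
        flank_votes = 8 if concession >= 0.5 else 0  # the flank demands a price
    else:
        flank_votes = 8                              # the flank shows up regardless
    return centrist_votes + flank_votes

for conditional in (False, True):
    best = max([0.0, 0.25, 0.5, 0.75, 1.0], key=lambda c: votes(c, conditional))
    print("flank conditional:", conditional, "-> vote-maximizing concession:", best)
# flank conditional: False -> vote-maximizing concession: 0.0
# flank conditional: True -> vote-maximizing concession: 0.5
```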
If it’s digitally embedded, even if the “base” module were bad at math in the same way we are, it would be trivial to cybernetically link it to a calculator program, just as we physical humans are cyborgs when we use physical calculators (albeit with a greater delay than a digital being would have to deal with).
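Here’s a minimal sketch of what that wiring could look like, with the “base” module stubbed out (everything here is a hypothetical illustration, not a claim about how an actual emulation would be structured): queries that parse as plain arithmetic get routed to an exact evaluator, and everything else falls through to the base module.

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def exact_arithmetic(expr):
    """Evaluate a simple arithmetic expression exactly (no guessing, no fatigue)."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def base_module(query):
    return f"[fuzzy human-style answer to: {query}]"  # stand-in for the emulated mind

def cyborg(query):
    try:
        return exact_arithmetic(query)  # delegate math to the calculator module
    except (ValueError, SyntaxError):
        return base_module(query)       # everything else goes to the base module

print(cyborg("17 * 23 + 4"))         # 395, computed exactly
print(cyborg("what's for dinner?"))  # falls through to the base module
```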