As another graduate student, I have in general heard similar opinions from many professors through undergrad and grad school. Never disdain for Bayes, but often something along the lines of “I am not so sure about that” or “I never really grasped the concept/need for Bayes.” The statistics books required for my classes used, in my opinion at the time, a slightly negative tone while discussing Bayes and ‘subjective probability.’
Davorak
It does charge a 5% fee, which is not small.
How about college newspapers, forums, meetups, talks, casual lunches, and whatever else works? Colleges often act as small, semi-closed social ecosystems, so it is easier to reach the critical number needed for a self-sustaining community, or the critical number of people to take an idea from odd to normal.
Can you think of other online communities that suffer or at least go through great and unpredictable change due to a high influx of new people?
I have heard people talk of punishing abortion on par with other kinds of murder. This viewpoint has the real potential to alienate people. It makes sense that people who hold that viewpoint and realize this are not shouting it to the world or filing court cases. Instead they judge that small changes are the best way to get what they want in the long term, and fight those intermediary battles instead of taking the issue straight on.
For the people who would downvote this: is it better if she did not respond to lukeprog’s post at all? Acknowledging someone when they attempt to communicate with you is considered polite. It often serves the purpose of communicating a lack of spite and/or hard feelings even as you insist on ending the current conversation.
We could have a Google+ account open and offer to hang out with interested parties nearby or far away. I got the idea from: http://lesswrong.com/r/discussion/lw/731/meetup_proposal_google/
I think the point that others have been trying to make is that gaining the evidence isn’t merely of lower importance to the agent than some other pursuits, it’s that gaining the evidence appears to be actually harmful to what the agent wants.
Yes. I proposed the alternative situation, where the evidence is simply considered to be of lower value, as an alternative that produces the same result.
I don’t see how the situation is meaningfully different from no cost. “I couldn’t be bothered to get it done” is hardly an acceptable excuse on the face of it.
At zero cost (in the economic sense, not the monetary sense) you cannot say it was a bother to get it done, because a bother would be a cost.
I disagree. In the least convenient world, where the STD test imposes no costs on Alex, it would still be instrumentally rational for him not to take it. This is because Alex knows the plausibility of his claims that he does not have an STD will be sabotaged if the test comes out positive, because he is not a perfect liar.
In a world where STD tests cost absolutely nothing, including time, effort, and thought, there would be no excuse not to have taken a test, and I do not see a method for generating plausible deniability by not knowing.
A situation at a college where they’ll give you a cookie if you take an STD test seems quite likely.
Your situation does not really count as no cost, though. In a world in which you must spend effort to avoid getting an STD test, it seems unlikely that plausible deniability can be generated in the first place.
You are correct: “Avoiding the evidence would be irrational.” does seem to be incorrect in general, and I generalized too strongly from the example I was working on.
Though this does not seem to answer my original question: is there, by definition, a conflict between “Whatever can be destroyed by the truth, should be.” and generating plausible deniability? The answer I still come up with is no conflict. Some truths should be destroyed before others, and this allows for some plausible deniability for untruths low in priority.
Is the standard then that it’s instrumentally rational to prioritize Bayesian experiments by how likely their outcomes are to affect one’s decisions?
It weighs into the decision, but it seems insufficient by itself. An experiment can change my decision radically but be on unimportant topics, topics that do not affect goal-achieving ability. It is possible to imagine spending one’s time on experiments that change one’s decisions and never getting close to achieving any goals. The vague answer seems to be: prioritize by how much the experiments are likely to help achieve one’s goals.
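One way to make this concrete is a toy value-of-information calculation (my framing, not anything from the thread; all names and numbers below are hypothetical): an experiment that often flips a decision on a low-stakes topic can still be worth less than one that rarely flips a decision on a high-stakes topic.

```python
def value_of_information(p_flip, utility_gain_if_flipped, cost):
    """Expected net benefit of running an experiment:
    the probability its result changes the decision, times the utility
    gained by making the better choice, minus the experiment's cost."""
    return p_flip * utility_gain_if_flipped - cost

# Experiment A: almost always changes a decision, but on a trivial topic.
voi_a = value_of_information(p_flip=0.9, utility_gain_if_flipped=1.0, cost=0.5)

# Experiment B: rarely changes a decision, but the stakes are large.
voi_b = value_of_information(p_flip=0.1, utility_gain_if_flipped=100.0, cost=0.5)

print(voi_a, voi_b)  # 0.4 vs 9.5: B dominates despite flipping decisions less often
```

Under these assumed numbers, prioritizing purely by "how often does this change my decision" picks A, while prioritizing by expected contribution to one's goals picks B.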
An additional necessary assumption seems to be that Alex cares about “Whatever can be destroyed by the truth, should be.” He is selfish but does his best to act rationally.
Let’s call the person Alex. Alex avoids getting tested in order to avoid possible blame; assuming Alex is selfish and doesn’t care about their partners’ sexual health (or the knock-on effects of people in general not caring about their partners’ sexual health) at all, then this is the right choice instrumentally.
Therefore Alex does not value knowing whether or not he has an STD and instead pursues other knowledge.
However, by acting this way Alex is deliberately protecting an invalid belief from being destroyed by the truth. Alex currently believes or should believe that they have a low probability (at the demographic average) of carrying a sexual disease. If Alex got tested, then this belief would be destroyed one way or the other; if the test was positive, then the posterior probability goes way upwards, and if the test is negative, then it goes downwards a smaller but still non-trivial amount.
Alex is faced with the choice of getting an STD test and improving his probability estimate of his state of infection, or spending his time doing something he considers more valuable. He chooses not to get an STD test because the information is not very valuable to him, and focuses on more important matters.
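The belief-destruction claim above is just Bayes’ theorem. A minimal sketch, with assumed numbers (the prior prevalence, sensitivity, and specificity below are illustrative, not from the discussion):

```python
# All three numbers are hypothetical assumptions for illustration.
prior = 0.02          # assumed demographic base rate of infection
sensitivity = 0.95    # assumed P(test positive | infected)
specificity = 0.98    # assumed P(test negative | not infected)

def posterior_given_positive(prior, sens, spec):
    """P(infected | positive test) via Bayes' theorem."""
    p_pos = sens * prior + (1 - spec) * (1 - prior)
    return sens * prior / p_pos

def posterior_given_negative(prior, sens, spec):
    """P(infected | negative test) via Bayes' theorem."""
    p_neg = (1 - sens) * prior + spec * (1 - prior)
    return (1 - sens) * prior / p_neg

pos = posterior_given_positive(prior, sensitivity, specificity)  # ~0.49
neg = posterior_given_negative(prior, sensitivity, specificity)  # ~0.001
```

With these assumptions, a positive result moves the 2% prior up to roughly 49%, while a negative result moves it down to roughly 0.1%: the posterior shifts a long way upward on a positive and a smaller absolute amount downward on a negative, matching the claim in the quoted passage.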
Instead of doing this, Alex simply acts as though they already knew the results of the test to be negative in advance, and even goes on to spread the truth-destroyable-belief by encouraging others to take it on as well.
Alex is selfish and does not care that he is misleading people.
By avoiding evidence, particularly useful evidence (where by useful I mean easy to gather and having a reasonably high magnitude of impact on your priors if gathered), Alex is being epistemically irrational (even though they might well be instrumentally rational).
Avoiding the evidence would be irrational; focusing on more important evidence is not. Alex is not doing a “crappy” job of finding out what is false; he has just maximized finding out the truths he cares about.
I tried to present a rational, selfish, uncaring Alex who chooses not to get an STD test even though he cares deeply about “Whatever can be destroyed by the truth, should be.”, as far as his personal beliefs are concerned.
In the few specific situations that I drilled down on, I found that “deliberately doing a crappy job of (a)” never came up. Sometimes, however, the choice was between doing (a)+(b) with topic (d) or doing (a)+(b) with topic (e), where it is unproductive to know (d). The choice is clearly to do (a)+(b) with (e) because it is more productive.
Then there is no conflict with “Whatever can be destroyed by the truth, should be.” because what needs to be destroyed is prioritized.
Can you provide a specific example where conflict with “Whatever can be destroyed by the truth, should be.” is ensured?
I do not see an obvious and direct conflict, can you provide an example?
Some sense that there’s something distinct about her which would mean that lukeprog
This “something distinct”: would a more detailed set of specs qualify? In your mind, is it that lukeprog seems to have few and shallow specs that bothers you? Or is your “distinct” distinct from specs entirely?
Do you see a difference between that and stating an intention to leave the relationship if the other person has sex with someone else? Luckily I currently live in a time and place where these two scenarios are often functionally similar.
Can you give examples of the beliefs and actions of people who believe they “own other people’s sexualities”?
I think I understand where you are coming from approximately, but for clarity what specifically would liking her entail above and beyond a set of specs?
Why wish for:
I wish I wasn’t as intelligent as I am, wish I was more normal mentally
and had less innate ability for math?
Why not just wish for being better at socializing/communicating?
By:
our cultural sentiments surrounding meat consumption
Do you mean the rationalist community or the human community at large?
There seem to be several problems with the reasoning displayed in your post.
Could you communicate what you want people to take away from this, so I can put the post in a proper context and decide how to communicate the problems I see?