What does it mean “studied it at university”? Do you mean something like “Took econ 101 and 102 as part of gen ed requirements” or “majored in economics”?
A random comment.
This is the first time I’ve seen “anti-agathics”. Based on what I know of biblical Greek, I read this as something like “anti-good”. If I had been in charge of naming an anti-aging drug, I would have called it something like “anti-presbycs” (maybe that wasn’t chosen because it looks too much like “Presbyterian”? “Presbyterian” does derive from the word meaning “elder”...).
This isn’t a request to change the wording if that’s what people who will be taking the survey are familiar with BTW, just something I noticed. Carry on.
Miracle claims are on shaky epistemic grounds. How do you confirm it was a miracle and not someone being mistaken about some phenomenon? Or more likely, that they don’t have enough knowledge of the physical or cognitive sciences to know whether some phenomenon is possible or miraculous?
The proper use of humility is to take into account that we are human beings: we make mistakes and we have insufficient information, so we should try to anticipate our mistakes or lack of info and correct for them in advance. That means the prior for “I’m a flawed human being” should be higher than the prior that something was a miracle. Indeed, one should always take alternative explanations into account.
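To put toy numbers on that prior comparison (all values invented purely for illustration), the odds form of Bayes makes the point quickly:

```python
# Made-up numbers to illustrate the prior comparison above.
p_miracle = 1e-6   # assumed prior that a genuine miracle occurred
p_mistake = 0.05   # assumed prior that I misperceived / lack relevant knowledge

# Even if a miracle would explain the report perfectly and a mistake
# explains it only half the time, the posterior still favors the mistake.
p_report_given_miracle = 1.0
p_report_given_mistake = 0.5

odds = (p_miracle * p_report_given_miracle) / (p_mistake * p_report_given_mistake)
print(f"Odds(miracle : mistake) = {odds:.1e}")  # ~4e-5, overwhelmingly 'mistake'
```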
When I was 16, I wanted to follow in my grandfather’s footsteps. I wanted to be a tradesman. I wanted to build things, and fix things, and make things with my own two hands. This was my passion, and I followed it for years. I took all the shop classes at school, and did all I could to absorb the knowledge and skill that came so easily to my granddad. Unfortunately, the handy gene skipped over me, and I became frustrated. But I remained determined to do whatever it took to become a tradesman.
One day, I brought home a sconce from woodshop that looked like a paramecium, and after a heavy sigh, my grandfather told me the truth. He explained that my life would be a lot more satisfying and productive if I got myself a different kind of toolbox. This was almost certainly the best advice I’ve ever received, but at the time, it was crushing. It felt contradictory to everything I knew about persistence, and the importance of “staying the course.” It felt like quitting. But here’s the “dirty truth,” Stephen. “Staying the course” only makes sense if you’re headed in a sensible direction. Because passion and persistence – while most often associated with success – are also essential ingredients of futility.
That’s why I would never advise anyone to “follow their passion” until I understand who they are, what they want, and why they want it. Even then, I’d be cautious. Passion is too important to be without, but too fickle to be guided by. Which is why I’m more inclined to say, “Don’t Follow Your Passion, But Always Bring it With You.”
Modus ponens can be demonstrated to be valid by drawing up a truth table. How do you demonstrate that “people are more likely to believe true things”?
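As a minimal sketch, here’s that truth-table demonstration in Python: the exhaustive check over all four rows is the demonstration of validity, and no analogous table can be drawn up for the claim about belief:

```python
from itertools import product

# Check every row of the truth table: ((p -> q) and p) entails q.
for p, q in product([True, False], repeat=2):
    implies = (not p) or q           # material conditional p -> q
    premises = implies and p         # both premises of modus ponens
    entailed = (not premises) or q   # whenever premises hold, q holds
    assert entailed
    print(f"p={p!s:<5} q={q!s:<5} premises={premises!s:<5} entailed={entailed}")

print("Modus ponens is valid in all four rows.")
```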
“People are more likely to believe true things”
How do you know this?
People tend not to believe things because they’re true, but for some other reason.
Pr(People Believe | True) < Pr(People Believe | Some other explanation)? I would hazard a guess that the number of untrue things people have believed throughout human history overshadows the number of things they (we) have believed that were actually true.
It’s a bit of an ad hominem, but logical fallacies can be viewed as weak Bayesian evidence.
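As a rough illustration with made-up numbers: treat “the arguer committed a fallacy” as evidence carrying a modest likelihood ratio, and it nudges the odds without settling anything:

```python
# Hedged illustration with invented numbers: a fallacy as weak evidence
# about a claim C, via a likelihood ratio.
p_fallacy_given_false = 0.6  # assumption: false claims attract bad arguments a bit more
p_fallacy_given_true = 0.4   # assumption: true claims get argued fallaciously too

prior_odds = 1.0                                    # 50/50 before hearing the argument
lr = p_fallacy_given_false / p_fallacy_given_true   # = 1.5, a weak likelihood ratio
posterior_odds = prior_odds * lr
posterior = posterior_odds / (1 + posterior_odds)
print(f"Pr(claim false | fallacy) ~= {posterior:.2f}")  # ~0.60: nudged, not settled
```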
In the military, we had sort of ready-made memes for dealing with decision anxiety. In leadership schools it is taught with more seriousness, but in the field (so to say) we would just refer to it as “making a (fucking!) command decision”. Since in the military you have to be prepared to make decisions in life-or-death situations, time is critically important. So it was drilled into us to make any decision at all if we had significant and/or crippling anxiety about the choices to be made. If a bad decision is made, so what? Suck it up and press on (another military turn of phrase). You can correct for it later.
One vivid example was when I was in charge of the military ceremony for a somewhat well-publicized funeral. An airman had been killed in Afghanistan (funerals for active-duty members pretty much get the full production, like what you would see in some epic war movie). We had a plan for where everyone would be during the funeral, where and how we would carry the casket, etc. Of course, like the best-laid plans of mice and, well, you know, the hearse pulled up in a place where we completely did not expect it to. This was no time to sit down and patiently redraw our plan, so I had to make a command decision and change things at the last minute to make sure it looked like we knew what we were doing.
A takeaway from this would be to give yourself a time limit for making a decision. People seem to refuse to make a decision when they have too many options; time should be factored in as a sort of fourth dimension of the option space. Paring down the possible options should also include trimming the time allotted to decide. That might make the decision process easier.
...and that’s why anthropics doesn’t explain why the Cold War stayed cold.
The motivating practical problem came from this question,
“guess the rule governing the following sequence” 11, 31, 41, 61, 71, 101, 131, …
I cried, “Ah, the sequence is increasing!” With pride I looked in the back of the book and found the answer: “primes ending in 1”.
I’m trying to zero in on what I did wrong.
If I had instead said “the sequence is a list of numbers,” that would be stupider, but well in line with my previous logic.
My first attempt at explaining my mistake was to argue that “it’s an increasing sequence” was actually less plausible than the real answer, since the real answer was making a much riskier claim. I think one can argue this without contradiction (the rule is either vague or specific, not both).
I think of it in terms of making a $100 bet.
So you have the sequence S: 11, 31, 41, 61, 71, 101, 131.
A: is the “bet” (i.e. hypothesis) that the sequence is an increasing sequence of primes ending in 1. There are very few sequences (below the number 150) you can write that are increasing sequences of primes ending in 1, so your “bet” is to go all in.
B: is the “bet” that the sequence is increasing. But a “sequence that’s increasing” spreads more of its money around so it’s not a very confident bet. Why does it spread more of its money around?
Suppose we introduce a second sequence, X: 14, 32, 42, 76, 96, 110, 125.
You can see that B can account for this sequence as well, whereas A cannot. So B has to at least spread its betting money between the two sequences presented, S and X, just in case either of those is the answer in the back of the book. In reality there is an untold number of sequences that B can account for besides the two here, meaning that B has to spread its betting money across all of those sequences if B wants to “win” by correctly guessing the answer in the back of the book. This is what makes it a bad bet: it’s a hypothesis that is too general.
This is a simple mathematical way you can compare the two “bets” via conditional probabilities:
Pr(S | B) + Pr(X | B) + Pr(?? | B) = 1.00 and Pr(S | A) + Pr(X | A) + Pr(?? | A) = 1.00
Each bet has to spread its total probability across every sequence it allows. Pr(S | A) is already all in, because the A bet only fits something that looks like S. Pr(S | B) is less than all in, because Pr(X | B) is also a possibility, as is any other increasing sequence of numbers, Pr(?? | B). This is a fancy way of saying that the strength of a hypothesis lies in what it can’t explain, not what it can; ask not what your hypothesis predicts, but what it excludes.
Going by what each bet excludes, you can see that Pr(?? | A) < Pr(?? | B), even if we don’t have any hard and fast numbers for them. While there is only a limited number of increasing 7-number sequences below 150, that set is much larger than the set of 7-number sequences below 150 that are increasing primes ending in 1.
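To put rough numbers on how much wider B spreads its bet (assuming “increasing” means any strictly increasing run of 7 distinct whole numbers below 150):

```python
from math import comb

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

LIMIT = 150

# Bet B: every 7-element subset of 1..149 gives exactly one increasing sequence.
count_B = comb(LIMIT - 1, 7)

# Bet A: increasing sequences of 7 primes ending in 1, all below 150.
primes_ending_1 = [n for n in range(2, LIMIT) if is_prime(n) and n % 10 == 1]
count_A = comb(len(primes_ending_1), 7)

print(f"Sequences B allows: {count_B:,}")  # ~280 billion
print(f"Sequences A allows: {count_A}")    # exactly 1: (11, 31, 41, 61, 71, 101, 131)
print(f"So Pr(S | A) = 1, while Pr(S | B) = 1/{count_B:,}")
```

With only seven qualifying primes below 150, bet A picks out exactly one sequence, which is what going “all in” means here.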
I’m guessing that the rule P(A & B) < P(A) is stated for independent variables (though it’s actually more accurate to say P(A & B) ≤ P(A)). If you have dependent variables, then you use Bayes’ Theorem to update. P(A & B) is different from P(A | B): P(A & B) ≤ P(A) is always true, but the same does not hold for P(A | B) versus P(A).
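A quick numeric check of both claims (the probabilities are invented for illustration):

```python
# A toy joint distribution showing that P(A & B) <= P(A) always holds,
# while P(A | B) can exceed P(A) when A and B are positively dependent.
p_A = 0.30    # P(A)
p_B = 0.20    # P(B)
p_AB = 0.15   # P(A & B), chosen so A and B are positively dependent

assert p_AB <= p_A  # the conjunction can never beat its conjuncts

p_A_given_B = p_AB / p_B
print(f"P(A)     = {p_A}")
print(f"P(A & B) = {p_AB}  (<= P(A), always)")
print(f"P(A | B) = {p_A_given_B}  (0.75 > P(A): conditioning can raise probability)")
```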
This is probably an incomplete or inadequate explanation, though. I think there was a thread about this a long time ago, but I can’t find it. My Google-fu is not that strong.
You can make an anthropic reasoning argument using any almost-wiped-out ethnicity.
For example, Native Americans. Someone born to a Native American tribe is more likely to live in a world where Europe didn’t successfully colonize the Americas than in our current timeline. It’s the same anthropic reasoning, but the problem is that it’s fallacious to rest an entire argument on that one piece of evidence.
Unless I’m missing something, this version of anthropic reasoning seems to be making this argument: Pr(E | H) = Pr(H | E).
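A quick sketch with invented numbers showing how far apart the two quantities can be (H: Europe never colonized the Americas; E: a given person is born Native American):

```python
# Reminder with made-up numbers that Pr(E | H) != Pr(H | E).
p_H = 0.01             # assumed prior for the counterfactual world
p_E_given_H = 0.50     # being Native American is common in that world
p_E_given_not_H = 0.02 # and rare in ours

p_E = p_H * p_E_given_H + (1 - p_H) * p_E_given_not_H
p_H_given_E = p_H * p_E_given_H / p_E
print(f"Pr(E | H) = {p_E_given_H}")
print(f"Pr(H | E) = {p_H_given_E:.3f}")  # ~0.20, nowhere near 0.50
```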
Using a car analogy, I would say that intelligence is how strong your engine is. Whereas rationality is driving in a way where you get to your destination efficiently and alive. Someone can have a car with a really powerful engine, but they might drive recklessly or only have the huge engine for signalling purposes while not actually using their car to get to a particular destination.
Question about Bayesian updates.
Say Jane goes to get a cancer screening. She has a 5% prior probability of having cancer; the machine has a true positive rate of 80% and a false positive rate of 9%. Jane gets a positive on the test, so she now has a ~32% chance of having cancer.
Jane goes to get a second opinion across the country. A second cancer screening (same true/false positive rates) says she doesn’t have cancer. What is her probability of having cancer now?
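A minimal sketch of the sequential update, assuming the two tests are independent given Jane’s true status:

```python
def posterior(prior, p_result_given_cancer, p_result_given_healthy):
    """Bayes' theorem for one test result."""
    joint_cancer = prior * p_result_given_cancer
    joint_healthy = (1 - prior) * p_result_given_healthy
    return joint_cancer / (joint_cancer + joint_healthy)

SENS, FPR = 0.80, 0.09  # true positive rate, false positive rate

p = 0.05                             # prior
p = posterior(p, SENS, FPR)          # first test: positive  -> ~0.319
p = posterior(p, 1 - SENS, 1 - FPR)  # second test: negative -> ~0.093
print(f"Pr(cancer | positive, then negative) = {p:.3f}")
```

Under those assumptions, the negative result pulls her back down to roughly 9%.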
An Experiment In Social Status: Software Engineer vs. Data Science Manager
I made a heavy metal cover of the final boss’ theme from the arcade version of Strider https://www.youtube.com/watch?v=MQBy4X9Jr7g
I’ve submitted it to the OCRemix webpage so hopefully it will get accepted sometime this… year?
I was also noodling around with Java and made a Bayes’ Theorem example .jar with neat little slidy-bars.
I’ve also started a Master’s program in Compsci.
You should probably be skeptical when presented with binary hypotheses (either by someone else or by default). Say in this example that H1 is “emergence”. The alternative for H1 isn’t “mind-stuff” but simply ~H1. This includes the possibility of “mind-stuff” but also any alternatives to both emergence and mindstuff. Maybe a good rule to follow would be to assume and account for your ignorance from the beginning instead of trying to notice it.
One way to make this explicit might be to always have at least three hypotheses: one in favor, one for an alternative, and a catchall for ignorance, with the catchall reflecting how little you know about the subject. The less you know about the subject, the larger that bucket.
Maybe in this case, your ignorance allocation (i.e. prior probability for ignorance) is 50%. This would leave 50% to share between the emergence hypothesis and the mindstuff hypothesis. I personally think that the mindstuff hypothesis is pretty close to zero, so the remainder would be in favor of emergence, even if it’s wrong. In this case, “emergence” is asserted to be a non-explanation, but this could probably be demonstrated in some way, like sharing likelihood ratios; that might even show that “mindstuff” is an equally vapid explanation for consciousness.
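As a toy version of that allocation (all numbers invented), sharing likelihoods makes the vapidity visible: an “explanation” that predicts any observation equally well earns no posterior movement:

```python
# Toy three-bucket prior from the paragraph above; all numbers are invented.
priors = {"emergence": 0.45, "mindstuff": 0.05, "catchall": 0.50}

# Hypothetical likelihoods Pr(evidence | hypothesis). A vapid "explanation"
# predicts any observation about equally well, so its likelihood matches
# the others and the evidence fails to move the posteriors.
likelihoods = {"emergence": 0.5, "mindstuff": 0.5, "catchall": 0.5}

joint = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(joint.values())
posteriors = {h: round(v / total, 3) for h, v in joint.items()}
print(posteriors)  # unchanged: equal likelihoods mean the label did no work
```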
Yes, “Tell me more” is certainly more effective than saying something like “I don’t think that’s true”. Even if you don’t think it’s true, following a Socratic dialogue will probably be more useful at uncovering untruth without being overtly offensive.
I suggested in another thread that successive downvotes cast (1) from one person’s account (2) beyond a certain number of downvotes (3) within a set period of time should prompt the system to tell the user that they have to sacrifice personal karma until (x) days later in order to use up/downvotes.
Something like this is already in place, where a person has to sacrifice karma in order to comment on a post that itself is below a certain karma threshold.
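A hedged sketch of what that throttle rule might look like; the threshold, window, and cooldown values are placeholders, not any real forum’s configuration:

```python
import time
from collections import deque

MAX_DOWNVOTES = 10        # (2) the downvote threshold
WINDOW_SECONDS = 3600     # (3) the set period of time
COOLDOWN_SECONDS = 86400  # stands in for "until (x) days later"

class VoteThrottle:
    """Tracks one account's recent downvotes and locks voting when abused."""

    def __init__(self):
        self.recent = deque()   # timestamps of this account's downvotes
        self.locked_until = 0.0

    def try_downvote(self, now=None):
        now = time.time() if now is None else now
        if now < self.locked_until:
            return False        # locked: must sacrifice karma (or wait) to vote
        # Drop downvotes that have aged out of the window.
        while self.recent and now - self.recent[0] > WINDOW_SECONDS:
            self.recent.popleft()
        self.recent.append(now)
        if len(self.recent) > MAX_DOWNVOTES:
            self.locked_until = now + COOLDOWN_SECONDS
        return True
```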
Do you think the survey should also take into account BMI + bodyfat % if it includes fitness questions?