Open thread, Oct. 12 - Oct. 18, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Hi! I’d like the minimum amount of karma needed to make a post about the Bay Area Solstice. [Edit: Now that I have it, disregard that message. Thank you, and that’s why this post has this amount of karma.]
RSVP on Facebook
Get tickets on Eventbrite
If you want more karma you can post as Discussion and then use that karma to propagate to Main. Also you could add comments giving a summary here.
There might be an alien civilization building stuff in its solar system.
If this turns out to be aliens rather than a low-probability astronomical event, does it imply that getting out into space is a lot harder than it sounds?
I’ve read the original paper. http://arxiv.org/pdf/1509.03622v1.pdf
There is no infrared excess—that is the weirdest part of the whole thing. It means that there isn’t a large system-spanning amount of material heated by the star and radiating in the infrared, that we are just seeing a small fraction of as it happens to pass in front of the star from our angle. Instead, there must be only a small amount of material that we are seeing a reasonable fraction of each time it occults the star. An infrared excess does not depend on the type of material, merely its surface area.
This and the irregular deep nature of the occultation is very strange—large deep occultations mean the matter has to be diffuse rather than something like a planet, irregularity means there's probably multiple clumps, but the lack of infrared excess means we have to be seeing a pretty good fraction of it. The brightness of the star also wiggles a little bit on timescales of ~20 days for part of the dataset, in a manner they don't know how to interpret.
The leading theories are:
1 - Dust clumps generated from a giant impact between two planets, spread around the orbital range of that planet. Should be some infrared excess in that case though, and the odds of happening to see that in a system that isn’t actively forming are ridiculously tiny.
2 - Exocomet storm in which large, icy dusty objects rain down practically on top of the star and poof into dust, then zoom back out into the outer system, possibly with one large ancestral object breaking up into multiple ones that share an orbit and pass next to the star at irregular intervals like our own solar system’s Kreutz sungrazers. In this case large amounts of dust would be irregularly generated in close proximity to the star where we are much more likely to see them pass in front. When they went back and looked at the star with other instruments, they found a passing red dwarf star only about 1,000 AUs out which could definitely disturb the far outer system and an Oort cloud equivalent.
3 - Something new, some kind of semi-stable clumpy low mass dust belt or a new form of chaotic variable star. Or the astrometry that ruled out certain classes of explanation being wrong.
Anyone want to take bets on whether or not this will turn out in ten years to be natural?
As much as I would like for it to be aliens, I think even a 1% belief that it’s aliens is privileging the hypothesis too much. Previous ‘weird’ cosmological objects have all turned out to have far more plausible natural explanations.
All this said, though, it does seem kind of natural for a civilization to put most of its effort into surviving in its own solar system—where energy is plentiful and communication is rapid—rather than spreading outward into tenuous space where the chances of survival are very low. It's not obvious to me why a civilization should choose to colonize other solar systems. That said, if a civilization chose to do that and succeeded, it would quickly become very populous, but it requires an initial impetus.
But how often does that have to happen? They only looked at about 150,000 stars. There are hundreds of billions in our galaxy alone, and if an alien civilization developed even 1% earlier than ours, they'd have had time to colonize the entire Virgo supercluster, so long as they started near the center.
I’d say that at this point we are largely ignorant of the odds of intelligent life existing in a solar system. While at least some basic forms of life ought to be plentiful in the galaxy, the conditions for evolution from simple life to intelligent life (that is, civilization-building life) just aren’t understood to the level that would be required for ANY probability estimate to be given. Note that I’m not saying intelligent life is rare; I’m just saying that both scarcity and abundance of intelligent life are consistent with our current state of knowledge.
But that’s just the prior probability. I can still say that we have strong evidence that the probability of a given solar system having intelligent life is much, much lower than one in 150,000.
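To put a rough number on that claim, here's a back-of-envelope sketch (the specific probability values are my own invented illustrations, not anything from the survey): if each of N surveyed stars independently hosts a civilization visible at this distance with probability p, the chance of a survey finding none is (1 - p)^N.

```python
# Back-of-envelope sketch: probability that a survey of N stars
# sees zero visibly-engineering civilizations, if each star hosts
# one independently with probability p. The three p values below
# are made-up illustrations.

N = 150_000  # roughly the number of stars in the sample discussed

for p in (1 / 15_000, 1 / 150_000, 1 / 1_500_000):
    prob_none = (1 - p) ** N
    print(f"p = {p:.1e}: P(zero detections in {N} stars) = {prob_none:.3f}")
```

At p = 1/15,000 a clean survey would be wildly improbable, while at p = 1/1,500,000 it's the expected outcome—which is the sense in which a null (or near-null) result pushes p well below one in 150,000.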
Or at least intelligent life that modifies its home system in a way that is visible from thousands of light years away.
I admit that a Dyson sphere seems like an arbitrary place to stop, but I think my basic argument stands either way. If any intelligent life was that common, some of it would spread.
It enforces a statement along the lines of “these aliens got space travel recently or getting out into space is a lot harder than it sounds.” That’s weak evidence, at least, for that claim.
But if those are aliens, then aliens must be common. And if aliens are common, then there should have been tons of them that got to the space travel point long enough ago to have reached us by now.
Given that the universe started a finite amount of time ago, and supposing there is easy space travel, then there is an interval during which the first colonists have intrastellar space travel but have not visibly done interstellar space travel, and we can estimate how long that interval is. They’re in that interval, or there isn’t easy space travel.
We cannot argue “because there is one, there must have been a previous one,” you can’t do that sort of induction on the natural numbers, eventually you hit one. We can argue it’s unlikely, sure, and we weigh that unlikelihood against the unlikelihood that interstellar travel is hard in order to determine what our posterior ends up being.
But that’s a lot of information. It’s a very short interval. Since it’s so unlikely to be in that interval, this is large evidence against easy space travel.
It’s a probabilistic argument. But what isn’t? There’s no argument that allows infinite certainty. At least, I’m pretty sure there isn’t.
I agree that it’s a lot of information. But it’s also the case that we have a lot of information about physics, such that interstellar space travel being difficult is also unlikely. Which unlikelihood is larger? That’s the question we need to ask and answer, not “the left side of the balance is very heavy.”
And that’s why my conclusion is “that wasn’t made by aliens.”
The general lack of space-going aliens suggests that getting into space is harder than it sounds.
Sure, but we already knew there was a general lack of space-going aliens. Presuming this is aliens, this moves us from “are we the first? Really?” to “are we only shortly after the first? Really?”
That’s one explanation, the other being “intelligent life is harder than it sounds” and another being “any life is harder than it sounds”.
Both of those fall under “are we the first? Really?”, or the related hypothesis that we’re shortly after the first. Or did you mean to respond to NancyLebovitz?
Sorry, that was meant to be a response to Nancy Lebovitz.
Or there are fewer civilizations than we expect, or something is wiping out civilizations once they go to space, or most species for whatever reason decide not to go to space, or we are living in an ancestor simulation which only does a detailed simulation of our solar system. (I agree that all of these are essentially wanting; your interpretation makes the most sense. These examples are listed more for completeness than anything else.)
https://www.nasa.gov/feature/jpl/strange-star-likely-swarmed-by-comets
How can they get a mess of objects whirling around a star without getting into space?
I probably should have used more exact language. The Fermi Paradox isn’t mostly about species puttering around in their home solar system—it’s about filling a galaxy.
Drats, foiled again!
According to Wikipedia, in Malaysia sale and importation of sex toys is illegal, but it doesn’t sound like there’s any law against using a vibrator you made yourself.
Sexology is not a science?
Would it be more scientific to make an interdiscipline between sexology and, uhm, computer science? Oh wait...
Any tips on eliciting good, honest personal feedback? I just got a rejection from a position I wanted and will have a call with the headhunter tomorrow. I'd like to extract some useful information out of it. Any tips on good question formulations?
E.g. in a survey I ask instead of “Do you use X?” the question “In the past 3 months how many times did you use X?” to get a less biased answer.
Any good questions/ideas?
The first answer here is pretty good, though doesn’t quite apply for my situation: https://www.quora.com/Whats-the-best-way-to-ask-for-personal-feedback-from-friends-and-coworkers-on-your-strengths-and-weaknesses
Thank you!
Headhunters will rarely be honest about this. I always recommend to clients that they say “brutal feedback” instead of just feedback to make sure they’re getting good responses, but it’s the rare manager that will be honest about this.
Thanks, tried that. Not sure it worked, as I didn't learn anything concrete. We spent 30 mins in discussion though (which he didn't need to do, as there was no further value he could extract from me).
Oh well, such is life...
If he's a headhunter, then he might value the relationship with you so he can call you up when he has another job.
Maybe, but I’ve rarely gotten more than one offer from a given headhunter—actually, I’ve gotten multiple offers from one company more often than through one headhunting agency. Reading between the lines, I get the impression that most of them have a library of openings and look in real time for candidates matching them, rarely going into their back catalog.
Multiple offers might be more common for people with less specialized skillsets than mine, though.
This is true… but you should be getting back in touch with the headhunter every three months or so, to make sure you’re in the front of the catalog instead of the back :).
Wetware basis for IQ. Abstract (emphasis mine):
That seems to be pretty worthwhile without them saying how much of the variance they can "predict".
Look at the figures available outside of the paywall.
You also probably mean “worthless”.
I was just rereading Three Worlds Collide today and noticed that my feelings about the ending have changed over the last few years. It used to be obvious to me that the “status quo” ending was better. Now I feel that the “super happy” ending is better, and it’s not just a matter of feelings—it’s somehow axiomatically better, based on what I know about decision theory.
Namely, the story says that the super happies are smarter and understand humanity’s utility function better, and also that they are moral and wouldn’t offer a deal unless it was beneficial according to both utility functions being merged (not just according to their value of happiness). Under these conditions, accepting the deal seems like the right thing to do.
Does the story actually say the Superhappies really know humanity's utility function better? As in, does an omniscient narrator tell it, or is it a Superhappy or one of the crew that says this? That changes a lot, to me. Of course the Superhappies would believe they know our utility function better than we do. Just like how the humans assumed they knew what was better for the Babyeaters.
Similarly, the Superhappies are moral, for their idea of morality. They were perfectly willing to use force (not physical, but force nonetheless) to encourage humans to see their point of view. They threatened humanity and were willing to forcibly change human children, even if the adults could continue to feel pain. While humans also employ threats and force to change behavior, in most cases we would be hard-pressed to call that "moral."
From a meta-perspective, I'd find it odd if Yudkowsky wrote it like that. He's not careless enough to make that mistake and as far as I know, he thinks humanity's utility function goes beyond mere bliss.
The only way I think you could see the Superhappies' solution as acceptable is if you don't think jokes or fiction (or other sort of arts involving "deception") are something humans would value as part of their utility function. Which I personally would find very hard to understand.
Um, that’s the opposite of how utility functions work. They don’t have sacred components. You can and should trade off one component for a larger gain in another component. That’s exactly what the super happies were offering.
What I’m saying is that humans aren’t wrong in trading off some amount of comfort so they can have jokes, fiction, art and romantic love.
Wait, why would this be true? Utility functions don't have to be linear; it could even be the case that I place no additional utility on happiness beyond a certain level.
True, but the question in the story is whether total cost of suffering > total benefit from being able to suffer. These are the components being traded. When put this way, the question answers itself. The only reason to reply “no” is status quo bias (mentioning sacred components of utility is an example of that). The standard fix for that is the reversal test: do you think the current amount of suffering is coincidentally exactly optimal, or would you prefer to add some more? That test is actually mentioned in the story, the humans apply it to babyeaters, but forget to apply it to themselves.
The answer to this question is “No.”
Some people could use more. Many others could use less.
The question you should ask first is whether being able to suffer is a good thing or a bad thing. You start with the assumption that it is bad, that suffering is bad. You do not sufficiently investigate what the alternative is; you do not sufficiently consider that experience is subjective, and subjectivity requires reference points. To eliminate, in perpetuity, that half of the axis below the current reference point, is to eliminate the axis entirely.
Do you have a proof for this? As far as I know, we have no universally agreed upon way to compare different ways of calculating utility.
There’s no way of calculating utility, period. The issue is more substantively that suffering is relative, and that the elimination of suffering is also the elimination of happiness.
Please explain in more detail. The Buddhist part of my brain just had a spit-take upon reading that.
Happiness and suffering are the same thing—the experience of a divergence from the norm of your well-being, your ground state. They just differ in direction.
A long time ago, I experienced both. For most of my life, I experienced neither—you think pain is a negative experience, I found it to be an -interesting- experience, a diversion from the endless gray. Today, I experience… a very limited degree of both, as a result of gradually accepting that suffering is the cost paid to experience happiness.
Equanimity, as it transpires, isn’t something you can experience only with regard to those things you don’t want to directly experience.
True, the difference is the direction, but surely that counts for something? Pain and pleasure are chemically and neurologically different phenomena. A ground state of “endless gray” is not something you’d really want.
I’m guessing you may be a Roman Catholic. In case you’re not, how did you come to see suffering as having exchange value?
I hope my comments are not taken as offensive. I know I sometimes tend to dramatize my degree of surprise. I genuinely wish to understand your position.
I still don’t experience “pleasure”, at least in sense where I can say, “Yes, that sensation is positive in a way other sensations are not”. At best I can say I experience variety. Pain is just starting to be a negative thing; it’s difficult to accept it as suffering when it was one of the few things that offered any variety at all to my experience for many years. Pain isn’t pleasure, they’re different flavors, but they’re both spices.
This is very true.
I was raised, and remain, an atheist. And exchange value isn’t quite the same thing; it’s more they’re the same variable, but different values. Living for more than a decade without either suffering or happiness, and only starting to experience happiness when I started to allow myself to experience suffering.
I regard suffering and happiness as sums, rather than independent variables; they're composite emotions, perhaps better modeled as waves, created by summing up one's current total mindstate. Each is the inverse of the other; being waves, rather than simple linear values, it's possible to both be suffering and be happy, if one area of one's life is going well and one area is going poorly. But they're both invariably tied to one's norm; if one has had a consistently good life, their life continuing to be consistently good isn't going to provide any happiness, even though the same section of life, transplanted into somebody with a consistently bad life, would provide ecstasy. Likewise, a consistently bad life doesn't translate into suffering; it's the particularly bad parts of that life that are experienced as suffering, everything else is experienced as the norm.
This is backed up by studies of self-reported happiness, which tracks a norm, and only rarely [ETA: permanently] deviates from that norm. This norm, this base level of self-reported happiness (which I distinguish from experienced happiness), is the norm from which happiness and suffering are experienced as deviations.
True, but only partially true. The stable base level, as you know, varies. There are people with high-happiness stable level and people with low-happiness stable level. These people look and behave very differently in real life. The high-base people look and behave happy at their neutral setting. I don't see any reason to believe that it's just outwards manifestations which do not reflect the internal state. The low-base people are, in contrast, much less happy at their neutral setting.
So yes, on the one hand happiness/suffering is relative to your base state; but on the other hand there is an absolute scale as well and high-base people are happier than low-base people.
It’s hard to say what goes on in other people’s heads, but my self-reported happiness would be an assessment of my well-being relative to what I regard as my cultural norm, whereas my experienced happiness is a different value entirely.
I base my belief that this is the norm for humans on the fact that life satisfaction decreases are correlated with suicide rates irrespective of the absolute value of life satisfaction (although certain factors can have an inhibitive effect); that is, wealthy nations, which generally have higher self-reported happiness levels, also have high suicide rates. Their high base level of happiness, if this were the same variable as experienced happiness, should otherwise offset the suffering they experience, which does not appear to happen.
People’s social behavior is more predicated on their perceived relationship to the local/current social group than the state of their internal variables. I don’t base this on any study, but rather personal observation.
I’m not talking about evaluating one’s own internal state. I’m talking about outward signs.
I know both high-base and low-base people from, more or less, the same cultural circles. It’s not that they would answer the question “How happy are you?” differently—I don’t know, I haven’t asked. It’s just that the high-base people smile and laugh a lot, are prone to engaging in spontaneous fun, are generally comfortable with life. And the low-base people tend to have a characteristic disapproving expression on their faces (which will actually mold their face by middle age), whine and grumble a lot, and find life generally unpleasant.
Note that here I’m talking about, basically, long-term averages. In the short term high-base people can and will get unhappy and depressed; low-base people can and will get excited and joyful. But both will revert to the mean—I’m not talking about bipolar people who will oscillate between highs and lows, they are a separate category.
What happened to you during those years? Feel free to decline to answer if I’m being too intrusive.
At the start, I decided that emotions were holding me back, and that logic was the more appropriate path, and so sat down one day and destroyed my emotions.
Over the next few years? I graduated high school, then college, got a couple of low-level jobs, then a real job, which I’ve held since. Dated a few people, role-played a normal person in the course of my interactions with them.
My emotions weren’t completely gone, over this period of time, but rather… remote, happening to somebody else. If they got particularly intense, I could observe my body’s reaction to them—hands clenching in anger, for example—but I didn’t actually experience them. The emotions were there, but the connection to my conscious mind was severed.
At some point in there, I read Atlas Shrugged, which convinced me that emotions were not, in fact, useless distractions from pure logic. I still wasn’t experiencing them, but the absence was no longer desirable; at that point, it was neutral. Everything was neutral, really. That began the gray phase of my life.
I honestly don’t remember much from that period of time. Nothing had any kind of significance. I worked, I dated, read books, played games. None of it particularly mattered; existence was a habit without importance. It wasn’t unpleasant, because unpleasantness would have been something. I was told I was depressed. If I was, if I wasn’t—didn’t particularly matter.
Then I tried LSD. And… I had a day that wasn't gray. I appreciated the green color of the leaves on trees, the texture of the bark. *Shrug.* So I decided I would prefer to live like that all the time, and started permitting myself to experience life again. Started taking Vitamin D, which kick-started the process.
Which began a rather dark period, as allowing myself to experience required confronting all the suffering I had avoided. The deaths of some people who had been close to me in my youth. An ex-girlfriend raping me, and before that with another partner, my first sexual encounters having been undesired, but my having not refused because I didn’t care enough to. How inherently abusive many of the relationships I was in were, how dysfunctional the situation I had allowed myself to get into was. Admitting to myself that much of the past decade of my life had been a failure.
And then things got better, because the recognition that things were bad was the same as the recognition that things could get better, and so I started making things better. I got out of the situation, and have started working towards the next phase in my life.
That’s an amazing journey of self-discovery. I, too, had a period where I wanted to erase the parts of me that I found useless, but I didn’t go as deeply Vulcan as you did. (You’re the first person I’ve met who became more sensitive and overall nicer because of Atlas Shrugged.) I’m sorry to hear that you went through so many dark places during your process, and I find your final meditations on the meaning of suffering to be quite inspiring. You have my admiration.
Pain and suffering are not the same thing. One woman will suffer while giving birth while the next doesn’t and enjoys the experience.
I think what the "true" (status-quo) ending proves is that the Super-Happies did not accurately model humanity's utility function at all. If they had, they would have proposed a deal where humanity gets rid of most of its pain, but still keeps some, especially those "grim" things that humans actually like (somewhat counter-intuitively). (And perhaps the Babyeaters' thing would then be understood as one of these "grim" things by humans, as it clearly is for the Babyeaters themselves; it's not clear if the Superhappies would be willing to acquire this value, though.) This is a deal that humans would indeed accept, since it agrees with their values. I think the true moral of this story is that getting human wants right for something like CEV is a hard problem, and making even small mistakes can have big consequences.
My feeling is that many utility functions in the general class of utility functions that the Superhappies' is drawn from would lie about how advantageous it is to merge. Weren't the humans going to lie to the babyeaters?
But it’s still a compromise. Is it part of humanity’s utility function to value another species’ utility function to such an extent that they would accept the tradeoff of changing humanity’s utility function to preserve as much of the other species’ utility function?
I don’t recall any mention of humanity being total utilitarians in the story. Neither did the compromise made by the superhappies strike me as being better for all parties than their original values were, for each of them.
The only reason the compromise was supposed to be beneficial is because the three species made contact and couldn’t easily coexist together from that point on. Also, because the superhappies were the stronger force and could therefore easily enforce their own solution. Cutting off the link removes those assumptions, and allows each species to preserve its utility function, which I assume they have a preference for, at least humans and baby-eaters.
There was an asymmetry in the story, if I remember correctly.
Babyeaters had a preference for other species eating their babies. Humans and superhappies had a preference for other species not eating their babies. This part was symmetrical. Superhappies also had a preference for other species never feeling any pain. But humans didn't have a preference for other species feeling pain; they just wanted to more or less preserve their own biological status quo. They didn't mind if superhappies remain… superhappy.
This is why cutting the link harms the superhappy utility function more than the human utility function. -- Humans will feel the relief that babyeater children are still saved by superhappies, more quickly and reliably than humans could do. On the other hand, superhappies will know that somewhere in the universe human babies are feeling pain and frustration, and there is nothing the superhappies can do about it.
The asymmetry was that superhappies didn't seem ethically repulsive to humans. Well, apart from what they wanted to do with humans, which was successfully avoided.
In the story the superhappies propose to self-modify to appreciate complex art, not just simple porn, and they say that humans and babyeaters will both think that is an improvement. So to some degree the superhappies (with their very ugly spaceships) are repulsive to humans, although not as strongly repulsive as the babyeaters.
I guess whether it is beneficial or not depends on what you compare to? They say,
So they are aiming for satisficing rather than maximizing utility: according to all three before-the-change moralities, the post-change state of affairs should be acceptable, but not necessarily optimal. Consider these possibilities:
1) Baby-eaters are modified to no longer eat sentient babies; humans are unchanged; Superhappies like art.
2) Baby-eaters are modified to no longer eat sentient babies; humans are pain-free and eat babies; Superhappies like art.
3) Baby-eaters, humans, and Superhappies are all unchanged.
I think the intention of the author is that, according to pre-change human morality, (1) is the optimal choice, (2) is bad but acceptable, and (3) is unacceptable. The superhappies in the story claim that (2) is the only alternative that is acceptable to all three pre-change moralities. So the super-happy ending is beneficial in the sense that it avoids (3), but it’s a “bad” ending because it fails to get (1).
Hmm, I guess I interpreted the super happies proposal differently, as saying that humans get compensation for any downgrade from (1) to (2).
I’m new here. Been lurking occasionally for a few weeks. I have finally signed up. On principle should I avoid voting? (For the time being?)
Feel free to vote!
See http://lesswrong.com/lw/m5l/guidelines_for_upvoting_and_downvoting/ for ‘rules’ about voting.
Thanks!
What reasons would you have for not voting?
I was worried that it may be discouraged. I came from Reddit and most subs seem pretty against non-subscribers voting. I wasn't sure how that would affect new members here.
If I remember correctly the voting system takes into account your newness, and won’t let you overly downvote without sufficient karma.
I’m contemplating a discussion post on this topic, but first I’ll float it here, since there’s a high chance that I’m just being really stupid.
I’m abysmally unsuccessful at using anything like Bayesian reasoning in real life.
I don’t think it’s because I’m doing anything fundamentally wrong. Maybe what I’m doing wrong is attempting to think of these things in a Bayesian way in the first place.
Let’s use a concrete example. I bought a house. My prior probability that any given household appliance or fixture will break and/or need maintenance in a given month is on the order of 5%, obviously with some variability depending on what appliance we’re talking about. This prior is an off-the-cuff intuitive figure based on decades of living in houses.
Within a month of buying this house, things immediately start breaking. The dishwasher breaks. Then the garbage disposal. The sump pump fails completely. The humidifier needs repair. The air conditioner unit needs to be entirely replaced. The siding needs to be repainted. A section of fence needs to be replaced. The sprinklers don’t work. This is all within roughly the first four months.
So, my prior was garbage, but the real issue for me is that Bayesian reasoning didn't really help me. The dishwasher breaking didn't cause me to shift my Background Probabilistic Breakage Rate much at all. One thing breaking within the first month is allowed for by my prior model. Then the second thing breaks—okay, maybe I need to adjust my BPBR a bit. Still, there's little reason to expect that several more important things will break in short order. But that's exactly what happened.
There is a causal story that explains everything (apparently) breaking at basically the same time, which is that the previous owners were not taking good care of the house, and various things were already subtly broken and limping along at passable functionality for a long time. The problem is that this causal story only becomes promoted to “hypothesis with significant probability mass” after two or three consecutive major appliance disasters.
What is annoying about all this is that my wife doesn’t attempt to use any kind of probabilistic reasoning, and she is basically right all the time. I was saying things like, “I really doubt the garbage disposal is really broken, we just had two other major things replaced, what are the odds that another thing would break so quickly?” and she would reply along the lines of, “I’m pretty sure it’s actually broken, and I can’t fathom why you keep talking about odds when your odds-based assessments are always wrong,” and I’m at the point of agreeing with her. Not to mention that she was the one who suggested the “prior owners didn’t maintain the house” hypothesis, while I was still grimly clinging to my initial model, increasingly bewildered by each new disaster.
I am probably a poster child for “doing probabilistic thinking wrong” in some obvious way that I am blind to. Please help me figure out how and where. I have my own thoughts, but I will wait for others to respond so as to avoid anchoring.
I think you were basically doing okay, it’s just that as soon as you formulated your initial hypothesis you should have actively sought out a way to disprove it. How hard can I lean on my fence? Is scratching lazily sufficient to remove paint on the siding? Do I dare to wash the floor under the bookshelf?… After all, if you suddenly received lots of evidence to the contrary, you would a) update fast, and b) earn husband points.
In essence, you should always ask yourself, is this still the relevant question?
Some random thoughts:
As a bounded agent, you have to be aware that it’s physically impossible to consider all the hypotheses. When you encounter new evidence, you might think of a new hypothesis to promote that you hadn’t thought of before—in fact, this is an unavoidable part of being a good bounded agent. So don’t worry about coming up with the One True Prior ahead of time and then updating it—instead, try to plan for the most likely outcomes, but leave a “something else” category and be ready to change your mind.
And given that we’re biased, when we make plans we’re probably going to get some probabilities wrong—in this case, future events contain information about how one was biased. Try to learn about your own biases, which often means being more influenced by evidence than an unbiased agent.
If you still want to try reasoning probabilistically, I’d look into Tetlock’s Good Judgment Project and start planning how to practice probability estimation. Oh, and check out the calibration game.
You are indeed doing it very wrong. As far as probabilistic reasoning goes, the fact that one item broke doesn’t reduce the chances that a second item breaks at all.
Yeah, okay, I worded that stupidly. It’s more like this:
“This 20-sided-die just came up 20 twice in a row. The odds of three consecutive rolls of 20 is 0.0125%. I acknowledge that this next roll has a 1⁄20 chance of coming up 20, assuming the die is fair. However, if this next roll comes up 20, we are witnessing an extremely improbable sequence, so improbable that I have to start considering that the die is loaded.”
The equivalent of “considering that the die is loaded” in your example is “the previous owners did a bad job of maintaining the house”. It indeed makes sense to come to that conclusion. That’s also basically what your wife did.
Apart from that, one difference between sequences picked by humans to look random and genuinely random data is that real random data contains such improbable-looking runs more frequently.
The “however” part seems irrelevant.
I mean, regardless of what the previous two rolls were—let’s call them “X” and “Y”—if the next roll comes up 20, we are witnessing a sequence “X, Y, 20”, which has a probability of 0.0125%. That’s true even when “X” and “Y” are different from 20.
You could make the sequence even more improbable by saying “if this next roll comes up 20, we are witnessing an extremely improbable sequence—we are living in a universe whose conditions allow creation of matter, we happen to be on a planet where life exists, dinosaurs were killed by a comet, I decided to roll the 20-sided die three times, the first two rolls were 20… and now the third roll is also 20? Well, this all just seems very very unlikely.”
Or you could decide that the past is fixed, if you happen to be in some branch of the universe you are already there, and you are only going to estimate the probability of future events.
Even better, what ChristianKl said. A better model would be that depending on the existing state of the house there is a probability P saying how frequently things will break. At the beginning there is some prior distribution of P, but when things start breaking too fast, you should update that P is probably greater than you originally thought… and now you should expect things to break faster than you expected originally.
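One way to make “update P upward” concrete is a Beta-Bernoulli model. A minimal sketch (the specific prior numbers below are illustrative assumptions, not anything from the parent comment):

```python
def breakage_rate_posterior_mean(prior_alpha: float, prior_beta: float,
                                 broken: int, inspected: int) -> float:
    """Posterior mean breakage probability under a Beta-Bernoulli model.

    Treat each major system in the house as an independent trial that is
    either broken or fine. With a Beta(a, b) prior on the per-system
    breakage probability, observing `broken` failures among `inspected`
    systems gives posterior mean (a + broken) / (a + b + inspected).
    """
    return (prior_alpha + broken) / (prior_alpha + prior_beta + inspected)
```

For example, a Beta(1, 9) prior (mean 0.1, i.e. “about one system in ten is broken”) moves to a posterior mean of 9/20 = 0.45 after finding 8 broken systems out of 10 inspected; the data quickly overwhelms an optimistic prior.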
Yes, all sequences X,Y,Z are equally (im)probable if the d20 is a fair one. But some sequences—in particular those with X=Y=Z, and especially those with X=Y=Z=1 or X=Y=Z=20—are more likely if the die is unfair, because they’re relatively easy and/or relatively useful/amusing for a die-fixer to induce.
As you consider longer and longer sequences 20,20,20,… their probability conditional on a fair d20 goes down rapidly, whereas their probability conditional on a dishonest d20 goes down much less rapidly because there’s some nonzero chance that someone’s made a d20 that almost always rolls 20s.
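For concreteness, here is the fair-vs-loaded update as a short calculation. Both the 1-in-1000 prior and the loaded die’s 1/2 chance of rolling a 20 are made-up numbers for the example:

```python
from fractions import Fraction

def p_loaded_given_twenties(n: int,
                            prior_loaded: Fraction = Fraction(1, 1000),
                            p20_if_loaded: Fraction = Fraction(1, 2)) -> Fraction:
    """P(die is loaded | n consecutive 20s), by Bayes' rule.

    A fair d20 rolls 20 with probability 1/20; the hypothetical loaded
    die rolls 20 with probability p20_if_loaded. Both that figure and
    the prior are assumptions chosen for illustration.
    """
    like_fair = Fraction(1, 20) ** n
    like_loaded = p20_if_loaded ** n
    joint_loaded = like_loaded * prior_loaded
    return joint_loaded / (joint_loaded + like_fair * (1 - prior_loaded))
```

Even starting from a 1-in-1000 prior on foul play, three straight 20s push the posterior to 1000/1999, just over 50%—exactly the effect described above: the fair-die likelihood shrinks tenfold faster per roll than the loaded-die likelihood.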
Separate advice: look around and check whether anything else is about to break and can be saved from expensive replacement by repairing it now.
One possible mistake is assuming that problems will be independent and spread out evenly over time. That’s an extreme assumption. In real life there are always more reasons for problems to cluster than to anti-cluster (so to speak), so it doesn’t balance out at all. Also, problems will do more harm when clustered, because your ability to cope is reduced. So it makes sense to prepare for clustered problems. When two things go wrong, get ready for the third. That’s very obvious in software engineering, if you find ten bugs, chances are you haven’t found them all. But it’s true in real life too.
The more general problem is that you just seem to have less life experience than your wife. To fix that, go out and get experience. Fix stuff, haggle, make arrangements… It’ll improve your life in other ways as well.
You have two hypotheses: the appliances breaking are not connected (independent); and the appliances breaking are connected (dependent).
In the first case you are saying the equivalent of “I tossed the coin twice and it came up heads both times, it’s really unlikely it will come up heads the third time as well” which should be obviously wrong.
In the second case you should discard your model of independence alongside with your original prior and consider that the breakages are connected.
I think the moral of the story is that life is complicated and simple models are often too simple to be useful. You should discard them faster when they show signs of not working.
And, of course, if you are wondering whether your garbage disposal is really broken, you should go look at your garbage disposal unit and not engage in pondering theoretical considerations.
See my response to ChristianKl below for my clarification on my reasoning about “consecutive coin flips” which could still be wrong but is hopefully less wrong than my original wording.
I agree that I should have discarded my model more quickly, but I don’t quite see how to generalize that observation. Sometimes the alternative hypothesis (e.g. the breakages are connected) is not apparent or obvious without more data—and the process of collecting data really just means continuing to make bad predictions as you go through life until something clicks and you notice the underlying structure.
My wife seems to think that making explicit model-based predictions in the first place is the problem. I have a lot of respect for System 1 and am sympathetic to this view. But System 2 really shouldn’t actively lead me astray.
Yes, and note that this part—“that I have to start considering that the die is loaded”—is key.
Um, directly? All models which you are considering are much simpler than the real world. The relevant maxim is “All models are wrong, but some are useful”.
I think you got caught in the trap of “but I can’t change my prior because priors are not supposed to be changed”. That’s not exactly true. You can and (given sufficient evidence) should be willing to discard your entire model and the prior with it. Priors only make sense within a specified set of hypotheses. If your set of hypotheses changes, the old prior goes out of the window.
The naive Bayes approach sweeps a lot of complexity under the rug (e.g. hypotheses selection) which will bite you in the ass given the slightest opportunity.
Yeah, well, welcome to the real world :-/
She is correct if your models are wrong. Getting right models is hard and you should not assume that the first model you came up with is going to be sufficiently correct to be useful.
I see absolutely no basis for this belief. To misquote someone from memory: “Logic is just a way of making errors with confidence” :-P
If they’re independent, the odds are exactly the same as if two other major things had not been recently replaced. If they’re dependent, the odds are higher. For this decision, you have a lot more evidence about the specific, so your base rate (unless it’s incredibly small) doesn’t matter, and the specific evidence of brokenness overwhelms it.
A better application is budgeting for next month. Compare how much you’re planning to set aside for repairs with how much your wife is. See who’s right. Update. Repeat.
I can’t help but notice, in a slightly off-topic fugue, that the dishwasher, the garbage disposal, and probably the sump pump share a drainage system. You may wish to consider the possibility that these are not independent breakages, and that until you fix the underlying problem, you should expect further breakages (i.e., check your drains).
Also, the siding needing to be repainted and a section of fence needing to be replaced doesn’t really sound like “things breaking” (I could be wrong). Could you have been ignoring some important information right from the start?
There’s your problem right there. Note, that this prior effectively assigns zero probability to the “prior owner didn’t maintain the house” hypothesis.
What you should have done is assign some (non-zero) probability to that hypothesis; then, each time something breaks, update towards the “poor maintenance” hypothesis.
Hi! Semi-new lurker here. What is the current etiquette on necroing? I didn’t find any official etiquette guide.
Necro to your heart’s content. It’s fine.
Feel free to comment—since only the user you’re replying to (and anyone that has chosen to subscribe to updates for that specific post) is notified, you don’t need to fear being a distraction to masses of people who might no longer care.
You might consider clicking on the username. The second number shows karma in the last 30 days, and if it is 0 you might not get answers.
That’s a pretty good heuristic. OTOH, up until this week, my karma in the last 30 days was 0. Now that I’m starting the sequences soon (in the form of “Rationality: From AI to Zombies”), I suspect I’ll involve myself in the community some more. Then again, my account didn’t functionally exist until recently, mainly being there for the purpose of reserving the name.
It does also show up on the Recent Comments view, which is one of the most common ways for people to jump into discussions. So it’ll be noticed by other people as well. (Which is good, if they want to also chime in.)
See http://lesswrong.com/lw/le5/welcome_to_less_wrong_7th_thread_december_2014/ it has further points.
http://arxiv.org/abs/1412.0348
Calculating Levenshtein distance may be unoptimizable—the paper argues that edit distance cannot be computed in strongly subquadratic time unless the Strong Exponential Time Hypothesis is false.
What are the consequences?
I guess it is a bad news for bioinformatics (comparing two very long pieces of DNA), but maybe there are sufficiently useful approximations. Or if one string is fixed and only the other string varies, maybe you can precompute some data to make the comparison faster.
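For reference, the quadratic dynamic-programming algorithm (the baseline the paper argues cannot be substantially beaten) fits in a few lines:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic Wagner-Fischer DP: O(len(a) * len(b)) time, O(len(b)) space.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (free on match)
            ))
        prev = curr
    return prev[-1]
```

On two long chromosomes (hundreds of millions of bases each) that quadratic cost is exactly what makes the lower bound bad news for bioinformatics.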
I don’t think the Levenshtein distance between two chromosomes is useful. If a gene changes location it’s still, for practical purposes, mostly the same gene, but the Levenshtein distance is very different.
Mulling the Fermi paradox and escape velocity—the higher a species’ home planet’s escape velocity, the harder it is to get off the planet. I think there’s an escape velocity which is so high that chemical rocket fuels just don’t have enough energy.
I have no idea whether there’s a plausible relationship between the likelihood of technological species and the escape velocity of their planet, except that I doubt that there’d be intelligent life on planets without atmosphere. Or am I being too parochial?
Thoughts about technological species and escape velocity?
Highly speculative thoughts off the top of my head (only with what little I can remember from my high school physics):
The main factor that determines escape velocity is the mass of the planet (there’s also atmospheric drag, but it’s generally manageable unless the world is a perpetual hurricane hell, in which case I doubt it has any civilization). After a certain mass threshold, the planet is likelier to be gaseous than rocky. I don’t think Neptune-like or Jupiter-like worlds are suitable for life (but their moons are another story). In general, I’d say if the world is too big to jump out of, it’s too gaseous for anything to have walked on it anyway. Edited to add: Inhabited moons of Jupiter-like worlds would also need to take into account the planet’s escape velocity, even if it’s lower where they are.
If the planet is a big Earth (that is, quite massive but still mostly rocky), the greater gravity will result in a thicker and denser atmosphere, but I don’t know enough aerodynamics to tell how much, if any, this detail will add to the problem of escape velocity. But this difference may change the rules as to which fuels will be solid, liquid or gaseous under that planet’s normal conditions.
Another, related problem is payload. For example, if the planet’s intelligent species is aquatic, the spaceship will need to be filled with water instead of air; this will increase the total mass horribly and require a much more potent fuel (but all this is assuming that an aquatic species has had the opportunity to discover fire in the first place).
In worlds too big to escape by propulsion, people may come up with the idea of the space elevator, but the extra gravity will require taking into account the structure’s weight. The counterweight at the upper end will need to be heavier and/or farther. Issues related to which material is best suited for this building scenario and whether there’s a limit to how big a space elevator you can build are beyond my knowledge. According to Wikipedia, nanotubes appear to be a workable choice on Earth.
Some world out there may have a ridiculously tall mountain that extends into the upper atmosphere. Gravity at the top will be lower, and if a launch platform can be built there, takeoff will be easier. Of course, this is an “if” bigger than said mountain.
India has a huge coastline, but for mythical/cultural reasons, Hinduism used to have a taboo against sea travel. In the worst scenario, our heavy aliens may stay on ground, not because they can’t, but because they won’t; maybe their atmosphere looks too scary or their planet attracts too many meteorites or it has several ominous-looking moons or something.
Thank you. I’m also interested in planets with less mass/lower escape velocity and non-chemical fuel methods. Atomic or nuclear fuel? Laser launch?
The smallest planet on which you can maintain an atmosphere for gigayears of time is probably a third to half of an Earth mass (barring the effects of geology). That gives you an escape velocity between 70% and 80% of Earth’s, given similar composition and no thousand-km-thick hot ice layers or anything.
EDIT: If you assume an escape velocity of Earth’s and a specific impulse similar to a Merlin engine and ignore all gravity drag and atmosphere, using the rocket equation an SSTO to LEO requires a fuel to payload+structure mass ratio of at least 12.0. If you assume an escape velocity of 75% that of Earth, it requires a mass ratio of at least 6.5. Probably doubles your mass to orbit per unit fuel. If you have an escape velocity of 1.25x that of Earth, your SSTO requires a mass ratio of 22.4. Mars, by comparison, reads as a mass ratio of 3.1 under these optimistic assumptions.
Of course staging improves all of these numbers and squishes them together some, as does using better fuel than kerosene, while dealing with an atmosphere and gravity drag and propellants worse than kerosene makes things much worse. For a reality check, existing real multistage Earthly launch systems I just quickly looked up have mass ratios between ~35 and ~15 (though the 15 includes the total mass of the space shuttle, not just the payload, while the upper stage is not included in other, higher numbers for other systems).
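The comparison above can be re-derived from the Tsiolkovsky rocket equation. A rough sketch, where the delta-v approximation (surface circular orbital velocity, i.e. escape velocity over √2, with gravity and drag losses ignored) and the ~310 s Isp are my own assumptions, so the numbers only approximate the parent’s:

```python
import math

def fuel_to_dry_ratio(v_escape_m_s: float,
                      exhaust_velocity_m_s: float = 3050.0) -> float:
    """Fuel mass per unit of (payload + structure) mass for an idealized SSTO.

    Assumes delta-v to low orbit is roughly v_escape / sqrt(2) (circular
    orbital velocity near the surface) and ignores gravity and drag losses.
    The default exhaust velocity corresponds to an Isp of ~310 s, in the
    kerosene/LOX range -- both figures are rough assumptions.
    """
    delta_v = v_escape_m_s / math.sqrt(2)
    # Tsiolkovsky: m_total / m_dry = exp(delta_v / v_e); fuel/dry is that minus 1.
    return math.exp(delta_v / exhaust_velocity_m_s) - 1
```

With Earth’s escape velocity (~11.2 km/s) this gives a fuel-to-dry ratio around 12; at 75% of Earth’s, around 6; at 1.25×, it climbs past 20, illustrating how steeply the exponential punishes heavier worlds.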
Assuming an advanced civilization, the main limiting factor for the viable commercial use of nuclear energy would be the abundance of radioactive elements in the planet. During the formation of the planet, its mass will have an effect on which elements get captured. Unfortunately, Wikipedia isn’t helpful on the specifics of planet mass vs. planet composition, but we know it depends on the composition of the protoplanetary nebula, which depends on the type of star. Too many factors.
Nitpick: It wouldn’t have to be commercial use of nuclear energy. Even if we’re limited to human institutions, it could be governmental use, and I have a notion that religion might be the best sort of institution for getting people off the planet. Religions have a potential for big, long term projects that don’t make practical sense.
Thanks for looking into the question of planetary mass and getting off the planet—once the question occurred to me, it exploded into a lot of additional questions, and we haven’t even gotten to whether planetary mass might have an effect on the evolution of life.
One additional factor: the amount of radioactive elements still usable (that is, not completely decayed) vs. how many billion years it took to evolve from alien amoeba to alien tool-users.
Giant capacitor plates and you suddenly remove the insulation?
Good analysis! A few remarks:
In practice even for a planet with as thin an atmosphere as Earth, getting past the atmosphere is more difficult than actually reaching escape velocity. One of the most common times for a rocket to break up is near Max Q which is where maximum aerodynamic stress occurs. This is generally in the range of about 10 km to 20 km up.
Getting enough mass up there to build a space elevator is itself a very tough problem.
Whether gravity is stronger or weaker on top of a mountain is surprisingly complicated and depends a lot on the individual planet’s makeup. However, at least on Earth-like planets it is weaker. See here. Note though that if a planet is really massive it is less likely to have large mountains. You can more easily get large mountains when a planet is small. (e.g. Olympus Mons on Mars).
This would require everyone on the planet to take this same attitude. This seems unlikely to be common.
You got me curious, and I read a bit more, and found this on Wikipedia:
In lay terms, I guess this means that, unlike a cannon ball, which only gets one initial “push”, a rocket is being “pushed” continually and thus doesn’t need to worry about escape velocity.
So first they get the rocket high enough to be safe from the air, and then they speed it up.
Constant density planet: Escape velocity scales with the cube root of mass.
Real planets: Goes up faster than that since the inside crunches down as mass increases. Also the geology could start getting… interesting at large masses due to the whole square cube law thing and rapidly increasing primordial heat of formation.
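The constant-density scaling can be read off directly from the definitions:

```latex
v_{\mathrm{esc}} = \sqrt{\frac{2GM}{R}}, \qquad
M = \tfrac{4}{3}\pi \rho R^{3} \;\Rightarrow\; R \propto M^{1/3}
\;\Rightarrow\; v_{\mathrm{esc}} \propto \sqrt{\frac{M}{M^{1/3}}} = M^{1/3}.
```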
Just found out: the “Realistic World Building” section of this article covers many of the topics you mention.
Thank you.
I don’t think “harder to get off the planet” means more than “spend an additional 1000 years” developing tech.
Blowing the whistle on the UC Berkeley mathematics department
Some people disagree with his version of events.
Yep.
Bureaucracies chew up and spit out people who deviate from norms. You apparently think that you are a better teacher. How relevant is that to your success in the bureaucracy? Is it necessarily beneficial? Do your students get a vote on whether you get tenure? Get a raise? Get a lab?
Some people at work work on the purported purpose of the bureaucracy. Others work the bureaucratic reward and punishment system.
It’s also worth pointing out that conflicting institutional loyalties are a huge source of conflict. The “standard” practice in organizations is to collude with your direct management against their management—do things that favor your boss over your boss’s boss. Coward is doing things the ‘honest’ way, favoring his boss’s boss (i.e. the university as a whole) instead of his boss (the math department), which leads to both the conflict and his expectation that he’ll get support by making an ‘internal affair’ public.
But, of course, that also means he has lots of ready-made allies, regardless of the facts on the ground. We’ll see how this shakes out when more voices and details are added.
Favoring the “goals” of the organization as an abstraction over the actual punishment/reward structure of the living, breathing, and interacting cogs of the organization.
I’ve come to look at bureaucracies as parasites on the host organization.
Aligning the goals of the bureaucracy with the goals of the org is actually a very hard, very interesting, and very important problem.
Strange. Tenured professors get paid the same regardless of how many students they teach so it helps them if another instructor attracts lots of students thereby reducing the tenured professor’s teaching burden.
In the short term, yes. In the long term, no, especially for ‘support’ departments. At most large state schools, engineering is king, and physics and math are both subsidized by engineering because they need a sufficient number of professors to teach non-major physics and math classes to engineers. This isn’t to say that there’d be no math or physics without engineering, but that there would be fewer positions for math and physics faculty.
The math and physics departments, typically, insist on being research faculty, i.e. independent departments subsidized by the university as a whole, rather than pure service organizations. Coward, as a full-time lecturer, is in the ‘pure service’ role, and as one would expect the guy that’s specialized towards teaching does a much better job of teaching than the people specialized towards research. This is good for the engineering department but bad for the math department—instead of eight professors all teaching one non-major course each, you could have two lecturers teaching four non-major courses each, with the attendant loss of prestige, funding, and political clout for the department.
So his characterization of the department’s approach to him as “you’re making us look bad” seems probable to me, especially if the math department has been playing the “our job is hard, you need to fund us more so we can do better” card.
This seems strange to me. Engineering departments should have faculty that are perfectly capable of teaching the math and physics that their students will need. And this happens to a limited extent. For example, at UC Berkeley, the computer science department offers its own discrete math course instead of telling students to take the roughly equivalent discrete math course offered by the math department. Is there something preventing this from becoming more widespread?
Would the university really stop subsidizing Math and Physics dept’s to the same degree if it weren’t for their “service” obligations? I don’t think this is right—I think the administration is broadly happy with the status quo, in terms of prestige, etc. If the department has two full-time lecturers, the only consequence is that they will also hire a bunch of full-time researchers to balance things out. By contrast, the “service” role is probably a lot more important politically for departments which teach lots of fluffy GenEd courses.
Are we sure this man is telling the truth?
As a more general observation, it’s hard to comment on this row without having some idea about the local office politics. These, of course, tend to be dominated by jockeying for power/status/prestige and not by discussions of effective teaching methods.
The political fault lines he’s describing exist at every flagship state public university, and so I’m not at all surprised to hear that a quake has happened along those lines at Berkeley.
But also most performers have a flair for the dramatic, and Coward’s excellent student reviews seem to come in part from his talent at performance. So his interpretations are likely massaged in some form, and the object-level claims could be easily exaggerated.
But he claims that he and the department differ on a fairly simple statistical claim—how to estimate the effect of his courses on students’ future performance. The related email correspondence is here, and well worth reading, both to judge that specific matter yourself and to get a sense of how defensive Coward can seem. (He’s definitely escalating emotionally, but whether justifiably is harder to know.)
My summary: In a report, Stark, a statistician, makes a three-way comparison between the three 1A classes (two of which were taught by Coward), and finds that they are not statistically significantly different. Coward asks why a three-way comparison is done, instead of comparing the Coward group to the non-Coward group. Stark replies that since the students were assigned non-randomly, we can’t separate the direct effect of instruction from any confounding variables.
Which is, of course, correct—it’s very likely that the students who got into the class with the instructor widely believed by students to be superior are more competent than the students who didn’t, and so should be expected to do better in future classes—but it’s an equally valid point against the three-way comparison.
What I expect: even if we find a naturally randomized subset of students (maybe they are forced into certain sections only due to scheduling conflicts), or even if we find things to adjust for, we will find no significant effect. It’s nothing about Coward himself, it’s just hard to find effects.
But I don’t know if UC uses that sort of reasoning anyways to figure out which contracts to renew, I think adjuncts are super mistreated in general. I often defend academia on LW, but I think the tenure-track/adjunct system is super dysfunctional and awful.
Schools comparable to Berkeley have one of three common organizations of math teachers. One, Berkeley’s old structure, is to employ no lecturers. Another is to employ a lot of lecturers, whose job is simply to teach as well as possible.
But I think the most common organization is to employ a small number of lecturers who do a small amount of teaching, but whose real job is to handle the administrative details of teaching, such as placement of freshmen, curriculum design, and instructing graduate students in teaching. I think the complaints make most sense in the context of the department expecting him to grow into such a job.
“Align more with department standards” sounds like shorthand for some more specific concerns. Coward doesn’t spell out what those concerns are.
Recently I sent a message to an old friend who had stopped talking to me a while ago. I asked if he was done ignoring me and he said something along the lines of ‘you’re temperamental, clearly delusional and gullible, which is something I can live without’. Now, I was wondering how I could improve the impressions I make on people socially, since I’m currently doing well at managing those impressions with respect to my personal wellbeing. I’d like to step past how his comments are hurtful, and recognise better how my behaviour may have hurt and continues to hurt people I know, and what I can do to improve. All tips welcome.
“Are you done ignoring me?” is attributing a bad motive to normal human behavior (people lose contact with old friends on a pretty regular basis). So that’s a very bad way to start such a conversation, and may indicate something about why he responded the way that he did.
I guess that you personally would profit from more filtering of your thoughts before you express them to other people. On LW you could easily have a higher positive karma rating than 53% by thinking more about how other people are likely to receive your posts. LW karma isn’t perfect but it’s an easy signal you can use as feedback.
When it comes to face to face interaction I think high feedback workshops are good. I would avoid PUA style training that centers around antagonistic interactions. If you want to speak without much filtering Radical Honesty workshops and Authentic Relating/Circling workshops can help you to communicate in a socially acceptable way.
Don’t generalize from a sample of one. You should pay attention to interactions on a moment to moment basis and keep track of outcomes. If you do find that people start to glaze over when you start talking about alien abductions, you might hypothesize that “delusional and gullible” is something that multiple people would agree to (or, alternatively, that it is boring subject, which is also useful information). If people seem surprised when you express annoyance, this may indicate that they would agree that you are temperamental. If you don’t see this when interacting with other people, it is possible that it is your old friend who is actually temperamental and delusional.
Perhaps I’m misinterpreting you, but I read the above to be asking, “How can I improve at x even though improving at x won’t increase my wellbeing?”
Have you tried asking him why he thought these things?
I asked him prompted by this suggestion:
I asked him why he believes that, and he has now responded to say he’s been telling me for years why, and it just goes right over my head, and he doesn’t want to be part of that anymore.
So not really sure why...
The Flash player for the video of Max Tegmark and Nick Bostrom speaking at the UN is super annoying. Anyone know how to extract the raw video file so I can watch it more conveniently? Thanks!
For many purposes, but especially for video, it is useful to pretend to be an iPhone. Just set your user-agent to iPhone and it will give you the video directly rather than Flash. That’s not as good as actually getting the video file. If you want to do that, start here.
Added: I’m using a Mac, using Safari, which is basically the same web browser as on an iphone, so pretending to be an iphone works great for me. Also, Safari has a user-agent switcher built-in, in the Developer menu, which can be turned on in the Advanced tab of Preferences. I have not tried a user-agent switcher in Chrome, and maybe that would work. But I have failed to get the video to play directly in Chrome, so maybe this is an Apple streaming format that Safari implements and Chrome doesn’t. In that case the flash player has an important role of implementing it.
Looks like you can just aim youtube-dl at the URL and it’ll start downloading.
I’ve lost my curiosity. I have noticed that over the course of the last year, I have become significantly less curious. I no longer feel the need to know anything unless I need it, I don’t understand how it is possible to desire knowledge for the sake of knowledge (even though the past me definitely did), I generally find myself unable to empathize with knowledge-seekers and the virtue of curiosity. That worries me a lot, because if you asked me two years earlier, I would have named curiosity as my main characteristic and the desire for knowledge my main driving force. Thinking over the last year, I can’t remember any life-changing experiences that would have warranted the change. May it have been the foods I ate, or some neurological damage? I would have attributed it to brain aging, if I weren’t twenty. What happened? How to reverse it? I find it crippling.
Listen to yourself. You want to know what happened to you. You’re still a curious person.
Even if you don’t feel like you want to learn in general, you want to want to learn. You’re on the path to switching from undirected to directed, from chaotic to purposeful curiosity. You already know how to pursue a question; now you need to find what questions matter to you.
The source of my wanting is conscience rather than passion, though. It’s a completely different thing, and learning is a tiring activity whose importance I realize, rather than something that empowers me or something I look forward to. That’s the problem.
You could be depressed.
I don’t feel depressed at all. On the contrary, I am quite motivated, agitated and sort of happy.
I’ve felt that lack of curiosity a fair amount over the past 5-10 years. I suspect the biggest change that reduced my curiosity was becoming financially secure. Or maybe some other changes which made me feel more secure.
I doubt that I ever sought knowledge for the sake of knowledge, even when it felt like I was doing that. It seems more plausible that I had hidden motives such as the desire to impress people with the breadth or sophistication of my knowledge.
LessWrong attitudes toward politics may have reduced some aspects of my curiosity by making it clear that my curiosity in many areas had been motivated by a desire to signal tribal membership. That hasn’t enabled me to redirect curiosity toward more productive areas, but I’m probably better off without those aspects of curiosity.
I am definitely not better off without what I lost. Genuine curiosity had tremendously powerful effect on my learning.
Consider: exploration/exploitation. Maybe some part of you has decided that it’s time to stop exploring education and it’s time to exploit the knowledge you already have? Do you feel like you have a lot of knowledge now? Or that you know enough? Is your relationship to knowledge-seeking now in the form of “disinterest”, “too busy for it”, “sick of it”, or some other sentiment?
(also as Artaxerxes said—depression, or other brain chemical things that this could be a symptom of)
In our college, students in the first four years were rumoured to go through the exploration phase, and then satiety and exploitation. It certainly felt that way to me, and anecdotally to a person a year younger, but of course it might just be because of the specific curriculum structure. (I am a botanist.)
No, I definitely haven’t learned everything I think I need. I very much need to learn a lot of things; desperately, in fact.
I still pursue knowledge from a pragmatic standpoint: “This is useful, this is not, therefore I need to learn this and can completely disregard that.” There is just no “drive” in it, none of the genuine force of curiosity that used to be so motivating. Even from a pragmatic standpoint, my ability to learn has taken a great hit.
Have you tried to look at any new areas recently? Perhaps you are getting kind of “bored” by the repetition.
Sort of yes. Maybe not sufficiently new. I shall look into it.
I had to consciously make myself read articles on my PhD topic (and not unrelated stuff, which was so much more interesting), so you just might be lucky! Or even if you don’t think so, you can at least use this property.
I’m rather frustrated that there’s not a guide to being generally healthier that uses probabilities and payoffs and such to convince readers that they should bother to do any specific activity, or adopt any specific intervention to make themselves healthier. Health information is so disorganized—which is fine for the cutting-edge stuff, but for conditions that many people get and that we’ve known how to treat for a while, such as cavities, acid reflux, and so on, I feel like it should be way the heck easier to find detailed info on how much certain activities increase or decrease your risk of getting that problem, and what the base rate is.
For example, a week ago, I would have guessed that maybe 5% of adults in the US had ever had a cavity, but a quick Google search suggests that the actual number is closer to 95%. I’ve gone from rarely flossing to flossing daily since finding this out!
Agreed. I really wish that there was a site like WebMD that actually included rates of the diseases and the symptoms. I don’t think it would be a big step to go from there to something that would actually propose cost-effective tests for you based on your symptoms.
e.g. You select sore-throat and fever as symptoms and it says that out of people with those symptoms, 70% have a cold, 25% have a strep infection and 5% have something else (these numbers are completely made up). An even better system would then look at which tests you could do to better nail down the probabilities, which could be as simple as asking some questions like “Do you have any visible rashes?” or asking for test results like a quick strep test.
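The proposed system is just a Bayesian update over diagnoses. Here is a minimal sketch with the thread’s completely made-up numbers (the disease rates and test characteristics in `priors` and `p_pos_test` are invented placeholders, not real medical data):

```python
# Prior: fraction of patients presenting with sore throat + fever
# who have each condition (invented numbers, as in the comment above).
priors = {"cold": 0.70, "strep": 0.25, "other": 0.05}

# Likelihood of a positive rapid strep test under each condition
# (sensitivity / false-positive rates are made-up placeholders).
p_pos_test = {"cold": 0.05, "strep": 0.86, "other": 0.05}

def update(priors, likelihoods):
    """Bayes' rule: posterior is proportional to prior * likelihood."""
    unnorm = {d: priors[d] * likelihoods[d] for d in priors}
    total = sum(unnorm.values())
    return {d: p / total for d, p in unnorm.items()}

posterior = update(priors, p_pos_test)
print(posterior)  # strep rises from 25% to ~85% after a positive test
```

The same `update` function would be applied for each answered question (“Do you have any visible rashes?”), with the appropriate likelihood table.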
There is a not insurmountable but a pretty large problem here. Rates for which groups? There are a LOT of relevant subgroups (sex, age, ethnicity, social group, geographic group, current medical conditions, previous medical conditions, diet, etc.).
Medical diagnostic expert systems exist and do reasonably well, but they are not trivial.
On a practical note, the doctors’ guild is likely to take a luddite position towards this X-/
If you just want an overall picture, CDC publishes mortality and morbidity tables, I believe, which should supply you with some sort of base rates.
Same with know-how about how society actually runs. School should tell you how to use social services and a lot of basic law.
That is why we have the stupid questions thread and the boring advice repository.
I’ve been reading about the difficult problem of building an intelligent agent A that can prove a more intelligent version of itself, A’, will behave according to A’s values. It made me start wondering: what does it mean when a person “proves” something to themselves or others? Is it the mental state change that’s important? The external manipulation of symbols?
Proof, in this case, means that using only a restricted set of rules, you are able to rewrite a set of initial assumptions to get the desired conclusion. The rules are supposed to preserve, every time they are used, the truth status of the assertions they are applied to.
In this case, if the derivation is correct and both agents believe in the same underlying logic, then the mental state change should be a consequence of the strict symbol manipulation. Note that ‘two agents’ might mean ‘the same agent before and after the derivation’.
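As a minimal illustration of “proof as restricted symbol rewriting”: in a proof assistant such as Lean, the proof term below is literally the manipulation of applying a rule (modus ponens) to the hypotheses, and the checker verifies that only allowed rules were used.

```lean
-- A proof is a term built from the assumptions by rule applications.
-- `hpq hp` applies the hypothesis `p → q` to the hypothesis `p`,
-- which is exactly modus ponens as a symbol manipulation.
example (p q : Prop) (hp : p) (hpq : p → q) : q := hpq hp
```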
Link: Maybe You Don’t Need 8 Hours of Sleep After All
Three hunter-gatherer and hunter-farmer groups (the Hadza in Tanzania, the San in Namibia, and the Tsimane in Bolivia), who live roughly the same lifestyle humans did in the Paleolithic, were observed, and it was concluded that our ancient ancestors may not have slept nearly as much as we thought, despite being healthy.
Any ideas why these tribes might need less sleep?
Full Article
It looks good, although only two groups were sampled. It is worth noting that the “sleep period” was from 6.9 to 8.5 hr—that is, while they were resting in bed.
The article is pretty clear on why this might be the case:
“In these societies, electricity and its associated lighting and entertainment distractions are absent, as are cooling and heating systems. Individuals are exposed, from birth, to sunlight and a continuous seasonal and daily variation in temperature within the thermoneutral range for much of the daylight period, but above thermoneutral temperatures in the afternoon and below thermoneutrality at night.”
“The Tsimane and San live far enough south of the equator to have substantial seasonal changes in day length and temperature.”
“Because we noticed that the Hadza, Tsimane, and San did not initiate sleep at sunset and that their sleep was confined to the latter portion of the dark period, we investigated the role of temperature. We found that the nocturnal sleep period in the Hadza was always initiated during a period of falling ambient temperature, and we saw a similar pattern in the Tsimane. Therefore, we precisely measured ambient temperature at the sleeping sites along with finger temperature and abdominal temperature in our studies of the San. Figures 4 and S1 show that sleep in both the winter and summer occurred during the period of decreasing ambient temperature and that wake onset occurred near the nadir of the daily temperature rhythm. A strong vasoconstriction occurred at wake onset in both summer and winter, presumably functioning to aid thermogenesis in raising the brain and core temperature for waking activity. ”
Edit: I don’t know how to link an article with parentheses in the URL.
Edit edit: now I do. Thank you, Gunnar_Zarncke.
I should add a TLDR for LessWrongers interested in sleep patterns: if you are having trouble sleeping, you should consider temperature as a variable, perhaps more so than light. Napping was not a significant factor, but was present. Segmented sleep was not observed in this study. Sleep times were longer in the winter than in the summer by an average of 53 minutes.
Thank you very much for the explanation.
You can escape the brackets by replacing them with %28 and %29.
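Python’s standard library can do this percent-encoding for you (the URL below is a stand-in for illustration, not the actual article’s):

```python
from urllib.parse import quote

# Percent-encode the parentheses in a URL so that Markdown-style
# links don't break on them. `safe=":/"` keeps the scheme separator
# and path slashes intact while encoding everything else unsafe.
url = "https://example.org/paper_(preprint).pdf"
safe_url = quote(url, safe=":/")
print(safe_url)  # -> https://example.org/paper_%28preprint%29.pdf
```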
I don’t have enough trouble with my own sleep to make the experiment very useful or decisive, but now that the nights are getting colder, it would be interesting to see what would happen if some LessWrongers experimented with space heaters on timers; set them to go off about 15 minutes before you want to wake up, and see it helps.
At least two major classes of existential risk, AI and physics experiments, are areas where a lot of math can come into play. In the case of AI, this means understanding whether hard take-offs are possible or likely and whether an AI can be provably Friendly. In the case of physics experiments, the issue is the analysis showing that the experiments are safe.
In both these cases, little attention is made to the precise axiomatic system being used for the results. Should this be concerning? If for example some sort of result about Friendliness is proven rigorously, but the proof lives in ZFC set theory, then there’s the risk that ZFC may turn out to be inconsistent. Similar remarks apply to analysis that various physics experiments are unlikely to cause serious problems like a false vacuum collapse.
In this context, should more resources be spent on making sure that proofs occur in the minimal possible axiomatic systems, such as conservative extensions of Peano Arithmetic or near-conservative extensions?
I would think it faster to search for proofs of any kind, then simplify to an elementary/constructive/machine verifiable proof.
What do you mean?
If you’re at the state where the worst thing about a proof is that it relies on the axiom of choice, you’re practically at the finish line (at least compared to here). Once a proof has been discovered, mathematicians have a pretty good track record of whittling it down to rest on fewer assumptions. From my (uninformed dilettante’s) perspective, it’s not worth limiting your toolset until you’ve found some solution to your problem. Any solution, even one which rests on unproven conjectures, will teach you a lot.
Ah, yes, I think that makes sense. And obviously a proof of say Friendliness in ZFC is a lot better than no proof at all.
One of the open problems MIRI is working on for FAI is exactly this type of logical uncertainty. It should be able to modify itself if it finds out the logic underlying its basic programming is incorrect.
Not that it counts much, but I do believe that ZFC is inconsistent.
Why do you believe that? And do you also believe that ZF is inconsistent?
Yes. It’s not the Choice axiom which is problematic, but infinity itself. So it doesn’t matter whether it’s ZF or ZFC.
Why do I believe this? It’s been known for some time now that you can’t have a uniform probability distribution over the set of all naturals. That would be an express road to paradoxes.
The problem is that even if you have a probability distribution where P(0)=0.5, P(1)=0.25, P(2)=0.125 and so on … you can then introduce a super-task of swapping two random naturals (chosen using this distribution) at time 0. Then the next swap at 0.5. Then the next at 0.75 … and so on.
The question is: what is the probability that 0 will remain in its place? It can’t be more than 0 after the super-task completes, just a second later. On the other hand, for every other number, the probability of being in the leftmost position is also zero.
We have apparently constructed a uniform distribution over the naturals. Which is bad.
The limit of your distributions is not a distribution so there’s no problem.
If there’s any sort of inconsistency in ZF or PA or any other major system currently in use, it will be much harder to find than this. At a meta level, if there were this basic a problem, don’t you think it would have already been noticed?
Indeed, since you can prove ZFC consistent with the aid of an inaccessible cardinal. And you can prove the consistency of an inaccessible cardinal with a Mahlo cardinal, and so on.
I’m not sure that’s strong evidence for the thesis in question. If ZFC had a low-lying inconsistency, ZFC+an inaccessible cardinal would still prove ZFC consistent, but it would be itself an inconsistent system that was effectively lying to you. Same remarks apply to any large cardinal axiom.
What can one expect to see after this super-task is done?
Nothing?
It has been noticed, but never resolved properly. A consensus among top mathematicians that everything is/must be okay prevails.
One dissident.
https://www.youtube.com/watch?t=27&v=4DNlEq0ZrTo
This question presupposes that the task will ever be done. Since, if I understand correctly, you are doing an infinite number of swaps, you will never be done.
You could similarly define a super-task (whatever that is) of adding 1 to a number. Start with 0, at time 0 add 1, add one more at time 0.5, and again at 0.75. What is the value when you are done? Clearly you are counting to infinity, so even though you started with a natural number, you don’t end up with one. That is because you don’t “end up” at all.
Sure. It’s called super-tasks.
https://en.wikipedia.org/wiki/Supertask
“a supertask is a countably infinite sequence of operations that occur sequentially within a finite interval of time.”
You can’t avoid supertasks, when you endorse infinity.
Therefore, I don’t.
What you are doing in many ways amounts to the 18th and early 19th century arguments over whether 1-1+1-1+1-1… converged and if so to what. First formalize what you mean, and then get an answer. And a rough intuition of what should formally work that leads to a problem is not at all the same thing as an inconsistency in either PA or ZFC.
There are no axioms of ZFC that imply that such a task can be completed.
From mathematics we know that not all sequences converge. So the sequence of distributions that you gave, or my example of the sequence 0,1,2,3,4,… both don’t converge. Calling them a supertask doesn’t change that fact.
What mathematicians often do in such cases is to define a new object to denote the hypothetical value at the end of sequence. This is how you end up with real numbers, distributions (generalized functions), etc. To be fully formal you would have to keep track of the sequence itself, which for real numbers gives you Cauchy sequences for instance. In most cases these objects behave a lot like the elements of the sequence, so real numbers are a lot like rational numbers. But not always, and sometimes there is some weirdness.
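The Cauchy-sequence construction mentioned above can be made concrete. The sketch below (the function name `sqrt2_approximations` is made up for illustration) uses Newton’s iteration to build a sequence of exact rationals that is Cauchy but has no rational limit, which is precisely why a new object, the real number √2, gets defined:

```python
from fractions import Fraction

def sqrt2_approximations(steps):
    """Newton's iteration x -> (x + 2/x)/2, entirely within the rationals."""
    x = Fraction(1)
    seq = [x]
    for _ in range(steps):
        x = (x + 2 / x) / 2
        seq.append(x)
    return seq

seq = sqrt2_approximations(5)
# Each term is an exact rational (1, 3/2, 17/12, 577/408, ...); the
# terms cluster ever closer together (Cauchy), but no term squares
# exactly to 2, so the sequence has no limit among the rationals.
print([float(x) for x in seq])
```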
From the wikipedia link:
This refers to something called “time”. Most of mathematics, ZFC included, has no notion of time. Now, you could take a variable and call it time. And you can say that a given countably infinite sequence “takes place” in finite “time”. But that is just you putting semantics on this sequence and this variable.
I don’t understand you.
Your question of “after finishing the supertask, what is the probability that 0 stays in place” doesn’t yet parse as a question in ZFC, because you haven’t specified what is meant by “after finishing the supertask”. You need to formalize this notion before we can say anything about it.
If you’re saying that there is no formalization you know of that makes sense in ZFC, then that’s fine, but that’s not necessarily a strike against ZFC unless you have a competitive alternative you’re offering. The problem could just be that it’s an ill-defined concept to begin with, or you just haven’t found a good formalization. Just because your brain says “that sounds like it makes sense”, doesn’t mean it actually makes sense.
To show that ZFC is inconsistent, you would need to display a formal contradiction deduced from the ZFC axioms. “I can’t write down a formalization of this natural sounding concept” isn’t a formal contradiction; the failure is at the modeling step, not inside the logical calculus.
Define the sequence S by S(n) = n.
This is a sequence of natural numbers. This sequence does not converge, which means that the limit as n goes to infinite of S(n) is not a natural number (nor a real number for that matter).
You could try to write it as a function of time, S’(t) such that S’(1-0.5^n) = S(n). That is, S’(0)=0, S’(0.5)=1, S’(0.75)=2, etc. A possible formula is S’(t) = -log_2(1-t). You could then ask what is S’(1). The answer is that this is the same as the limit S(infinity), or as log(0), which are both not defined. So in fact S’ is not a function from numbers between 0 and 1 inclusive to natural or real numbers, since the domain excludes 1.
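A quick numeric sanity check of that formula (a sketch; `S_prime` is just the S’ above under a made-up name):

```python
import math

def S_prime(t):
    # S'(t) = -log2(1 - t). At t = 1 this raises a ValueError
    # (log of 0), matching the point that 1 is excluded from the domain.
    return -math.log2(1 - t)

# Verify S'(1 - 0.5**n) == n for the first few n:
for n in range(10):
    t = 1 - 0.5 ** n
    assert abs(S_prime(t) - n) < 1e-9
```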
You can similarly define a sequence of distributions over the natural numbers by
This is the example that you gave above. The sequence T(n) doesn’t converge (I haven’t checked, but the discussion above suggests that it doesn’t), meaning that the limit “lim_{n->inf} T(n)” is not defined.
Thomas, please read and understand query’s response above. In attempting to dismantle a concept you don’t like, you’ve lost precision. Formalize your questions and concerns rigorously and then see if a seeming contradiction is still there.
Phrasing it as a “super-task” relies on intuitions that are not easily formalized in either PA or ZFC. Think instead in terms of a limit: take your nth distribution and let n go to infinity. This avoids the intuitive issues. Then just ask what you mean by the limit. You are taking what amounts to a pointwise limit. What matters then is that it does not follow that a pointwise limit of probability distributions is itself a probability distribution.
If you prefer a different example that obfuscates less of what is going on, we can do it just as well with the reals. Consider the situation where the nth distribution is uniform on the interval from n to n+1, and look at the limit of that (or, if you insist, move back to having it speed up over time to make it a supertask). Visually, what is happening at each step is a little 1-by-1 square moving one unit to the right. Now note that the limit of these distributions is zero everywhere: not zero at each specific point while still integrating to a finite quantity, but genuinely zero.
This is essentially the same situation, so nothing in your situation has to do with specific aspects of countable sets.
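The sliding-box example can be checked numerically: fix any point x, and the density at x is eventually 0 for every n, even though each density in the sequence integrates to 1.

```python
def f(n, x):
    """Density of the uniform distribution on [n, n+1)."""
    return 1.0 if n <= x < n + 1 else 0.0

x = 3.5  # any fixed point
values = [f(n, x) for n in range(10)]
print(values)  # 1.0 only at n = 3, then 0.0 forever: pointwise limit is 0

# Each f(n, .) integrates to exactly 1 (a 1-by-1 box), so every
# element of the sequence is a genuine probability density; only
# the pointwise limit fails to be one.
```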
Wildberger’s complaints are well known, and frankly not taken very seriously. The most positive thing one can say about them is that some of the ideas in his rational trigonometry have some interesting math behind them, but that’s it. Pretty much no mathematician who has listened to what he has to say has taken any of it seriously.
Sure, I know he is not taken very seriously. That is his own point, too.
In the time of Carl Sagan, in the year 1986 or so, I became an anti-Saganist. I realized that his million civilizations in our galaxy alone was utter bullshit. Most likely only one exists.
Every single astro-biologist or biologist would have said to a dissident like myself—you don’t understand evolution, sire, it’s mandatory!
20 years later, on this site, Rare Earth is a dominant position. Or at least—no aliens position.
On the National Geographic channel and elsewhere, you can still hear how a “previously unexpected number of Earth-like planets” will be detected.
I am no more afraid of mathematicians than of astrobiologists. I am largely unimpressed.
I’m not sure what your point is here. Yes, experts sometimes have a consensus that turns out to be wrong. If one is lucky one can even turn out to be right when the experts are wrong if one takes sufficiently many contrarian positions (although the idea that many millions of civilizations in our galaxy was a universal among both biologists and astro-biologists is definitely questionable), but in this case, the experts have really thought about these ideas a lot, and haven’t gotten anywhere.
If you prefer an example other than Wildberger: when Edward Nelson claimed to have a contradiction in PA, many serious mathematicians looked at what he had done. It isn’t like there’s some special mathematical mob which goes around suppressing these things. I literally had a lunch-time conversation a few days ago with another mathematician where the primary topic was essentially: if there is an inconsistency in ZFC, where would we expect to find it, and how much of math would likely be salvageable? In fact, that conversation was one of the things that led me to the initial question in this subthread.
Neither of these groups are groups you should be afraid of and I’m a little confused as why you think fear should be relevant.
I doubt that any proof in FAI will use infinitary methods.
I’m not sure why you think that. This may depend strongly on what you mean by an infinitary method. Is induction infinitary? Is transfinite induction infinitary?
Physics is only good, when you expel all the infinities out of it.
Even more so for a subset of physics, such as FAI or molecular dynamics or something.
Well, some of us think that this should be applied to the mathematics itself.
I’m not sure what you mean by this, and in so far as I can understand it doesn’t seem to be true. Physicists use the real numbers all the time which are an infinite set. They use integration and differentiation which involves limits. So what do you mean?
https://physics.aps.org/articles/v2/70
http://blogs.discovermagazine.com/crux/2015/02/20/infinity-ruining-physics/#.Vh0LnHqqpBc
Now, when there is no God, Infinity is the substitute most people would love to exist. But it’s just another blunder.
The problem there is that certain specific models of physics end up giving infinite values for measurable quantities—this is a known problem and has been an area of active research since early work with renormalization in the 1930s. This is not at all an attempt to banish infinity in any general sense.
This is rhetoric without content.
Of course it is. Nothing infinite has been spotted so far.
Is it? Is this same “rhetoric” against aliens also without content? If I say that people want aliens because they have lost angels, is this really without content?
Not only that there is no infinite God, even infinite sets are probably just a miracle.
I’m not sure how your sentence is a response to my sentence.
Generally, yes, the content level is pretty low. It essentially amounts to Bulverism, where one is focusing on claimed intents and motives rather than focusing on the substantive issue of whether there’s an inconsistency in PA or ZFC that can arise due to issues with supertasks or other ideas related to infinity.
It may well be that specific people or groups have adopted aliens in a way that essentially replaces deities. The Raelians and other New Age groups certainly fall into that category. But it is a mistake to therefore claim that in general, people believe in aliens as a replacement for belief in a deity. And it is an even more serious mistake to make such claims about infinite sets. If you see physicists praying to infinite sets, or claiming that infinite sets are responsible for the creation of the universe or humanity, or claiming that infinite sets will somehow save us, or that infinite sets have agency, or that infinite sets have a special mystery and majesty to them that merits worship, or if they start wars with or excommunicate people who don’t believe in infinite sets or believe in a different type of infinite set, then there would be an argument.
I don’t give a damn about infinity. If it is doable, why not? But is it? That’s the only question.
Then a supertask mixes the infinite set of naturals, and we are witnessing “the irresistible force acting on an unmovable object”. What the Hell will happen? Will we have finite numbers in the first 1000 places? We should, but bigger ones, no matter which ones they turn out to be.
The “irresistible force” is just an empty word. And so is “unmovable object”. And so is “infinity” and so is “supertask”.
Empty words. So every theory which encompasses them is flawed. More than likely.
And yes, a supertask can be established in ZFC.
The topic is also exercised here:
http://mathforum.org/kb/thread.jspa?forumID=13&threadID=2278300&messageID=7498035
There is an argument there, but it certainly is not one based on ZFC, since no axiom of set theory says anything about time or what can be accomplished in time.
So you say ZFC has nothing to do with time? Time in physics isn’t covered by ZFC?
Physics is built on top of mathematics, and almost all of mathematics can be built on top of ZFC (there are other choices). But there is as much time in ZFC as there are words in a single pixel on your screen.
I’m not sure what you mean by this, especially given your earlier focus on whether infinity exists and whether using it in physics is akin to religion. I’m also not sure what “it” is in your sentence, but it seems to be the supertask in question. I’m not sure in that context what you mean by “doable.”
I’m not at all sure what this means. Can you please stop using analogies and give a specific example of how to formalize this contradiction in ZFC?
This seems to be essentially the same argument and it seems like the exact same problem: an assumption that an intuitive limit must exist. Limits don’t always exist when you want them to, and we have a lot of theorems about when a point-wise limit makes sense. None of them apply here.
Just answer me a simple question.
What do the first 1000 naturals look like after the mixing supertask described above has finished its job?
You may say that this supertask is impossible.
You may say that there is no set of all naturals.
Whatever you think about it. Everything else is pretty redundant.
I don’t think this conversation is being very productive so this is likely my final reply.
The resulting pointwise limit exists, and it gives each positive integer a probability of zero. This is fine because the pointwise limit of a distribution on a countable set is not necessarily itself a distribution. Please take a basic real analysis course.
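The discrete version of this phenomenon can be made concrete with a standard textbook example (uniform distributions on {0, …, n−1}, not the exact swapping construction from the thread): every element of the sequence is a genuine distribution, yet the pointwise limit assigns zero to each natural and so sums to 0, not 1.

```python
from fractions import Fraction

def P(n, k):
    """Uniform distribution on {0, ..., n-1}: a bona fide distribution."""
    return Fraction(1, n) if 0 <= k < n else Fraction(0)

# Each P(n, .) sums to exactly 1 ...
assert all(sum(P(n, k) for k in range(n)) == 1 for n in range(1, 50))

# ... but for any fixed k, P(n, k) = 1/n goes to 0 as n grows, so the
# pointwise limit gives every natural probability 0 and is therefore
# not a probability distribution.
print(P(10, 3), P(1000, 3))  # 1/10, then 1/1000
```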
Spammer alert!
Why isn’t there a good way of doing symbolic math on a computer?
I want to brush up on my probability theory. I hate using a pen and paper, I lose them, they get damaged, and my handwriting is slow and messy.
In my mind I can envisage a simple symbolic math editor with keyboard shortcuts for common symbols, that would allow you to edit nice, neat LaTeX-style equations as easily as I can edit text. Markdown would be acceptable as long as I can see the equation in its pretty form next to it. This doesn’t seem to exist. Python-based symbolic math systems, like SageMath, are hopelessly clunky. Mathematica, although I can’t afford it, doesn’t seem to be what I want either. I want to be able to write math fast, to aid my thinking while proving theorems and doing problems from a textbook, not have the computer do the thinking for me. LaTeX equation editors I’ve seen are all similarly unwieldy—waiting 10 seconds for it to build the PDF document is totally disruptive to my thought process.
Why isn’t this a solved problem? Is it just that nobody does this kind of thing on a computer? Do I have to overcome my hatred of dead tree media and buy a pencil sharpener?
What would be really nice is tablet software that can translate handwritten math into latex, and compile that into pdf.
By the way, what I think you want is not “doing symbolic math on a computer,” but “having a good input method for equations.”
edit: Also can someone please write a good modern programming language for typesetting? With all due respect to Dr. Knuth, tex is awful.
TeX as a language is awful, but what it can do is wonderful. And of course everyone uses LaTeX (TeX made usable by Lamport), or at least I do, so I see little of the TeX language itself. There was nothing like it when Knuth created it, and almost forty years on, there is still nothing like it. As far as I know, the only other typesetting language that has gained even a niche is the hideous SGML, in comparison to which TeX is a thing of superlative elegance and beauty. TeX has a specialised sublanguage for mathematics, both usable for input (so far as linear text can be) and generating high-quality output, so it became the standard for document preparation in the mathematically based sciences. It’s still inferior to human typesetting, but that’s only available for final printer’s copy. What you had to do back then, well, trip down memory lane omitted for brevity.
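To illustrate what that math sublanguage buys you: the linear input below is typable at a keyboard, readable as plain text, and produces fully typeset output.

```latex
% Linear input that TeX's math mode turns into typeset mathematics:
\[
  \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}
\]
```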
To do better than TeX, at this point, needs a lot more than coming up with a better language to think about typesetting with. It will have to replicate the TeX ecosystem, provide two-way conversion between it and TeX, and have a visual interface. Visual interfaces for programming languages are really hard, and they generally don’t get developed beyond demos that wow audiences and then go nowhere.
And it has to be done by one person, because a committee will just create a bloated, Turing-complete mess.
Which is why it hasn’t happened. It needs someone with an expert passion for programming, technical typesetting, design, and languages considered as a medium of thought. Knuth, Jony Ive, and Dijkstra all in one. But anyone like that would have bigger things to do with their talents.
Yes, I understand all that. It is hard to move away from shitty languages once they gained market share.
But LaTeX, while improving on many things compared to base TeX, is hobbled by TeX as well (for example, why do I need to recompile to resolve references? haven’t we invented multipass compilation like half a century ago?). I am happy to double down on “(La)TeX is a shitty language.” It’s very useful of course, but the state of typesetting today is sort of like if everyone programmed in COBOL for some reason.
That depends on what you consider to be big. It’s not big by the standards of academia. But it might be big by the standards of real world impact.
I tend to use TeXmacs for this. It’s a WYSIWYG document editor; you can enter mathematics using (La)TeX syntax, but there are also menus and keyboard shortcuts. It’s free in both senses. No symbolic-manipulation capabilities of its own, but it has some ability to connect to other things that do; I haven’t tried those out.
Mathematica isn’t that far from what you want, I think, and it has the advantage of being able to do a lot of the symbolic manipulation for you. But, as you say, it’s really expensive—though if you haven’t checked out the home and (if applicable) student editions, you should do so; they’re much cheaper. Anyway, the fact that to me it sounds close to what you want makes me suspect that I’m missing or misunderstanding some of your requirements; if you could clarify how it doesn’t meet your needs it may help with suggesting other options.
YES. Thank you so much. Texmacs seems to be exactly what I wanted.
Excellent! I will mention that I have occasionally had it crash on me (this was in the past, probably an older version of the software, so take it with a grain of salt—but you might want to be slightly more paranoid about saving your work regularly than you would be with, say, a simple text editor).
Been using it for an hour now, and yes, it’s crashed on me once, but no more than half the other programs I use. I’m already seeing the benefits: I spent half an hour doing something, realised there was a mistake at the start, and could then just find/replace stuff instead of scrunching the paper up into a ball and cursing Pierre Laplace. Also I don’t have to deal with the aesthetic trauma of viewing my own handwriting. Outstanding.
Would any of these be useful? That’s just a list I found by Googling /MathJax editor/. I’m not familiar with any of them. MathJax is a Javascript library for rendering mathematics on web pages; the mathematics can be written in TeX, MathML, or AsciiMath notation.
I use pen and paper, and switch to LaTeX when I have something I need to preserve. It’s not very satisfactory, but since anything I might want to publish will have to go through LaTeX at some point, there’s no point in using any other format, unless it had a LaTeX exporter. And pen and paper is far more instant than any method I can imagine of poking mathematics in through a keyboard.
Yeah… I think I just have to bite this bullet. If you do math professionally and the people you know work on pen and paper, then that’s the answer.
It’s just… I feel like I can imagine a system that would be better than pen and paper. There’s so much tedious repetition of symbols when I do algebra on paper, and inevitably, while simplifying some big integral, I write something wrong, have to scratch it out, and the whole thing becomes a confusing mess. Writing my verbal thoughts down with a keyboard is just as quick and intuitive as pen and paper. There must be a better way...
Would it make sense to write on a tablet and have the computer do OCR? (Hypothetical system.)
Yes, that would also be great, but a) I can’t afford such a tablet, and b) I strongly suspect that the OCR would be inaccurate enough that I’d end up wishing for a keyboard anyway. Hell, accurate voice recognition would be better, but I’m still waiting for that to happen...
Now that I think about it, OCR would be much harder for math than for text.
Kinda-toy example
That means there’s a possible startup.
Ha, in theory, but it looks like the guys at TeXmacs are already selling the product for free, so no dice...
My experience with my Kindle is that it’s better than regular paper books, while reading books on a smartphone isn’t. Currently most mathematicians use paper. If someone designed a mathematical editor that’s better than paper, I think it could be a huge commercial success.
I don’t know of a program that solves your problem either. But writing a transcompiler from mathematical markdown (mathdown?) to LaTeX should not be that difficult in F#. It should be a fun exercise, if you write out the formal grammar.
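To give a feel for how small such a thing can start out, here is a minimal sketch in Python rather than F# (the token set, toy grammar, and LaTeX output rules are all made up for illustration; a real "mathdown" would need a properly specified grammar):

```python
import re

# One token per match: a name, a number, or a single operator/paren.
TOKEN = re.compile(r"\s*([A-Za-z]+|\d+(?:\.\d+)?|[()+\-*/^])")

def tokenize(source):
    tokens, pos = [], 0
    while pos < len(source):
        m = TOKEN.match(source, pos)
        if not m:
            raise ValueError(f"bad input at {source[pos:]!r}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

class Translator:
    """Recursive-descent translator from toy infix notation to LaTeX."""

    def __init__(self, tokens):
        self.toks = tokens
        self.i = 0

    def peek(self):
        return self.toks[self.i] if self.i < len(self.toks) else None

    def eat(self, expected=None):
        tok = self.toks[self.i]
        if expected is not None and tok != expected:
            raise ValueError(f"expected {expected!r}, got {tok!r}")
        self.i += 1
        return tok

    def expr(self):  # expr := term (('+'|'-') term)*
        out = self.term()
        while self.peek() in ("+", "-"):
            out += f" {self.eat()} {self.term()}"
        return out

    def term(self):  # term := factor (('*'|'/') factor)*
        out = self.factor()
        while self.peek() in ("*", "/"):
            op, rhs = self.eat(), self.factor()
            # Division becomes \frac; multiplication becomes \cdot.
            out = rf"\frac{{{out}}}{{{rhs}}}" if op == "/" else rf"{out} \cdot {rhs}"
        return out

    def factor(self):  # factor := atom ('^' factor)?
        base = self.atom()
        if self.peek() == "^":
            self.eat("^")
            return f"{base}^{{{self.factor()}}}"
        return base

    def atom(self):  # atom := name | number | 'sqrt(' expr ')' | '(' expr ')'
        tok = self.eat()
        if tok == "(":
            inner = self.expr()
            self.eat(")")
            return rf"\left({inner}\right)"
        if tok == "sqrt":
            self.eat("(")
            inner = self.expr()
            self.eat(")")
            return rf"\sqrt{{{inner}}}"
        return tok

def to_latex(source):
    return Translator(tokenize(source)).expr()
```

For example, `to_latex("x^2 + sqrt(y)/2")` yields `x^{2} + \frac{\sqrt{y}}{2}`. Extending this to subscripts, Greek letters, sums, and integrals is mostly a matter of adding grammar rules, which is why the parent’s “write the formal grammar first” advice is the right order to do things in.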
Yeah, I can imagine doing that all right. I wouldn’t actually mind writing in LaTeX, even; the problem is the lag. Building a LaTeX document after each change takes time. If the LaTeX were being built in a window next to it in real time (say, with a one-second lag), there’d be no problem. I’m not looking to publish the math, I just want a thought-aid.
I believe there is an editor called LyX that lets you do this.
What areas of mathematics do I need to learn if I want to specialize in formal epistemology?
Linear algebra, function optimization, probability theory.
:)
That’s it?
I suppose modal logics of belief.
Thanks! Ok, so now a more detailed question:
As I said, I’d like to do formal epistemology. I’m an undergrad right now, and I need to decide on my major. If that’s about all the formal stuff I’ll need then there are a bunch of different majors that include that, and the question becomes which additional courses could help with formal epistemology or related disciplines.
Here’s what I’ve come up with so far:
Choice 1: Applied Statistics. This allows several electives in other subjects, so I could do e.g. a minor in CS with only one or two extra course requirements.
Choice 2: Mathematical Statistics. Fewer electives in other subjects, more electives in math/stats. I could probably still do a CS minor along with it if I wanted.
Choice 3: Math degree, possibly with a stats focus.
Choice 4: Some other degree (e.g., CS, economics) and just make sure to get the probability theory in at some point.
I’m anyway doing a minor in philosophy, which includes at least some logic.
“Math sophistication” is good, as is familiarity with basic stats and ML. In computer science depts., ML is often taught at the grad level, though. Specific major not so important.
I found reading and doing proofs paid a lot of dividends.
Inspired by an interview answer given by Thiel to Ferris, I ask:
How can you become less of a competitor in order to become more successful?
Who are the smartest people you talk to on an ongoing basis and do you learn from them?
Not sure what exactly you meant here. But if you want to avoid being “one of many people doing the same stuff”, your options are, approximately:
1. Find something no one else does. Problem is, other people may follow you, so this by itself is not enough.
2. Build a brand. No one else can produce your brand, so now you meta-compete with other brands.
3. Establish a monopoly. Try to put yourself in a position where other people can’t compete with you because they lack some critical resource.
4. Make a cartel with your competitors, or bribe a government official to make competition illegal. This is technically illegal, but not unusual. Be sure you have the right friends, otherwise you may risk prison.
Thiel goes a bit deeper on 1. in his book.
The closest thing my country has to a functioning libertarian political party is considering signing up with a campaign strategy firm from the USA called i320. They collect data on voters, then analyse it and spit out recommendations, but the data collection is done by volunteers from the party. I reckon it’s a bad idea, because the party won’t be able to switch data analytics firms in the future without losing access to the data. I happened to meet the party’s president the other day, and he said to talk to his Vice President. I reckon they could work out a contract to give the party data ownership, but I doubt the firm will budge on that front. Any advice?
I’ve been doing a version of intermittent fasting in which I eat one meal per day for around three months now, and I’ve lost a lot of weight. However, I’ve been having acid reflux (minus the heartburn) for slightly longer than this, and despite having been on a strong dose of generic Proton Pump Inhibitor for the last two and a half months, I’m still suffering quite a bit. It also seems like eating a lot at once can exacerbate acid reflux, so I’m considering going back to a regular diet for a while to see what happens. Maybe I’ll try eating exactly twice a day, first. Since it seems like intermittent fasting is somewhat common here, has anyone else had similar issues?
Junior doctor here.
Different PPIs tend to work the same as each other. PPIs are pretty safe drugs, but having ongoing acid reflux is itself not that good for your health. You could try to reduce it by staying vertical for a while after eating, by spacing your meals into at least two per day (even two within a couple hours), and doing any other simple suggested lifestyle measures.
Adding or switching to a different class of antireflux drug seems rash if you can just fix things with a lifestyle change.
This seems like good advice; thanks! I hadn’t looked into switching drugs, but I had been curious as to whether switching PPI’s might be helpful, so that’s good to know.
I started an IF schedule a few months ago where I eat from 4pm until 8pm. I did have acid reflux issues in the beginning, but that stopped after a couple of weeks. In my experience, the acid reflux is worse if you eat shortly before going to bed: in the beginning I ate until 9pm and went to bed at 10pm; now I eat from 5pm to about 6:30pm and go to bed at 10, with no problems. (I’ve had a sore throat for the last 4 years or so, but other than the acid reflux thing when I started IF, it has pretty much remained unchanged, so I’m assuming the intermittent fasting isn’t making it better or worse.)
So you could try taking a ≈2h break before going to bed (if you’re not doing that already), eating twice a day, experimenting with different foods, talking to a doctor, and if you still feel bad after that, I would suggest going back to a regular diet. Three months seems like enough time for the body to adjust as much as it’s ever going to.
Too many proton pump inhibitors may interfere with your absorption of minerals. You may want to have your blood tested for deficiencies.
Have you tried sleeping on your left side?
I’m writing to solicit any particular questions you may have that I can keep in mind as I read, with a view to answering them once I’ve pondered them while listening to the books.
I’m listening to 1 of 3 audiobooks tomorrow (I’m running a 40k marathon, so I have plenty of time; generally, if I find an audiobook uncompelling, that is a stopping rule for me and I will shift to another book):
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
Expert Political Judgment: How Good is it? How can We Know? by Philip E. Tetlock
Zero to One by Peter Thiel, Blake Masters
Further, I would like to take this opportunity to solicit community reviews of the book Mindware by Nisbett. I have read compelling reviews:
I am also interested to hear Lukeprog style notes on each of these books, if any.
Some of my favourites of his:
Wired for war 1
WFW 2
WFW 3
Better angels of our nature
Better angels of our nature 2
BAOON 3
Things that make us happy now may not make us happy in the future
How much, if anything, would you be prepared to precommit to donate to MIRI in return for a public announcement on their part that they will publish their complete and uncensored technical research agenda?
How can one assemble the next ‘Paypal Mafia’?
I don’t know, but if you could get a working plan by asking on public boards, I’m pretty sure it wouldn’t be worth billions of dollars.
Get a bunch of really smart people who are good at what they do and want to change the world, and get them to work for you.
Please take this as a given: it is the job of adult males to impregnate as many females as possible, and it is the job of adult females to find a mate with resources, resources meaning wits, speed, strength, savvy, . . .whatever.
To some extent, thinking logically runs counter to this.
Ergo, FWIW, use your head as a second opinion, not the first.
Why?
High school is hard.