Open Thread, Jun. 22 - Jun. 28, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
A short, nicely animated adaptation of The Unfinished Fable of the Sparrows from Bostrom’s book was made recently.
The same animation studio also made this fairly accurate and entertaining introduction to (parts of) Bostrom's argument, though I don't know what to think of their (subjective) probabilities for the possible outcomes.
Although it doesn't improve my life at all, I quite like the short story as an analogy for UFAI risks.
Hope this is appropriate for here.
I had an epiphany related to akrasia today, though it may apply generally to any problem where you are stuck. For the longest time I thought to myself: "I know what I actually need to do, I just need to sit down and start working, and once I've started it's much easier to keep going." I was thinking about this today and I had an imaginary conversation where I said: "I know what I need to do, I just don't know what I need to do, so I can do what I need to do." (I hope that makes sense). And then it hit me: I have no fucking clue what I actually need to do. It's like I've been trying to empty a sinking ship of water with buckets, instead of fixing the hole in the ship.
Reminds me in hindsight of the “definition of insanity”: “The definition of insanity is doing the same thing over and over and expecting different results.”
I think I believed that I lacked the necessary innate willpower to overcome my inner demons, rather than lacking a skill I could acquire.
Once I was facing akrasia and I kind of had the same thing happen. I knew what I needed to do, and I ruminated on why I wasn’t doing that.
I thought at first that I was just being lazy, but then I realized that I subconsciously knew that the strategy I was procrastinating from was actually pretty terrible. Once I realized that, I started thinking about how I might do it better, and then when I thought of something (which wasn’t immediate, to be sure) I was actually able to get up and do it.
Sometimes “laziness” is being aware on some level that your current plan does not work, but not knowing a better alternative… so you keep going, but you find yourself slowing down, and you can’t gather enough willpower to start running again.
Sounds like a growth mindset discovery! Congratulations!
For my benefit, can you try to rephrase this sentence with alternative words or in a more verbose form?
Mainly, taboo the multiple meanings of the word "need" that you tried to express. Without knowing the tone, it just sounds confusing.
Meta: I suspect people have rewarded you for achieving an epiphany.
Let’s say, I have some homework to do. In order to finish the homework, at some point I have to sit down at my desk and start working. And in my experience, actually starting is the hardest part, because after that I have few problems with continuing to work. And the process of “sitting down, opening the relevant programs and documents and starting to work” is not difficult per se, at least physically. In a simplified form, the steps necessary to complete my homework assignment are:
1. Open relevant documents/books, get out pen and paper, etc.
2. Start working and don't stop working.
Considering how much trouble I have getting to the point where I can do step one (sometimes I falter between steps one and two), there must be at least one necessary step zero before I am able to successfully complete steps one and two. And knowing steps one and two does not help very much, if I don’t know how to get to a (mental) state where I can actually complete them.
A different analogy: I know how I can create a checkmate if I only have a rook and king, and my opponent only a king. But that doesn’t help me if I don’t know how to get to the point where only those pieces are left on the board.
A suggestion: commit to a small amount of the work. I.e. instead of committing to a workout at the local gym, commit to arriving at the gym. After that, if you decide to go home, you can; but at least you break down the barrier to starting.
In the homework case, commit to sitting down and doing the first problem. Then see if you feel like doing any more than that.
Deep Learning is the latest thing in AI. I predict that it will be exactly as successful at achieving AGI as all previous latest things. By which I mean that in 10 years it will be just another chapter in the latest edition of Russell and Norvig.
Purely on Outside View grounds, or based on something more?
Outside View only. That’s the way it’s always worked out before, and I’m not seeing anything specific to Deep Learning to suggest that this time, it will be different. But I am not a professional in this field.
So, some Inside View reasons to think this time might be different:
The results look better, and in particular, some of Google’s projects are reproducing high-level quirks of the human visual cortex.
The methods can absorb far larger amounts of computing power. Previous approaches could not, which makes sense as we didn’t have the computing power for them to absorb at the time, but the human brain does appear to be almost absurdly computation-heavy. Moore’s Law is producing a difference in kind.
That said, I (and most AI researchers, I believe) would agree that deep recurrent networks are only part of the puzzle. The neat thing is, they do appear to be part of the puzzle, which is more than you could say about e.g. symbolic logic; human minds don’t run on logic at all. We’re making progress, and I wouldn’t be surprised if deep learning is part of the first AGI.
While the work that the visual cortex does is complex and hard to crack (from where we are now), it doesn’t seem like being able to replicate that leads to AGI. Is there a reason I should think otherwise?
There is the 'one learning algorithm' hypothesis: that most of the brain uses a single algorithm for learning and pattern recognition, rather than a specialized module for vision, another for audio, etc.
The evidence comes from experiments where they cut the connection from the eyes to the visual cortex in an animal and rerouted it to the auditory cortex (and I think vice versa). The animal then learned to see fine; its auditory cortex simply learned how to do vision instead.
This seems an odd thing to say. I would say that representation learning (the thing that neural nets do) and compositionality (the thing that symbolic logic does) are likely both part of the puzzle?
The outside view is not very good for predicting technology. Every technology has an eternity of not existing, until suddenly one day it exists out of the blue.
Now no one is saying that deep learning is going to be AGI in 10 years. In fact the deep learning experts have been extremely skeptical of AGI in all forms, and are certainly not promoting that view. But I think it’s a very reasonable opinion that it will lead to AGI within the next few decades. And I believe sooner rather than later.
The reasons that ‘this time it is different’:
NNs are extraordinarily general. I don’t think you can say this about other AI approaches. I mean search and planning algorithms are pretty general. But they fall back on needing heuristics to shrink the search space. And how do you learn heuristics? It goes back to being a machine learning problem. And they are starting to solve it. E.g. a deep neural net predicted Go moves made by experts 54% of the time.
The progress you see is a great deal due to computing power advances. Early AI researchers were working with barely any computing power, and a lot of their work reflects that. That’s not to say we have AGI and are just waiting for computers to get fast enough. But computing power allows researchers to experiment and actually do research.
Empirically they have made significant progress on a number of different AI domains. E.g. vision, speech recognition, natural language processing, and Go. A lot of previous AI approaches might have sounded cool in theory, or worked on a single domain, but they could never point to actual success on loads of different AI problems.
It's more brain-like. I know someone will say that they really aren't anything like the brain, and that's true, but at a high level the principles are very similar: learning networks of features and their connections, as opposed to symbolic approaches.
And if you look at the models that are inspired by the brain like HTM, they are sort of converging on similar algorithms. E.g. they say the important part of the cortex is that it’s very sparse and has lateral inhibition. And you see leading researchers propose very similar ideas.
Whereas the stuff they do differently is mostly because they want to follow biological constraints: only local interactions, little memory, only single bits of information at a time. These aren't restrictions that real computers have, so we don't necessarily need to copy biology in those respects and can do things differently, and even better.
Several of the above claims don’t seem that true to me.
Statistical methods are also very general. And neural nets definitely need heuristics (LSTMs are basically a really good heuristic for getting NNs to train well).
I'm not aware of great success in Go? 54% accuracy is very hard to interpret in a vacuum in terms of how impressed to be.
When statistical methods displaced logical methods it’s because they led to lots of progress on lots of domains. In fact, the delta from logical to statistical was probably much larger than the delta from classical statistical learning to neural nets.
I consider deep learning to be in the family of statistical methods. The problem with previous statistical methods is that they were shallow and couldn’t learn very complicated functions or structure. No one ever claimed that linear regression would lead to AGI.
That narrows the search space to maybe 2 moves or so per board. Which makes heuristic searching algorithms much more practical. You can not only generate good moves and predict what a human will do, but you can combine that with brute force and search much deeper than a human as well.
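To make the "combine that with brute force" idea concrete, here is a minimal sketch (not any particular Go program; policy, evaluate, and children are hypothetical caller-supplied functions standing in for a trained move-prediction model, a position evaluator, and a legal-move generator) of a depth-limited search that only expands the moves the model ranks highest:

```python
def pruned_negamax(state, depth, policy, evaluate, children, top_k=2):
    """Depth-limited negamax that only expands the top_k successor states
    ranked by a learned policy, instead of every legal move.

    With roughly 250 legal moves per Go position, top_k=2 cuts the
    branching factor by two orders of magnitude, which is what makes
    searching much deeper than a human practical."""
    succ = children(state)
    if depth == 0 or not succ:
        return evaluate(state)
    # Rank successors by the model's score and keep only the best few.
    ranked = sorted(succ, key=lambda s: policy(state, s), reverse=True)
    return max(-pruned_negamax(s, depth - 1, policy, evaluate, children, top_k)
               for s in ranked[:top_k])

# Toy usage on a trivial "game" (players alternately add 1 or 2 to a counter),
# just to show the function runs as written:
value = pruned_negamax(
    state=0, depth=4,
    policy=lambda s, t: t,                           # toy "policy": prefer bigger jumps
    evaluate=lambda s: s,
    children=lambda s: [s + 1, s + 2] if s < 10 else [],
)
```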
I mean that NNs learn heuristics. They do require heuristics in the learning algorithm, but not ones that are specific to the domain. Whereas search algorithms depend on lots of domain dependent, manually created heuristics.
Revisited The Analects of Confucius. It’s not hard to see why there’s a stereotype of Confucius as a Deep Wisdom dispenser. Example:
I read a bit of the background information, and it turns out the book was compiled by Confucius’ students after his death. That got me thinking that maybe it wasn’t designed to be passively read. I wouldn’t put forth a collection of sayings as a standalone philosophical work, but maybe I’d use it as a teaching aid. Perhaps one could periodically present students a saying of Confucius and ask them to think about it and discuss what the Master meant.
I've noticed this sort of thing in other works as well. Let's take the Dhammapada. In a similar vein, it's a collection of sayings of the Buddha, compiled by his followers. There are commentaries giving background and context. I'm now getting the impression that it was designed to be just one part of a neophyte's education. There's a lot that one would get from teachers and more senior students, and then there are the sayings of the Master designed to stimulate thought and reflection.
Going further west, this also seems to be the case with the Gospels.
With these works and those like them, there’s this desire to stimulate reflection and provide a starting point for discussion. They’re designed for initiates of a school of thought to progress further. Contrast this with works written by the masters themselves for their peers. It would be condescending to talk in short bursts of wisdom. No, this is where we get arguments clearly presented and spelled out. Short sayings are replaced with chains of reasoning designed to demonstrate the intended conclusion.
It would be condescending for the master, too, to talk in short bursts of wisdom to his disciples, as long as he was alive. The issue is rather that once he dies, the top-level disciples gradually elevate the memory of the master into a quasi-deity and pass on the thoughts verbally for generations. By the time they get around to writing it down, the master is seen as such a big guy / deity, and more or less gets worshipped, that it becomes almost inconceivable to write it in anything but a condescending tone. But it does not really follow that the masters were just as condescending IRL.
You can see this today. The Dalai Lama is really an easy guy; he does not really care how people behave toward him, he is just friendly and direct with everybody, but there is an "establishment" around him that really pushes visitors into high-respect mode. I had this experience with a lower lama of a different school. I was anxious about getting etiquette right, hands together, bowing etc., then he just walked up to me, shook my hand in a Western style, did not let it go but just dragged me halfway across the room while patting me on the back and shaking with laughter at my surprise. It was simply his joke, his way of breaking the all too ceremonious mood. He was a totally non-condescending, direct, easy-going guy who would engage everybody on an equal level, but a lot of retainers and helpers around him really put him and his boss (he was something of a top-level helper of an even bigger guy too) on a pedestal.
Good point. I suppose what I had in mind is that when the disciple asks the master a question, the master can give a hint to help the disciple find the answer on his own. Answering a question with a question can prod someone into thinking about it from another angle. These are legitimate teaching methods. Using them outside of a teacher/student interaction is rather condescending, however.
This is also a major factor. Disciples like to make the Master into a demigod and some of his human side gets lost in the process.
Do people who take modafinil also drink coffee (on the same day)? Is that something to avoid, or does it not matter?
It seems to have a synergistic effect but I regularly drink coffee and take modafinil irregularly so it’s hard to say. It doesn’t seem bad by any means.
I went to the dermatologist today, and I have some sort of cyst on my ear. He said it was nothing. He said the options are to remove it surgically, to use some sort of cream to remove it over time, or to do nothing.
I asked about the benefits of removing it. He said that they’d be able to biopsy it and be 100% sure that it’s nothing. I asked “as opposed to… how confident are you now?” He said 99.5 or 99.95% sure.
It seems clear to me that the costs of money, time and pain are easily worth the 5/1000(0) chance that I detect something dangerous earlier and correspondingly reduce the chances that I die. Like, really really really really really clear to me. Death is really bad. I’m horrified that doctors (and others) don’t see this. He was very ready to just send me home with his diagnosis of “it’s nothing”. I’m trying to argue against myself and account for biases and all that, but given the badness of death, I still feel extremely strongly that the surgery+biopsy is the clear choice. Is there something I’m missing?
Also, the idea of Prediction Book for Doctors occurred to me. There could be a nice UI with graphs and stuff to help doctors keep track of the predictions they’ve made. Maybe it could evolve into a resource that helps doctors make predictions by providing medical info and perhaps sprinkling in a little bit of AI or something. I don’t really know though, the idea is extremely raw at this point. Thoughts?
1) Surgery is dangerous. Even innocuous surgeries can have complications, such as infection, that can kill. There are also complications that aren't factored into the obvious math; for example, ever since I got 2 of my wisdom teeth out, my jaw regularly tightens up and cracks if I open my mouth wide, something that never happened beforehand. I wasn't warned about this and didn't consider it when I was deciding to get the surgery.
2) If it's something dangerous, you're very likely to find out anyway before it becomes serious. E.g., if it's a tumor, it's going to keep growing, and you can come back a month later and get it out then with little problem.
3) Even if it's not nothing, it might be something else that's unlikely to kill you. Thus the 5/1000 chance of death you're imagining is actually a 5/1000 chance of it being not nothing.
Are you just making these points as things to keep in mind, or are you making a stronger point? If the latter, can you elaborate? Are you particularly knowledgeable?
The point is your consideration of “if surgery, definitely fine” vs “if no surgery, 5/1000 chance of death” are ignoring a lot of information. You’re acting like your doctor is being unreasonable when in fact they’re probably correct.
Stronger point: since we are at Less Wrong, think Bayes Theorem. In this case, a “true positive” would be cancer leading to death, and a “false positive” would be death from a medical mishap trying to remove a benign cyst (or even check it further). Death is very bad in either case, and very unlikely in either case.
- P(death | cancer, untreated) - this is your explicit worry
- P(death | cancer, surgery)
- P(death | benign cyst, untreated)
- P(death | benign cyst, surgery) - this is what drethelin is encouraging you to note
- P(benign cyst)
- P(cancer)
My prior for medical mishaps is higher than 0.5% of the time, but not for fatal ones while checking/removing a cyst near the surface of the skin. As drethelin’s #2 notes, this is not binary. If it is not a benign cyst, you will probably have indicators before it becomes something serious. Similarly, you have non-surgical options such as a cream or testing. Testing probably has a lower risk rate than surgery, although if it is a very minor surgery, perhaps not that much lower.
If the cyst worries you, having it checked/removed is probably low risk and may be good for your mental health. But now we might have worried you about the risks of doing that (sorry) when we meant to reduce your worries about leaving the cyst untreated.
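To make that comparison concrete, here is a minimal sketch of the expected-value calculation implied by the decomposition above. Every number except the doctor's 0.05% is a made-up placeholder; the point is the structure of the calculation, not the inputs.

```python
# Made-up placeholder probabilities (only p_cancer comes from the doctor's estimate).
p_cancer = 0.0005                       # doctor's stated 0.05%
p_benign = 1 - p_cancer

p_death_cancer_untreated = 0.30         # assumed: cancer missed until it is late
p_death_cancer_surgery   = 0.05         # assumed: cancer caught early via biopsy
p_death_benign_untreated = 0.0
p_death_benign_surgery   = 0.0001       # assumed: rare complication of minor surgery

p_death_no_surgery = (p_cancer * p_death_cancer_untreated
                      + p_benign * p_death_benign_untreated)
p_death_surgery    = (p_cancer * p_death_cancer_surgery
                      + p_benign * p_death_benign_surgery)

print(p_death_no_surgery)   # 0.00015
print(p_death_surgery)      # ~0.000125
# With these invented numbers the two options end up within a factor of about 1.2
# of each other, nothing like the gap the "5/1000 chance of death" framing suggests.
```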
In general if you list everything you can think of and give it probability scores, you ignore unknown unknowns. For medical interventions like surgery unknown unknowns are more likely to be bad than to be good.
As a result it's useful to have a prior against doing a medical intervention if there is no strong evidence that the intervention is beneficial.
Maybe we need to visualize surgery differently. I used to think about it like replacing a part in a car: why not just do it if the part is not working too well?
Maybe we should see it as damage. It's like someone attacking you with a knife. Except that the intention is completely different, they know what they are doing, their implements are far more precise, and so on, so the parallel is not very good either. I am just saying that "recovering from an appendectomy" could at least be visualized as something closer to "recovering after a nasty knife fight" than to "just had the clutch in my car replaced".
What do you think?
Why do you think we need to do so?
Agreed; if you are getting it done and prefer the higher chance of survival, get it done without being fully anaesthetized.
Possibly by a plastic surgeon; they seem to have profits to burn on quality equipment, thanks to people getting unnecessary (debatably) cosmetic procedures.
You’re probably misreading your doctor.
When he said “99.5 or 99.95%” I rather doubt he meant to give the precise odds. I think that what he meant was “There is a non-zero probability that the cyst will turn out to be an issue, but it is so small I consider it insignificant and so should you”. Trying to base some calculations on the 0.5% (or 0.05%) chance is not useful because it’s not a “real” probability, just a figurative expression.
Great point. He did seem to pause and think about it, but still a good point. It seems notably likely that you’re right, and even so, I doubt that his confidence is well-calibrated.
I think you should use the cream for a week, to start with.
Also, thought experiment: Suppose a person is going to live another 70 years. If undergoing some oversimplified miracle-cure treatment will cost, one way or another, 1 week of their life, what chance of “it’s just a cyst” will they accept? 99.97%. So from the doctor’s perspective (neglecting other risks or resources used, taking their ’99.95%′ probability estimate at face value, and assuming that a biopsy is some irreplaceable road to health), your condition is so likely to be benign that the procedure to surgically check spends your life at about the same rate as it saves it.
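Spelling out the arithmetic behind that 99.97% figure (a back-of-the-envelope sketch; the one-week cost is the thought experiment's assumption, not a medical estimate):

```python
weeks_remaining = 70 * 52              # roughly 3640 weeks of remaining life
cost_in_weeks = 1                      # the treatment "costs" one week, by assumption
break_even = 1 - cost_in_weeks / weeks_remaining
print(break_even)                      # ~0.9997, i.e. the quoted ~99.97%
```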
The biggest thing is that the doctor's priorities are not your priorities. To him, a life is valuable... but not infinitely valuable; estimates usually put the value of a life at (ballpark) 2 million dollars. When you consider the relative probability of you dying, and then the cost to the healthcare system of treatment, he's probably making the right decision (you, of course, would probably value your own life MUCH MUCH higher). Btw, this kind of follows a blindspot I've seen in several calculations of yours; let me know if you're interested in getting feedback on it.
Finally, there are two other wrinkles—the possibility of complications and the possibility of false positives from a biopsy. The second increases the potential cost, and the first decreases the potential years added to your life. Both of these tilt the equation AGAINST getting it removed.
The doctor has no incentive to minimize the cost of treatment. He makes money by having a high cost of treatment.
Right, MattG is 100% backwards.
Even adamzerner probably doesn’t value his life at much more than, say, ten million, and this can likely be proven by revealed preference if he regularly uses a car. If you go much higher than that your behavior will have to become pretty paranoid.
That is an issue with revealed preferences, not an indication of adamzerner's preference order. Unless you are extraordinarily selfless, you are never going to accept a deal of the form "I give you n dollars in exchange for me killing you", regardless of n; therefore the financial value of your own life is almost always infinite*.
*: This does not mean that you put infinite utility on being alive, btw, just that the utility of money caps out at some value that is typically smaller than the value of being alive (and that cap is lowered dramatically if you are not around to spend the money).
I think you are mistaken. If you would sacrifice your life to save the world, there is some amount of money that you would accept for being killed (given that you could at the same time determine the use of the money; without this stipulation you cannot be meaningfully be said to be given it.)
Good point.
(Two people mentioned this so I figure I’ll just reply here.)
Re: the doctor's perspective. I see how it might be rational from his perspective. My first thought is, "why not just give me the info and let me decide how much money I'm willing to invest in my health?" I could see how that might not be such a good idea, though. From a macro perspective, perhaps those sorts of transaction costs might not be worth the benefits of added information → increased efficiency? Plus it'd be getting closer to admitting how much they value a life, which seems like it'd be bad from an image perspective.
I guess what I’m left with is saying that I find it extremely frustrating, I’m disappointed in myself for not thinking harder about this, and I’m really really glad you guys emphasized this so I could do a better job of thinking about what the interests are of parties I interact with (specifically doctors, and also people more generally). I feel like it makes sense for me to be clear that I would like information to be shared with me and that I’m willing to spend a lot of money on my health. And perhaps that it’s worth exercising some influence on my doctors so they care more about me. Thoughts?
The doctor you are with has a financial interest to treat you. When he advises you against doing something about the cyst he’s acting against his own financial interests.
Overtreatment isn't good if you value life very much. Every medical intervention comes with risks. We don't fully understand the human body, so we don't know all the risks.
From the perspective of the doctor the question likely isn’t: “How much money is the patient willing to invest in health” but “How much is the patient willing to invest for the cosmetic issue of getting rid of an ugly cyst”.
If the surgery isn’t necessary, and something goes wrong during it, does the doctor need to worry about getting sued?
If I remember right the best predictor for a doctor getting sued is whether patients perceive the doctor to be friendly.
Advising an unnecessary procedure might be malpractice, but informing a patient about the option to have it done, especially when there are cosmetic reasons for it, shouldn't be a big issue.
Even good doctors can get sued. But it speaks more to why people sue (the doctors did a bad human-interaction job rather than a negligent job).
I do wonder about the nature of doctoring. Perhaps you happen to get 3% (arbitrary number) of cases wrong; if you are also bad at people skills, this bites you, whereas if you get 3% wrong and you are good at people skills, you avoid being sued in 99% of those 3% of cases.
A perspective on the nature of medical advice: there exist people who are so concerned about not dying that they would do anything in their power to survive medically, and organise regular irrelevant medical tests for themselves, e.g. a brain scan for tumours (where no reason to think they exist is present). They are probably over-medicated and wasting a lot of time. There exist people who get yearly mammograms. There exist people who probably get around to their (recommended yearly) mammogram every few years. There exist people who have heart attacks from long-term lifestyle choices. There exist people who are so unconcerned about dying that they smoke.
This is the range of patients that exist. You sound like you are closer to the top in terms of medical concern. The dermatologist has to consider where on the spectrum you are when devising a treatment as well as where the condition is on the spectrum of risk.
For a rough estimate (I'm not a doctor), I would say the chance of a cyst on your ear killing you in the next 50 years is less than the chance of getting an entirely different kind of cancer and having it threaten your life. (Do you eat burnt food? Bowel cancer risk. Do you go in the sun? Skin cancer risk.)
If it can be removed by cream, it will still be gone. The specialist should suggest a biopsy to cover their ass, but really, it could be 99 different types of skin growths or a few types of cancerous growth. With no other symptoms there is no reason to suspect any danger exists.
The numbers you were given sound like they were fabricated on the spot. Which is a reason not to mathematically attack them, but to take them at the feeling value of "99.99% thumbs up". (And it's really hard, almost impossible, to find the 0.01%, so medically we don't usually bother.)
My advice would be:
1) See another doctor to get a second opinion. (And possibly a third opinion, if you don’t like the second doctor.) Keep looking for a doctor until you find one that explains things to you in enough detail so that you understand thoroughly. Write down the questions you want answered ahead of time, and take notes during your appointment. “I am confident” is a bullshit answer unless you understand what possibilities the doctor considered, why the doctor thinks this one is the most likely, what the possible approaches to dealing with it if it turns out to be “not fine” are, and their advantages and disadvantages, what warning signs to look for that might indicate it is not fine, and the mechanism by which the cream option would work.
Unfortunately, the state of medical knowledge is such that there may not be good answers to all of the questions. The best the doctor may be able to do is “I don’t know” for some of them. But you can get a better understanding of the situation than you have now, and a better understanding of where there are gaps in the medical knowledge.
2) Read a bunch of scientific papers about cysts and biopsies and tests so that you understand the possibilities and the risks better.
3) Also read about medical errors and risks of surgeries. People following doctor’s instructions is one of the leading causes of death in the USA. I read an article about it in JAMA a few years ago. There might be more up-to-date papers about it by now. Having a medical procedure done is not a neutral option when it comes to affecting your chances to continue living.
For example, here’s a paper that indicates that prostate biopsies could increase the mortality rate in men. This is just one study, not enough information to make an informed decision.
Boniol M, Boyle P, Autier P, Perrin P. Mortality at 120 days following prostatic biopsy: analysis of data in the PLCO study. Program and abstracts of the 2013 American Society of Clinical Oncology Annual Meeting and Exposition; May 31-June 4, 2013; Chicago, Illinois. Abstract 5022. http://onlinelibrary.wiley.com/doi/10.1002/ijc.23559/full
(deleted—everything I said was said by others already)
Saying 99.9999% seems a mouthful. Would you have preferred an answer like this instead: https://www.youtube.com/watch?v=7sWpSvQ_hwo :)
If brevity was the issue, I wouldn’t have expected him to say 5 instead of 9. And I would have expected him to use stronger language than he did. My honest impression is that he thinks that the chances that it’s something are really small, but nothing approaching infinitesimally small.
I’d say an expert in any field has better intuitions (hidden, unverbalized knowledge) than what they can express in words or numbers. Therefore, I’d assume that the decision that it’s not worth doing the examination should take priority over the numerical estimate that he made up after you asked.
It may be better to ask the odds in such cases, like 1 to 10,000 or 1 to a million. Anyway, it’s really hard to express our intuitive, expert-knowledge in such numbers. They all just look like “big numbers”.
Another problem is that nobody is willing to put a dollar value on your life. Any such value would make you upset (maybe you are the exception, but most people probably would). Say the examination costs $100 (just an example). Then if he’s 99.95% sure you aren’t sick, and 0.05% sure you are dying and sends you home, then he (rather your insurance) values your life at less than $200,000. This is a very rough estimation, but it seems in the right ballpark for what a general stranger’s life seems to be valued by the whole population. Of course it all depends on how much insurance you pay, how expensive the biopsy is etc. Maybe you are right that you deserve to be examined for your money, maybe not. But people tend to avoid this sort of discussion because it is very emotionally-loaded. So we mainly mumble around the topic.
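Writing out the implied-value arithmetic from that example (both inputs are the hypothetical numbers above, not real figures):

```python
exam_cost = 100            # hypothetical cost of the examination
p_dying = 0.0005           # the 0.05% chance of sending home a dying patient
implied_value_of_life = exam_cost / p_dying
print(implied_value_of_life)   # 200000.0: skipping the $100 test only "pays"
                               # if a life is valued at less than ~$200,000
```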
People are dying all the time out of poverty, waiting on waiting lists, not having insurance, not being able to pay for medicaments. But of course people who have more money can override this by buying better medical care. Depending on the country there are legal and not-so-legal methods to get better healthcare. You could buy a better package legally, put some cash in the doctor’s coat, etc.
You need to consider that the people who'd do your biopsy can do other things as well, for example work on the biopsy of someone who has a 1% chance of dying instead of your 0.05% (assuming this figure is meaningful and not just a forced, uncalibrated guess).
If you confronted your doctor with these things, he’d probably prefer to just revoke that probability estimate and just say his expert opinion is that you don’t need the biopsy, end of story. It would be very hard for you to argue with this.
It’s quite easy to get more expensive healthcare. On the other hand that doesn’t mean the healthcare is automatically better.
If you are willing to pay for any treatment out of your own pocket, then a doctor can treat you in a way that isn't being paid for by an insurance company because it's not evidence-based medicine.
It can still be evidence-based, just on a larger budget. I mean, you can get higher quality examinations, like MRI and CT even if the public insurance couldn’t afford it. Just because they wouldn’t do it by default and only do it for your money doesn’t mean it’s not evidence based. Evidence-based medicine doesn’t say that this person needs/doesn’t need this treatment/examination, it gives a risk/benefit/cost analysis. The final decision also depends on the budget.
It seemed to me that the proposition was made under false assumptions. Specifically, I value my life way more than most people do, and I value the costs of time/money/pain less than most people do. He seemed to have been assuming that I value these things in a similar way to most people.
Yeah, I understand this now. Previously I hadn’t thought enough about it. So given that I am willing to spend money for my health, and that I can’t count on doctors to presume that, it seems like I should make that clear to them so they can give me more personalized advice.
How do you know? Because you do things like flossing every day? Healthcare economics quite frequently mean that a person prefers to pay more rather than less to signal to themselves that they do everything in their power to stay alive.
People quite frequently make bad health decisions because buying an expensive treatment feels like doing something to stay healthy, while it's much more emotionally difficult to do nothing.
I understand that for a lot of people, the X isn’t about Y thing applies. That investing in health might be about signaling to oneself/others something. But I assure you that I genuinely do care. Maximizing expected utility is a big part of how I make decisions, and I think that things that reduce the chances of dying have very large expected utilities (given the magnitude of death). That said, I’m definitely not perfect. I ate pizza for lunch today :/
“Willing to spend money” meaning that you’re willing to pay out of pocket for medical procedures? Or that you are willing to fight your insurance so that it pays for things it doesn’t think necessary?
And doctors are supposed to ignore money costs when recommending treatment (or lack of it) anyway. If you want "extra attention", I suspect that you would need to proactively ask for things. For example, you can start by doing a comprehensive blood screen, and I do mean comprehensive: a variety of hormones, a metals panel, a cytokine panel, markers for inflammation, thyroid, liver, etc. etc. You will have to ask for it; assuming you're reasonably healthy, a normal doctor would not prescribe it "just so".
I’m willing to spend out of pocket. More generally, I value my life a lot, and so I’m willing to undergo costs in proportion to how much I value my life.
You're constrained by the size of your pocket :-) Being willing to spend millions on saving one's life is not particularly relevant if your current bank balance is $5.17.
Very rich people can (and do) hire personal doctors. That, however, has its own failure modes (see Michael Jackson).
Yeah, I know. It’s just hard to be more specific than that. I guess what I mean is that I am willing to spend a much larger portion of my money on health than most people are.
Is that a revealed preference? ;-)
Inspired by terrible, terrible Facebook political arguments I’ve observed, I started making a list of heuristic “best practices” for constructing a good argument. My key assumptions are that (1) it’s unreasonable to expect most people to acquire a good understanding of skepticism, logic, statistics, or the ways the LW-crowd thinks of as how to use words rightly, and (2) lists of fallacies to watch out for aren’t actually much help in constructing a good argument.
One heuristic captured my imagination as it seems to encapsulate most of the other heuristics I had come up with, and yet is conceptually simple enough for everyone to use: Sketch it, and only draw real things. (If it became agreed-upon and well-known, I’d shorten the phrase to “Sketch it real”.)
Example: A: “I have a strong opinion that increasing the minimum wage to $15/hr over ten years (WILL / WON’T) increase unemployment.” B: “Oh, can you sketch it for me? I mean literally draw the steps involved with the real-world chain of events you think will really happen.”
If you can draw how a thing works, then that’s usually a very good argument that you understand the thing. If you can draw the steps of how one event leads to another, then that’s usually a good argument that the two events can really be connected that way. This heuristic requires empiricism and disallows use of imaginary scenarios and fictional evidence. It privileges reductionist and causal arguments. It prevents many of the ways of misusing words. If I try to use a concept I don’t understand, drawing its steps out will help me notice that.
Downsides: Being able to draw well isn’t required, but it would help a lot. The method probably privileges anecdotes since they’re easier to draw than randomized double-blind controlled trials. Also it’s harder than spouting off and so won’t actually be used in Facebook political arguments.
I’m not claiming that a better argument-sketch implies a better argument. There are probably extremely effective ways to hack our visual biases in argument-sketches. But it does seem that under currently prevailing ordinary circumstances, making an argument-sketch and then translating it into a verbal argument is a useful heuristic for making a good argument.
As far as I understand CFAR teaches this heuristic under the name “Gears-Thinking”.
Does that name come from the old game of asking people to draw a bike, and then checking who drew bike gears that could actually work?
One thing you might want to consider is the reason people are posting on Facebook... usually, it's NOT to create a good argument, and in fact, sometimes a good, logical argument is counterproductive to the goal people have (to show their allegiance to a tribe).
You might like www.yourlogicalfallacy.com
Can anyone think of a decision which might come up in ordinary life where Bayesian analysis and frequentist analysis would produce different recommendations?
The core difference between B and F is what they mean by “probability.” If you go to the casino, the Bs and the Fs will interpret everything the same way, but when you go to the stock market, the Bs and the Fs will want to use their language differently. It seems likely to me that most of the uncertainties that show up in everyday life are things that Bs would be comfortable assigning probabilities to, but Fs would be hesitant about.
When it comes to an action, you must structure your knowledge in Bayesian terms in order to compute an expected utility. It is only when discussing detached knowledge that other options become available.
??? This isn’t true unless I misunderstood you. There are frequentist decision rules as well as Bayesian ones (minimax is one common such rule, though there are others as well).
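As a standard textbook illustration of how the two kinds of decision rules can recommend different numbers (a sketch for a toy problem, not tied to any claim above): estimating a binomial proportion under squared-error loss.

```python
import math

def mle(x, n):
    """Classic frequentist point estimate: the observed frequency."""
    return x / n

def minimax(x, n):
    """Minimax estimator under squared-error loss: (x + sqrt(n)/2) / (n + sqrt(n))."""
    return (x + math.sqrt(n) / 2) / (n + math.sqrt(n))

def bayes_uniform(x, n):
    """Posterior mean under a uniform Beta(1,1) prior: (x + 1) / (n + 2)."""
    return (x + 1) / (n + 2)

# 2 successes out of 10 trials: the rules give noticeably different estimates,
# so a decision keyed to a threshold (say, "act if p < 0.25") can come out
# differently depending on which rule you use.
x, n = 2, 10
print(mle(x, n))            # 0.2
print(minimax(x, n))        # ~0.272
print(bayes_uniform(x, n))  # 0.25
```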
In what sense is minimax frequentist?
From Wikipedia:
ETA: While that page talks about estimating parameters, most of the math holds for more general actions as well.
I don’t think that “non-bayesian” is a common definition of “frequentist.” In any event, it’s not a useful category.
Philosophers are apparently about as vulnerable as the general population to certain cognitive biases involved in making moral decisions, according to new research. In particular, they are just as susceptible to the order of presentation affecting how moral or immoral they rate various situations. See the summary of the research here. The actual research is unfortunately behind a paywall.
A paper “Philosophers’ Biased Judgments Persist Despite Training, Expertise and Reflection” (Eric Schwitzgebel and Fiery Cushman) is available here: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Stability-150423.pdf
Very interesting, thanks for finding it.
The methods and statistics look good (feel free to correct me). However, I wish the authors had controlled for gender. I don't think it would significantly change the results, but behavioral finance research indicates that men are more susceptible to certain behavioral biases than women:
https://faculty.haas.berkeley.edu/odean/papers/gender/BoysWillBeBoys.pdf
Admittedly, “Boys Will Be Boys” addresses overconfidence bias rather than framing and order biases.
An interactive Twitch stream of a neural network hallucinating. Or: Twitch plays a large-scale deep neural net.
EDIT: Fixed link.
You've messed up the link; this is it:
http://www.twitch.tv/317070
Some more links: the blog post and a ten minute sample that you put on youtube. I imagine that there are many people who prefer youtube to twitch. In particular, I like the 2x setting on youtube.
I’m amazed you found that video since I haven’t posted it anywhere yet. I’m still trying to figure out how to add more than 2 minutes of music to it.
I found it by putting the Twitch title into the YouTube search bar. I tried it because people copy all sorts of videos to YouTube.
What do you all think of “General Semantics”? Is it worth e.g. trying to read “Science and Sanity”? Are there insights / benefits there that can’t be found in “Rationality: AI to Zombies”?
Science and Sanity contains a lot of good insights that aren’t in the sequences. The problem is that it’s not an accessible book. It hard to read and a substantial time investment.
Do you think this is an intrinsic property of the insights, or could someone compress the book in to something shorter, more readable, and almost as useful?
I don't think the problem is that the book is long. It's that it basically defines its own language and is written in that language. It's similar to a math textbook defining terms and then using those terms.
It defines, for example, the term "semantic reaction" and then goes on to abbreviate it as s.r. The gist is that if you say something, the meaning of what you say is the reaction that happens in the brain of your listener when he hears the words.
It’s not hard to understand that definition on a superficial level. On the other hand it’s hard to really integrate it. It’s a fundamental concept used throughout the book.
There is a paper out, the abstract of which says:
Before you go look at the link, any guesses as to what the [group X] is? X-/
I correctly guessed what X was. Because there’s only one thing it could ever be, unless the paper was talking about very unusual subgroups like Jehovah’s Witnesses in Mormon territory.
Well, it could be creationist zoologists, or satanist school teachers, or transgender fashion models. But of course it’s psychologists studying psychologists, and of course it’s reiterating an interesting narrative we’ve seen before.
One would expect creationists to be underrepresented in zoology for a number of reasons, only one of which is that zoologists have negative beliefs about creationists and tend not to hire or encourage them. Others would include that creationists may avoid studying zoology because they find the subject matter unpleasantly contradictory to their existing commitments; and that some people previously inclined to creationism who study zoology cease to be creationists.
Anecdotally, I know at least one creationist zoologist, although I don't think he publishes creationist stuff. He doesn't stand out at all or have any noticeable trouble because of it. All the zoologists I know are weirder than the average person.
That’s an interesting observation, isn’t it?
Between the word “beliefs” (which rules out most demographic groups), the word “openly” (which rules out anything you can’t easily hide), and the existence of a plausible “anti-X” group (which rules out most multipolar situations), there’s not too many possibilities left. The correct answer is the biggest, and most of the other plausible options are subsets of it.
I suppose it could also have been its converse, but you don’t hear too much about discrimination cases going that way.
I think that ngurvfgf would have been a plausible X in some places (and perhaps the opposite in others), but the correct one was the first that came to mind and the one I considered most likely.
ROT13: V thrffrq pbafreingvirf pbeerpgyl, nygubhtu V’z cerggl fher V unq urneq fbzrguvat nobhg gur fghql ryfrjurer.
Cbyvgvpnyyl pbafreingvir.
I haven’t looked. Pbafreingvirf.
fbpvny pbafreingvirf
Sam Altman’s advice for ambitious 19 year olds.
I don't know of Sam Altman, so maybe this criticism is wrong, but the quote: "If you join a company, my general advice is to join a company on a breakout trajectory. There are usually a handful of these at a time, and they are usually identifiable to a smart young person." Absent any guide on how to identify breakout-trajectory companies, this advice seems unhelpful. It feels like: "Didn't work for you? You must not have been a smart young person, or you would have picked the right company."
Paired with the paragraph below on not letting salary be a factor, I am left with the suspicion that Sam runs what he believes to be a company with a ‘breakout trajectory’ and pays noncompetitive salaries.
Now to find a way to test that suspicion.
I have read something like this on a rationalist blog somewhere. Basically it was a type of advice like “you want to win the race? well, just run fast! just put one foot in front of the other quicker than others do, d’uh!”
Maybe we need a name for this.
Sam Altman is the president of Y Combinator.
I think the way to look for a company on a breakout trajectory is to find a company that is growing fast and getting a lot of buzz but has not become established and is not thoroughly proven yet. Even better might be to find a company that’s growing fast but not getting a lot of buzz, but that’s probably trickier.
As the president of YC, he doesn’t really hire anyone, but he does fund lots of companies, and his advice could be interpreted as: work for a YC company.
The more precise cynical interpretation would be “work for a promising early stage YC company”. Note that he also could have told you to work for a late stage one or apply to YC in order to start one. But it’s probably true that working at a promising early-stage YC company is what would most benefit YC on the margin. (Although if what benefits YC most on the margin is what generates the most value, then generating more value for YC also seems like a good way to generate enough value that you capture a significant chunk.)
These types of advice are really not honest enough, I think. Let me try an honest one:
1) Move to America if you don’t already live there. Bluff your way through immigration officers and whatnot.
2) Move to the Silicon Valley if you don’t already live there. Deal with the costs of living there / outside your parents house anyhow.
3) Acquire enough money, lump sum or regular income, that you can focus on chasing shiny things for years without pay. Consider getting reincarnated into a well-to-do family; that helps.
4) The above is still true if you intend to join a company. Unless you want to join the kind of company where you are okay with HR drones keyword-buggering and credential-combing your resume and requiring 3 years of experience in technologies 2 years old, which is not really what the truly ambitious like, those years will be spent on getting to know excellent founders and making the kind of stuff on your own that convinces them to let you join them. I.e. chasing shiny things without pay before you can join the right kind of company.
5) Be a programmer, because there are very few professions where you can just casually build things as you see fit. As a programmer you get away with not having access to anything but your brain, the net, and a laptop. If you are e.g. a sculptor and your dream is to build a 50m tall Darth Vader statue out of bronze, well, that is going to require some harder-to-acquire stuff as well. If you took all this building a bit too literally and graduated in civil engineering, your chances of starting out on your own after graduating are nil, these kinds of startups don't exist, and you will probably work 10 years at the construction equivalent of Microsoft before you can try to start out on your own and finally do something interesting. So be a programmer. Don't like programming and computer technology so much? Still be a programmer, or at the very least figure out real hard how to graduate in something that 1) can be made without really expensive inputs and 2) scales up readily to serving many customers simply by gradually renting more stuff and hiring more people, staying ahead of cash flow. (John, this MealSquares is an excellent example of a non-programming activity that satisfies these criteria. But imagine if your knack was for designing dams. People must invest a ton of money into building them, so they will not hire a young nobody, and you can only design one dam at a time; it does not scale up to serving many customers simultaneously.) Have no idea what could be like this, besides programming? Be a programmer. Or a musician.
6) After all these preconditions are done, then you are ready to read Sam Altman and similar folks (Graham etc.)
Seeking writing advice: Tropes vs writing block?
I’ve started writing bits and pieces for S.I. again, but not nearly at the rate I was writing before my hiatus.
I’m beginning to wonder if I should cheat a bit, and deliberately leave some of the details I’m having trouble getting myself to write about vague, and explain it away with some memory problems of Bunny-the-narrator for that period. Goodness knows there are plenty of ways Bunny’s brain has been fiddled with so far, so it’s not without precedent; and if it gets me over the hump and into full-scale writing again, it might be worth including the trope for that reason alone, let alone adding another mental issue to play with narratively.
Anyone have any thoughts?
Would it maybe help to leave some of the details vague at first, to get back into writing, and go back later to rewrite those parts?
That seems to be the default that I’m settling on. I’m jotting down the plot points I want to happen in such sections, marking them so I know that I have to go back to that, and working on whatever I /can/ get myself to work on in the meantime.
From the way things seem given your recent posts about struggling with getting words onto the page, I would suggest doing anything that actually gets you moving in that direction. If you are stuck on one particular bit, by all means skip it for now. Whether that means incorporating this into the narrative, or coming back later for clean-up, depends on the product itself (I haven’t read the work you are talking about).
A more general aside: I’ve found myself in a very similar position, finding it incredibly hard to put words on the page yet needing to do so more and more urgently. I’ve seen a few comments you made before about preparing optimal writing situations and planning for them—I did exactly the same and in retrospect it seems this was a bad strategy for me. Mainly because such preparations got me thinking more and more about providing an optimal situation for written productivity: in essence setting up small “writing retreats” now and again. This became a self-perpetuating loop of non-writing, because doing so provided perfect excuses for NOT writing at any other time.
A friend who is a (now retired) writer suggested that instead, I work on writing despite distractions, rather than constraining my writing effort to those situations where all distractions are minimised. In alternating weeks I tried the different techniques (A,B,B,A, where A=my old approach of writing in optimal situations and B=explicit attempt to write in distracting environments I wouldn’t consider suitable for “A”). It turned out that B>A both in minutes spent writing (+125%) and in wordcount (+160%). Quality of work under “B” might have been lower but I don’t seem to have a block in editing and revising, only in first drafting.
I want to do a PhD in Artificial General Intelligence in Europe (not machine learning or neuroscience or anything with neural nets). Anyone know a place where I could do that? (Just thought I’d ask...)
IDSIA / University of Lugano in Switzerland is where e.g. Schmidhuber is. His research is quite neural network-focused, but also AGI-focused. Also Shane Legg (now at DeepMind, one of the hottest AGI-ish companies around) graduated from Lugano with a PhD thesis on machine superintelligence.
“AGI but not machine learning or neuroscience or anything with neural nets” sounds a little odd to me, since the things you listed under the “not” seem like the components you’ll need to understand if you want to ever build an AGI. (Though maybe you meant that you don’t want to do research focusing only on neuroscience or ML without an AGI component?)
Zoubin Ghahramani / Carl Rasmussen (Cambridge)
Michael Osborne / Yee Whye Teh (Oxford)
Microsoft Research Cambridge
Just wondering why you don’t want to do machine learning? Many ML labs have at least some people who care about AI, and you’ll get to learn a lot of useful technical material.
A little while back, someone asked me ‘Why don’t you pray for goal X?’ and I said that there were theological difficulties with that and since we were about to go into the cinema, it was hardly the place for a proper theological discussion.
But that got me thinking, if there weren’t any theological problems with praying for things, would I do it? Well, maybe. The problem being that there’s a whole host of deities, with many requiring different approaches.
For example, if I learnt that the God of the Old Testament was the right one, I would probably change my set of acceptable actions very, very quickly. Perhaps another reasonable response would be to try and very carefully convince this God to change its mind about a couple of things, since the God of the Old Testament is capable of change, if I remember rightly.
On the other end of the spectrum, what about the Greek gods? Well, I think it would still be a good idea to try and convince them not to be, you know, egotistical tyrants. Or failing that, humanity should probably try to contain them in some fashion, because who'd want someone like Zeus going about as they pleased?
And if Aristotle’s Prime mover were real… Well, I guess you’d just ignore it.
Anyway, I think it's a pretty interesting topic, if not a very useful one.
Any thoughts on how you'd react to any of humanity's collection of deities?
Does anyone know about any programs for improving confidence in social situations and social skills that involve lots of practice (in real-world situations or in something scripted/roleplayed)? Reading through books on social skills (e.g. How to Win Friends and Influence People) seems to provide a lot of tips that would be useful to implement in real life, but they don't seem to stick without actually practicing them. The traditional advice to find a situation in your own life that you would already be involved in hasn't worked well for me because it is missing features that would be good for learning (it's sporadic, not repeatable, you can't get feedback on your performance from someone who knows what they are doing, there are a lot of things going on beyond the aspects you want to focus on, things can move on without giving you time to think, etc.). For example, this might look like a workshop that involved a significant amount of time pairing up with other participants and practicing small talk, with breaks in between to cool down, get feedback, and learn new tips to practice in later rounds.
There’s a number of “game” related courses that take this approach. Most of these programs involve going out, and continually approaching and interacting with people, with specific goals in mind.
There’s the connection course (This one is probably the closest you’re looking for, as he’s reworked it to remove all “gamey” stuff, and just focused on social interactions): http://markmanson.net/connection-course
There’s the Collection of Confidence: http://www.amazon.com/The-Collection-Of-Confidence-HYPNOTICA/product-reviews/B000NPXWT8
Stylelife academy: http://web.stylelife.com/
Ars Amorata: http://www.zanperrion.com/arsamorata.php
and a whole bunch more.
Edit: in my area at least, there are also practice groups for Nonviolent Communication on meetup.com
The Rejection Game using Beeminder can be a good start for social skills development in general
If you’re interested in a specific area of social interactions then finding a partner or two in that area could help out. Toastmasters, pua groups, book clubs, and improv groups fall into this category.
Alternatively, obtaining a job in sales can take you far
My impression of Toastmasters is that it might be similar to what I’m looking for, but only covers public speaking.
Advice about picking in-person training is location-dependent. Without knowing what's available where you live, it's impossible to give good recommendations.
Recommendations for in person training around the Bay Area would be useful (as I’m likely to end up there).
California is a good place. A lot of personal development frameworks come from California. It's very likely that there are good things available in California that are not known outside of it. Ask locals at LW meetups for recommendations.
There seem to be regular Authentic Relating/Circling events in Berkeley: https://www.facebook.com/ARCircling We had a workshop in that paradigm at our European LW Community Event and it was well liked. I also attended another workshop in that framework in Berlin. Describing the practice isn't easy, but its goal is to have deep conversations with other people that produce the feeling of having a relationship with them.
I have spent multiple years in Toastmasters and wouldn’t recommend it if your goal isn’t being on stage. Toastmasters Meetings usually have 20+ people in a room and only one person speaking at a time. That means relatively little speaking time per person.
Toastmasters is also very structured. For me, the ability to give a good 2-minute Table Topics speech didn't translate into the ability to tell a funny story in a small-talk context. Toastmasters has a nice and fun atmosphere, but it feels a bit artificial in a way that Circling isn't. Trying to cut the number of "Ahm"s in a speech by focusing on the "Ahm" itself, instead of on the underlying emotional layer, is from my perspective suboptimal.
Bryan-san also gave the recommendation of attending PUA groups. It’s hard to really know the relevant outcomes. There are people who do have some success via that framework but it also makes some people more awkward. If you do PUA cold approaching you might get feedback from another PUA but you usually don’t get honest feedback from the actual woman with whom you are interacting. Authentic Relating on the other hand provides a framework that isn’t antagonistic.
PUA success varies by region and local culture. In some urban areas, anecdotally, women have started judging men’s PUA “game”.
I think it pattern-matches on a "correct" behavior, but is self-defeating; it is built on the idea that women, like men, want to have casual sex. The "correct" behavior is, indeed, being something of a jerk, but the approach is self-defeating because it treats rudeness as the desired quality rather than as a signal of a desired quality: jerks aren't likely to pester you for follow-up dates, which is to say, they are interested in strictly casual sex.
It’s self-defeating, because as soon as men who are interested in more meaningful relationships start utilizing the technique of being a jerk, being a jerk stops being a useful signal of -not- being interested in more meaningful relationships. (Being -very good- at being a jerk, on the other hand, probably -does- pattern-match pretty well with interest in strictly casual sex, hence the anecdotal accounts of women judging PUA “game”.)
The whole thing gets messier on account of individual differences. Some women want to be hit on, some don’t, some want one approach, some want another, some are receptive to the idea of longer-term relationships, some aren’t—in short, women are people, too. No single “framework” is going to accommodate everybody’s desires, and those who push a monoculture ideal are being narrow-minded. And dating signaling is, frankly, terrible, and often abused, intentionally or unintentionally. (Women signaling desire for casual sex to get free drinks, men signaling desire for long-term relationships to get casual sex, for two of the common complaints.)
Getting outside that, my personal practice is to strike up random conversations with strangers; small talk is the grease that gets conversation going. Treat small talk as a skill with a toolbox of techniques. Your toolbox should contain a list of standard questions for strangers: what do you do for a living, who are you rooting for in (current sports competition), where were you born, how did you end up in this hellhole, etc. The more you do it, the better you get, or at least the more comfortable. Small talk with other smokers while smoking helped my conversational abilities immensely, although for obvious reasons I wouldn't necessarily advocate that.
The problem is not only about the woman but about the man. Quite a few men who go into PUA never end up in a state where they are comfortable striking up random conversations with strangers.
Recently I went to a local "get out of your comfort zone" meetup in Berlin, led by someone who authored a book on comfort zone expansion and has a decade in the personal development industry. Surprisingly, we didn't go out to start conversations with strangers. His main argument against going down that road was that people without previous experience often experience those exercises in a dissociated way instead of an associated way.
PUA quite often leads to people trying to influence the woman instead of paying attention to their own emotions and dealing with those emotions in a constructive fashion.
It’s certainly possible to have toolbox smalltalk and do okay with it. Developing genuine curiosity for the other person and letting that curiosity guide your questions is both more fun and more likely to create a connection.
I’m not advocating monoculture. I also don’t think nobody should do PUA. It’s just worth noting that PUA doesn’t deliver for many people who buy into it.
The toolbox gives you a starting point; it’s not meant to be the entirety of the conversation, but rather starting points. It’s relatively easy to maintain a conversation, harder to start one. Curiosity doesn’t begin until you have something to be curious about in the first place.
I agree that PUA doesn’t give people what they’re looking for, most of my comment was intended to explain why. (Short summary: It’s about sex, not conversation.)
When standing at a bus stop, do you ask a stranger: "What do you do for a living?" To me that doesn't seem like a good conversation starter.
"Do you know in how many minutes the bus will arrive?" can be a curiosity-based question that's socially acceptable to ask. I'm standing next to a stranger and that question comes to mind; I notice that I have a question where I'm interested in the answer. I can either look at the bus timetable on my phone to figure out the answer, or I can ask the other person.
There are many instances like that where you can choose the social way of dealing with the situation.
I think even for people who think they want sex, it often doesn't deliver on its promise.
The reason women who want casual sex are attracted to jerks isn't that jerks aren't likely to want follow-up dates; it's that if getting the father to help raise the kids is out of the question, you want the best possible sperm. Granted, today the woman is likely to use a condom or abort because she doesn't want children, but that's adaptation execution for you.
Are you an evolutionary strategy? Do your preferences all reduce down to evolutionary strategies?
My preferences are shaped by my genes (which were shaped by evolution), and my experiences as interpreted by the systems built by my genes.
A subset of Speech Therapy (especially for Autism Spectrum) covers exactly this sort of thing. I rather doubt it’s what you’re looking for, even if it’s an option, but it fits what you described almost perfectly. The major issues would be the tendency toward a more clinical setting, only being an hour or so a week, the limited pool of people to practice with, and establishing your existing skills.
Sometimes career centres at universities or community colleges have workshops to practice job interviewing and networking. You could see if there’s something like that near you.
Can we look at Orbán’s Hungary as a real-life laboratory of whether NRx works in practice?
I recall reading somewhere (slatestarcodex, I think) that the neoreactionaries have three main strains: ethnocentric, techno-futurist/capitalist, and religious-authoritarian. In light of that, I wonder if Israel isn't a better example than Hungary. Israel is technologically advanced but also a strict ethnostate with some theocratic elements. Like Hungary, Israel is probably too democratic to really qualify.
The problem with Israel is that the religious elements are based on a religion still optimized for exile rather than being a national religion. They still haven’t rebuilt the temple, for crying out loud.
Why do you think Orbán's Hungary is a good example of NRx ideas implemented?
Example and example.
So why not Putin himself? Or the Belorussian guy? Or any of the Central Asian rulers? If the criterion is rejection of liberal democracy, why not China?
Those countries were never very liberal to begin with, so their departure from Western values doesn’t look like what the experiment needs. Hungary, on the other hand, has a solid history of resistance to totalitarianism that only in the past half decade has had to face the threat of dictatorship.
There is more to NRx than just giving up liberal values. For example, Hungary still has elections that this guy has to win, so I guess they would still classify the country as “demotist”.
When they make a revolution, abolish democracy, declare Orbán a hereditary king, and possibly when he hires Ernő Rubik as Chief Royal Scientist to solve all the country's problems, then we'll have a good example.
AFAIK NRx are quasi-libertarians in the Hoppean (or Pinochetian) sense, who largely want to use political authoritarianism in the service of economic libertarianism. Orban is pretty much the opposite: an economic statist on a nationalist basis. Socially they can be similar, but economically not. Orban is closer to US paleocons like Pat Buchanan, who are not full believers in free markets: they accept economic intervention, just not on a left-wing / egalitarian basis but on a nationalist-protectionist basis, e.g. not shipping jobs abroad.
I admit this is a bit complicated, because economic libertarianism and anti-libertarianism mesh with different ideologies depending on what aspect of non-intervention they focus on. For example, those US right-wingers who focus primarily on low taxes and low social spending are closer to Orban; those who want all kinds of spending low, not just social spending, are not so close; those who focus on free trade are far away from him; and those who focus on privatizing things are the farthest. The Eastern European right tends to be anti-privatization, because privatization tends to lead to foreigners acquiring things, and that does not mesh well with their nationalism.
It’s a bit complicated.
But I see the primary difference as this: Orban plays the man-of-the-people role, talks about a "plebeian" democracy, and frequently asks voters for their opinion on issues, so NRx would still call him a "demotist"; he plays that Little-Guy-against-the-liberal-elites role that is perhaps closer to Tea Party folks. In short, he is far more anti-liberal than anti-democratic: he plays more the role of a rural conservative democrat against aristocratic liberal elites, and his primary goal seems to be strengthening the national state against international liberal capitalism. He is very much the anti-Soros, and that is explicit (there are few people the Eastern European Right hates more than George Soros, both because of his liberal views and because of his capitalist exploits).
European terminology tends to call this all populism. Anti-liberalism both in lifestyle and economics, focusing on the working class guy who is both anti-capitalist and conservative/traditional in lifestyle, with a rural tinge.
And I don’t think populism and NRx would mesh well unless I really ignored a big aspect of NRx but e.g. Anissinov looks like an anti-populist pro-aristocrat to me.
I’m not quite a NRx but from what I hear about him I like Orban.
As long as you don’t care much about economic libertarianism, privatizing all the things etc. but only social conservatism, you can be on the same page.
Admittedly, the whole economic libertarianism thing plays out differently in the center vs. the periphery of globalization. In the center, such as the US, where businesses are owned by people of those countries, anti-libertarianism usually means egalitarianism. In the periphery, where businesses are usually foreign-owned, anti-libertarianism usually means economic nationalism and protectionism. The latter is culturally far more palatable for culturally conservative people, but Rothbard types would still be disgusted by it.
BTW you see the same story, on a far larger and more transparent scale, in Russia. Classical liberalism / libertarianism is equated with Yeltsin, that is equated with selling all the things to foreigners, and his memory is very much hated on the Russian Right. They may be down with the type of libertarianism that is mostly about tax cuts, but they really draw the line at letting foreigners get a lot of economic influence. (Not that Yeltsin was anywhere near being a principled libertarian; he just really liked selling things. I think the only principled libertarian east of Germany is Vaclav Klaus.)
It’s still a democracy which has elections that the OCED can inspect Hungary. It’s also still a member of the EU. That means it’s subject to all sorts of legislation from Brussels and action by the EU Court of Human Rights.
It seems like Hungary has to pay billions to the churches due to a verdict of the EU Court of Human Rights.
this was an unhelpful comment, removed and replaced by this comment
If you just want a basic “display information” website, go with wordpress.
If you’re looking to do a full web-app, I’d recommend either Drupal, or Wordpess with the Toolset plugins.
Wordpress is open source. That’s a good thing, and important.
I’ve mostly been here for the sequences and interesting rationality discussion, I know very little about AI outside of the general problem of FAI, so apologies if this question is extremely broad.
I stumbled upon this facebook group (Model-Free Methods) https://www.facebook.com/groups/model.free.methods.for.agi/416111845251471/?notif_t=group_comment_reply discussing a recent LW post, and they seem to cast LW's "reductionist AI" approach to AI in a negative light compared to their "neural network paradigm".
These people seem confident deep learning and neural networks are superior to some unspecified LW approach. Can anyone give a high level overview of what the LW approach to AI is, possibly contrasted with theirs?
There isn’t really a “LW approach to AI,” but there are some factors at work here. If there’s one universal LW buzzword, it’s “Bayesian methods,” though that’s not an AI design, one might call it a conceptual stance. There’s also LW’s focus on decision theory, which, while still not an AI design, is usually expressed as short, “model-dependent” algorithms. It would also be nice for a self-improving AI to have a human-understandable method of value learning, which leads to more focus diverted away from black-box methods.
As to whether there’s some tribal conflict to be worried about here, nah, probably not.
I think this sums up the problem. If you want to build a safe AI you can’t use neural nets because you have no clue what the system is actually doing.
If we genuinely had no idea of what neural nets were doing, NN research wouldn’t be getting anywhere. But that’s obviously not the case.
More to the point, there’s promising-looking work going on at getting a better understanding of what various NNs actually represent. Deep learning networks might actually have relatively human-comprehensible features on some of their levels (see e.g. the first link).
Furthermore it’s not clear that any other human-level machine learning model would be any more comprehensible. Worst case, we have something like a billion variables in a million dimensions: good luck trying to understand how that works, regardless of whether it’s a neural network or not.
Perhaps it would be beneficial to introduce life to Mars in the hope that it could eventually evolve into intelligent life in the event that Earth becomes sterilized. There are some lifeforms on Earth that could survive on Mars. The Outer Space Treaty would need to be amended to make this legal, though, as it currently prohibits placing life on Mars. That said, I find it doubtful that intelligent life would ever evolve from the microbes, given how extreme Mars's conditions are.
If you want to establish intelligent life on Mars, the best way to do that is by establishing a human colony. Obviously this is unlikely to succeed but trying to evolve microbes into intelligent life is less likely by far.
The likelihood of success of establishing a human colony depends on the timeframe.
If there’s no major extinction event I would be surprised if we don’t have a human mars colony in 1000 years. On the other hand having a colony in the next 50 years is a lot less likely.
Can anyone help me understand the downvote blitz for my comments on http://lesswrong.com/lw/mdy/my_recent_thoughts_on_consciousness/ ?
I understand that I’m arguing for an unpopular set of views, but should that warrant some kind of punishment? Was I too strident? Grating? Illucid? How could I have gone about defending the same set of views without inspiring such an extreme backlash?
The downvotes wouldn’t normally concern me too much but I received so many that my karma for the last 30 days has dropped to 30% positive from of 90%. I’d like to avoid this happening again when the same topic is under discussion.
You note: “I did not really put forth any particularly new ideas here, this is just some of my thoughts and repetitions of what I have read and heard others say, so I’m not sure if this post adds any value.”
Many readers (myself included) are already familiar with these sources, and so the post comes across as unoriginal. It is basically you rephrasing and summarizing things that a lot of people have already read. In other words, it’s probably not that people are downvoting to disagree, but because they don’t see a response-journal reiterating well-known views as a good Main post. It’s not “Go away, you are not smart enough to post here!” but “Yes, yes, we know these things; this particular post here is not news.”
The post has far too much “I think”, “I realized”, “it seems to me” language in it. It’s your post; of course it is about what you think. In conversation those kind of phrases are used to soften the impact of a weird view strongly stated, but in writing they make it sound like the writer is excessively wrapped up in themselves.
(On the other hand, if the important part is the sequence of your realizations, then present the evidence that convinced you, not just assertions that you had those realizations.)
While different language communities have different standards for paragraph length, by the standards of current Web writing, your paragraphs are often way too long. To me, long block paragraphs come across as “kook sign” — that is, they lead me to think that the writer’s thinking is disorganized.
I am not the OP of the thread I linked to. Most of the downvotes I received (in the comments) of that post have been reversed. Thanks for replying though.
Ah, oops. Indeed, I thought you were the poster and were asking for an explanation of the downvotes to the post.
If someone on LW mentions taking part in seriously illegal activities (in all jurisdictions), am I morally obliged to contact the police/site admin? I don’t think the person in question is going to hurt anyone directly.
Speaking of which, who is the site mod? Vladmeir someone?
EDIT: I think I misunderstood and the situation isn’t bad enough to need reporting to anyone. He was only worrying about whether he wanted to do certain things, rather than actually doing them.
NancyLebovitz is the newest moderator at present, and I believe the only really active one, at least in day-to-day operations. Viliam_Bur was previously in that role, but he backed off in January due to other time commitments.
There is a moderator list here
I hope not (though your morality is your morality, of course). Bringing the cops into an online discussion is very VERY rarely a proper thing to do, IMHO.
This was a very serious matter—I would not have considered calling the police for most things.
I didn’t see the post I believe you’re referring to before the author redacted it, but for me the line would be real danger to other people, which you say you don’t think is the case. It any case, it would be best to go through the mods first. A pseudonymous post recounting deeds in (perhaps) unspecified places and times isn’t something the police can work with. Also, summoning the police is not to be done lightly, for once summoned, no power can banish them whence they came.
I have thought about it, and contacting the police straight away would only be the right thing to do if there was some imminent danger. I probably wouldn’t have mentioned it, except that it was the sort of thing which can pose an indirect threat or lead to behaviour which does hurt other people.
Anyway, it appears I got the wrong impression, and he was only obsessing over the hypothetical possibility of doing things, rather than actually doing them. So this is one of the times when it's good that I didn't impulsively do the first thing that popped into my head and instead stopped to think about it.
I saw the post; it was a mix-up.
You may or may not be legally obligated (although this obligation is not realistically enforceable by law); as for morally obligated, it depends upon the nature of the act. There is imperfect overlap between things defended by law and by morality. If we’re talking piracy or jaywalking or buying modafinil on the black market, you may be overexerting your civic powers here, and your conscience can relax. If the matter at hand involves violence, and you can expect to save some people through your report, then maybe it’s better to get involved. For all matters in-between, both approaches may be valid in certain proportions, so apply common sense.
It was a misunderstanding, but for the record, what I thought was going on was far worse than buying modafinil, or any other illicit substance, and sort of indirectly involves violence.
A site mod theoretically has IP addresses that allow him to pick the right country when reporting to the police. As a result it makes sense to report to a mod.
If you contact the police and they then contact the site mod, things can get messier if the mod doesn't reply fast enough.
Economics, bias and fallacies.
I thought this comment was pretty good.
Yep. Someone else downvoted this. I agree with downvoting, because of the lack of description or information given with the random link.
Seems to work fine for reddit...
If I wanted to be on reddit I would be on reddit.
“Nobody made a greater mistake than he who did nothing because he could do only a little.”—Edmund Burke
If that’s a related quote you should say so; if that’s a meta comment about my comment you should also say so.
Downvoted this: I could give you an equally pretty-sounding quote about walking blindly forward or repeating the teacher's password, but I am too lazy to find a link. This is the Internet; don't be cryptic, be obvious, be helpful and be clear.
How about this description:
And the most important part of sharing a link:
That’s fucking teamwork.
Regarding the recent development in the US. To me it seems marriage shouldn’t be part of the legal system at all. If anything, legal marriage is a legacy of the days when women were treated as chattel.
EDIT: I don't know why this comment received downvotes, but maybe some readers took it to be criticism of same-sex marriage. That could hardly be further from the intent! Allowing same-sex marriage is a great improvement that I applaud, but abolishing legal marriage entirely would be even better.
EDIT: I was asked to provide reasoning for my position. Well, it seems to me that in some sense the burden of proof is on the other side. In general, the less complexity we have in the legal system the better. IMO the primary reason marriage is a legal status is historical: in previous eras, a woman who entered into marriage became something close to a slave of her husband, and this relationship was legally important for approximately the same reasons property rights are legally important. Nowadays there are all sorts of laws associated with marriage but IMO they are all better implemented differently. To put in other words, if we had to reinvent the state without relying on history, I see no reason we would have invented legal marriage at all.
Practically speaking, even after resolving the issue of same sex marriage we still have the issue of polyamorous marriage. And trying to legalize it would lead to all sorts of complications (How do we formalize polyamorous marriage? Is it a graph? Is it a system of subsets? Are the subsets disjoint?) It seems much simpler to get rid of the entire concept.
Of course, if I’m missing some really good arguments why we should have legal marriage after all, I would be glad to hear them.
EDIT: To clarify, my comment’s intent was starting a discussion rather than stating a final verdict on the subject. Obviously a well-grounded conclusion would require a much deeper analysis than the few paragraphs above.
You voiced a political opinion on LW and provided no proper reasoning for why other people should agree with you. You didn't steelman the opposing side and show why you think they are wrong.
Fair enough. See edit.
If this were Reddit, I would reach for the downvote button. Since it is not, I will take a shot at explaining why this type of argument is problematic. The Past is a big place, ranging from the beginnings of written history to the recent minute, and spanning the whole planet. On the conservative side there is an erroneous tendency to glorify the whole of this range, and on the progressive side there is an equally erroneous tendency to vilify the whole of it. These tendencies come from various philosophies of history: "kali yuga" in the first case and "whig theory" in the second, which is yours. Both simply put the politics of today into an unrealistic perspective. Both errors set the mood of discussing political changes as a distorted "one more step away from our glorious past" vs. "one more step away from the horrors of our past".
Your example illustrates this meta-problem excellently. The last time I remember men actually being allowed to sell their wives to slave traders was Pagan Rome. What matters of the past for current politics is largely the last 250 years of largely Western nations, i.e. the Enlightenment era, where none of the actual characteristics of slavery were present in marriage. What there was instead was, broadly, the status of women in marriage as minors, not slaves, i.e. comparable to children, and even that was changing as early as 1809-1848 in the US and similar developed nations. So, put plainly, in the kind of past that matters, the kind that is relevant because it affects the present through the weight of being an established tradition, it is not so. None of your grandmothers even remembers what it was like to not own property in a marriage and similar things. Non-equality does not imply being a slave, unless you felt like a slave at 17.
You seem to define slavery as the right to sell slaves. This is usually called “chattel slavery” because it is a very small fraction of all the people called “slaves” throughout history.
It is true that a Roman husband had great rights over his wife, but that has nothing to do with marriage. The husband simply assumed the rights previously held by the father, the same rights the father had over his sons.
This is true, but it is also true that non-chattel slavery used to have a lot of other names as well: serfdom, indentured servitude, etc. I generally don't know many examples where non-chattel slavery did not have some other name as well.
No, I am not talking about serfs and indentured servants. I am talking people called slaves. Almost every example where you think slaves are chattel is because you are wrong about history. For example, the great diversity of slaves in the Bible are not chattel.
I mostly agree with the object-level statements. IMO an adult treated as a minor qualifies as "something close to a slave", but let's not argue over terminology.
The problem with your view is that you see marriage as being about the people who marry. In reality it is largely about their children. Even gay marriage is seen as a way to pave the way to allowing adoption / surrogate parenthood and thus enabling gays to have full families, including children, although not necessarily biologically theirs; at least that is part of the story, although using it as a vehicle for social validation, and some weird US-specific rules like hospital visits, play a role too. While childfree and old people marry too, this is broadly the same as eating ice cream vs. actually eating a meal: the meat is missing. That does not mean it should not be allowed, because why not, but it also does not mean it is valid to see marriage as the institutionalization of a relationship between adults and then ask how we could build better institutions for that. Marriage is not primarily for adults. For adults the whole thing is simple: ideally everybody should be able to marry, but people who are dedicatedly childfree should probably realize there is no good reason to. There is hardly any good reason for two modern, income-earning people to pool resources unless one of them is becoming a housewife / househusband, and really the only good reason ever to do that is children; otherwise you are just being a maid. The primary thing marriage is optimized for is children. I predict most gay couples who bother with the whole marriage thing intend to adopt or have a surrogate child. Otherwise there would be little point to it.
Gay marriage does not hurt children but abolishing marriage would. It would be one step towards making it less and less sure that children will always have their mother and father, and their property, around.
The answer to poly marriage is that first figure out how to sort out parenthood and then you will have your answer. If you would see it as an “it takes a village to raise a child” kind of setup, sure, just consider it a group thing, everybody pooling their property for the sake of raising children, no matter who is the father or the mother. I think Robert Heinlein proposed this in The Moon Is A Harsh Mistress in 1966. However if you think even in a poly thing primarily the two biological parents would be responsible for the children, things could really get a bit complicated.
In short, I think you really need to update the view that marriage is about two or more adults wanting, for some weird reason, to make their love institutional. No, it is primarily about children, own or adopted; due to the social customs associated with it, it is often used for other purposes, but that is not the main purpose.
I should add that in the wedding ceremony where my wife and I are from, this is halfway explicit. After the vows we gave our parents flowers / wine, thanking them for raising us; this can be seen as the childhood being over (at 34 it was about time) and now we are going to take up the mantle of becoming parents and continuing the family lines. During the dinner and party, people kept asking when we plan to have the first kid. So the general mood was "nice that you guys chose to reproduce" and not something like "nice that you guys made your love public". I don't need to make my love public, and I could do that without wearing a ridiculous penguin costume anyway...
I don’t understand why. Possibly you misunderstood me: I was arguing for abolishing legal marriage, not abolishing the cultural institution of marriage. I am not legally married to my wife, we have a 5 year old son and everything seems to be ok.
It does not make much of a difference. In the jurisdictions I am familiar with, cohabitation, especially with a child, is practically interpreted as marriage; for example, in case of separation, commonly acquired property gets split, etc. Let me ask: precisely what aspect of legal marriage do you object to? Because there is a chance your cohabitation already has that legally.
No. If you call for the abolition of a significant public institution, you have to provide proof.
You haven't shown how handling every single aspect in which marriage is involved with a new rule will reduce complexity.
That's an argument from ignorance. "I'm too stupid or uninformed about the subject to think of arguments for the opposing side" is not something that should encourage people to adopt your position. Or to reference the sequences: Policy Debates Should Not Appear One-Sided
I don’t think we are going to make progress without going to object level.
On LW rational debate is a core goal. How to reason about political issues matters more than the question of whether or not marriage should be abolished.
Posts that advocate for good political ideas but do so in an irrational way have no place on LW.
Rational debate is indeed a core goal. Object level arguments are essential to rational debate in most cases. Avoiding ad hominem subtext is also important.
You said that you can’t think of any reason. I can’t address that without using the word “you”.
There are indeed two options:
1) You didn’t put enough time into understanding the subject.
2) You lack ability to understand it.
Okay, maybe there's a third:
3) You lied about not seeing any reason
“Argument from ignorance” does not have any place on LW. Discouraging it by calling it out is valuable. It’s not something that should stand unchallenged. Not just because it’s wrong, but for garden purposes. It lowers the quality of the debate.
I disagree. It seems to me completely rational to say “Guys, why are we doing X? It looks like there was a reason why we were doing X before but the reason is irrelevant by now and we still keep doing it. Since I see no reason to keep doing it, I suspect it is pure inertia and we should stop doing it. If there is a reason I missed, please point it out”.
Imagine you start working at a software company, and you discover the codebase is a jumble of spaghetti. You say "What is going on here? Why don't we remove all of this legacy code?" and the other person goes "This is arguing from ignorance; the fact that you don't know why we need this code doesn't mean there is no reason!"
Instead the other person should have either
a. Agreed that we need to schedule refactoring
or
b. Explained the reasons why we need all this complex code
And in case b it might still turn out the reasons are mere rationalisations i.e. the code would never have been written this way if we wrote the system from scratch. Or not. But establishing which requires an actual object level debate.
https://en.wikipedia.org/wiki/Wikipedia:Chesterton’s_fence
http://unenumerated.blogspot.com/2012/08/proxy-measures-sunk-costs-and.html
I think the true issue here is that you may not have much trust in other people's rationality. In this example you sound like you work from the assumption that they have no reasons at all, while in your opinion on marriage it sounds like people of "previous eras" (too unspecific) had largely unethical reasons (marriage-as-slavery).
Well, this sounds like me when I was 20 :) But what I have learned since is that it is better to assume people are neither stupid nor evil unless evidenced otherwise. Now of course this sounds entirely trivial, but at 20 I did not realize the full extent of that principle of charity. Namely, it also implies that people may have entirely valid reasons of which I am entirely ignorant, which in turn implies I am not as smart and knowledgeable as I like to think. I had to realize the whole chain of it. Starting from liking to think I am smart and knowledgeable, when I was younger I too easily went from "I don't understand the reasons for this" to "there are no reasons, or only stupid or evil ones", and this led to me ignoring the principle of charity and implicitly thinking other people are stupid and / or evil.
Another thing I have learned since is that reasons are not always explicit. I learned to accept reasons like "because we tried stuff, and this one worked; we have no idea why, but it did".
I do not assume other people are stupid or evil. However, in this particular case my best current hypothesis is that the reasons are mostly historical. That said, I will gladly update on information to the contrary.
It’s rational to say that about a topic that you don’t understand. It’s no sin to not put significant time into understanding every topic one wants to speak about and asking other people for insights.
If you start working at a company, you are ignorant about why the company acts the way it does. If you are just starting at a company, you haven't put significant time into understanding its inner workings.
You didn't focus on asking a question. Your post doesn't contain any question marks except in the part about polyamorous marriage.
There's a huge difference between "I don't know why we do X, so we shouldn't do it" and "Can you please explain to me why we do X?"
Fair enough. See edit.
When all is well and people are living peacefully and amicably, you don’t really need the law. When problems come up, you want clear laws detailing each party’s rights, duties, and obligations. For example, when a couple lives together for a decade while sharing assets and jointly building wealth, what happens when one party unilaterally wants to end the relationship? This situation is common enough that it’s worth having legal guidelines for its resolution.
The various spousal privileges are also at issue. Sure, you can file all kinds of paperwork to grant the individual legal rights to a romantic partner. At this point the average person needs to consult an attorney to make sure nothing is missed. What happens when someone doesn’t? You can expedite the process by drafting a special document that allows all these rights to be conferred as part of a package deal, but now you’re on the verge of reinventing marriage.
The legal issues surrounding the circumstances of married life will still remain whether marriage is a legal concept or no.
But if different groups (e.g. different churches or other kinds of organizations) hired lawyers to prepare different standardized packages, they would be able to offer different kinds of contracts that would correspond to that group’s concept of marriage. That would give an individual more freedom to choose and would make it unnecessary to solve issues of non-traditional marriages at the political level, and I think that making things less political is usually a good thing.
Of course, there would be more legal paperwork, and, as you’ve mentioned, there are various risks related to that, in addition to other things.
The legal issues remain, but I see no reason to delegate them to the government. The people involved should be able to come up with any contract they like, regardless of their gender, number or the nature of their relationship. After all, we don’t have special legal status for relationships between landlord and tenant, employer and employee etc.
Do you think that a state shouldn’t give spouses special immigration rights? What about spousal rights when it comes to making medical decisions for an incapacitated partner?
Regarding medical decisions, I agree with Sarunas: one should have the ability to assign this right to anyone.
Regarding immigration rights, it seems reasonable to take romantic relationships and even more so common children into consideration when granting such rights. I’m not sure we gain anything here by having a legal status called “marriage”.
It is not strictly necessary that all these rights should go to the same person, nor is it necessary that such rights be tied to marriage. It is simpler that way, but it does not seem to be strictly necessary. For example, a person could designate another person (whom they trust and who doesn't have to be their spouse; it could be a sibling, a parent, or simply a friend they respect) to make medical decisions in such cases, and that would be analogous to a testator being able to name an executor of his/her will. If, in a similar way, other legal things that are currently associated with marriage were decoupled from it, and each such right or duty went to a designated person (not necessarily the same one in all cases), marriage wouldn't require any government involvement.