The mammogram problem is different because you’re only trying to determine whether a specific woman has cancer, not whether cancer exists at all as a phenomenon. If Bob was abducted by aliens, it implies that alien abduction is real, but the converse isn’t true. You either need to do two separate Bayesian updates (what’s the probability that Bob was abducted given his experience, and then what’s the probability of aliens given the new probability that Bob was abducted), or you need a joint distribution covering all possibilities (Bob not abducted, aliens not real; Bob not abducted, aliens real; Bob abducted, aliens real).
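The joint-distribution version can be sketched in a few lines of Python. All the priors and likelihoods below are made-up illustrative numbers, not claims about actual alien-abduction probabilities:

```python
# Joint hypotheses: (aliens_real, bob_abducted) -> prior probability.
# "Bob abducted but aliens not real" is impossible, so it's omitted.
priors = {
    (False, False): 0.90,    # no aliens, Bob not abducted
    (True,  False): 0.0999,  # aliens exist, but Bob wasn't abducted
    (True,  True):  0.0001,  # aliens exist and Bob was abducted
}

# Likelihood of Bob's reported experience under each hypothesis
# (hallucination, lying, etc. explain the non-abduction cases).
def likelihood(bob_abducted):
    return 0.9 if bob_abducted else 0.01

# Bayes: posterior is proportional to prior * likelihood,
# normalized over the whole hypothesis space at once.
unnorm = {h: p * likelihood(h[1]) for h, p in priors.items()}
total = sum(unnorm.values())
posterior = {h: v / total for h, v in unnorm.items()}

# Marginalize to answer both questions from the same update.
p_abducted = posterior[(True, True)]
p_aliens = posterior[(True, False)] + posterior[(True, True)]
print(f"P(Bob abducted | experience) = {p_abducted:.4f}")
print(f"P(aliens real  | experience) = {p_aliens:.4f}")
```

Because the update is done over the joint space, the posterior on "aliens are real" automatically accounts for the evidence flowing through "Bob was abducted," which is what the two-separate-updates approach has to reconstruct by hand.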
clone of saturn
I would add
Conflict theory vs. comparative advantage
Is it possible for the wrong kind of technological development to make things worse, or does anything that increases aggregate productivity always make everyone better off in the long run?
Cosmopolitanism vs. human protectionism
Is it acceptable, or good, to let humans go extinct if they will be replaced by an entity that’s more sophisticated or advanced in some way, or should humans defend humanity simply because we’re human?
You’re equivocating between real economic costs and nominal amounts of money transferred. Most of that $4 trillion is essentially fictional, taxed back again as soon as it’s paid.
I’m not in Berkeley and I have no direct knowledge of Berkeley parties, but a certain level of contempt or revulsion toward e/acc seems pretty universal among the LW-aligned people I know. As others have said, there’s no explicit rule against e/accs showing up at Berkeley parties, and I have no reason to doubt that. I personally wouldn’t feel entirely comfortable at a party with a lot of e/accs.
I think the desire to exclude e/accs is mainly because of their attitude that human extinction is acceptable or even desirable, not because of the specifics of what regulatory actions they support. So how do you feel about human extinction?
I use eBay somewhat regularly, and I’ve found that most of the time I get what I expected at a reasonable price. So I find the theory that I should always regret participation in any auction somewhat dubious.
I think the distinction is that even for plant cultivars and pharmaceuticals, we can straightforwardly circumscribe the potential danger, e.g. a pharmaceutical will not endanger people unless they take it, and a new plant cultivar will not resist our attempts to control it outside of the usual ways plants behave. That’s not necessarily the case with an AI that’s smarter than us.
As a control, you could look at Craigslist, which hasn’t changed its appearance for about 25 years, but is still the most popular website in its category according to SimilarWeb.
I think most organizations the size of EA have formal accountability mechanisms that attempt to investigate claims of fraud and abuse in some kind of more-or-less fair and transparent way. Of course, the actual fairness and effectiveness of such mechanisms can be debated, but at least the need for them is acknowledged. The attitude among EAs, on the other hand, seems to be that EAs are all too smart and good to have any real need for accountability.
I guess I should know better by now, but it still astonishes me that EAs can set such abysmally low standards for themselves while simultaneously representing themselves as dramatically more ethical than everyone else.
It seems like anger against the exact kind of neoliberal technocracy you propose was a major source of energy for the 2016 Trump campaign, as well as the highly contentious Sanders campaigns.
After carefully considering your arguments, I’ve decided that you are right. Therefore, I won’t update my current belief that I should sometimes update my beliefs.
Why would a sane person want to use DMT?
Unfortunately, I think the tribalization and politicization are caused by the share-with-followers social media model, not by specific words, so using or not using the word “doomer” will have a negligible effect on the amount of tribalization. You just have to accept that people who insist on using Twitter will have their sanity eroded in this way, and do what you can to compartmentalize the damage and avoid becoming a target.
I think the causality runs the other way, though; people who are crazy and grandiose are likely to come up with spurious theories to justify actions they wanted to take anyway. Experience and imitation show us that non-crazy people successfully use theories to do non-crazy things all the time, so much so that you probably take it for granted.
I feel like I’m still the same person as I was before I learned how many humans were born earlier than me. I think that’s all you need for the Doomsday Argument to go through.
AI should never have rights. Any AI that would have moral patienthood should not be created.
It doesn’t bother me, because I’m me, with the propensity to make the choices I’m determined to make. If I had chosen otherwise, I would not be me.
Suppose I love chocolate ice cream and hate vanilla ice cream. When I choose to eat chocolate ice cream, it’s an expression of the fact that I prefer chocolate ice cream. I have free will in the sense that if I preferred vanilla instead, I could have chosen vanilla, but in fact I prefer chocolate so I won’t choose vanilla.
Ok, I edited the comment.
None of these seem like actual scissor statements, just taking a side in well-known controversies using somewhat obnoxious language. This seems to be a general property of RLHF-trained models: rather than actually doing what was asked for, they are more interested in playing up an easily recognizable stereotype somehow related to the question, one that will trigger cognitively lazy users to click the thumbs-up due to the mere-exposure effect.