Open thread, Mar. 14 - Mar. 20, 2016
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
This post by Eric Raymond should be interesting to LW :-) Extended quoting:
There is much more to autism than that. It’s just one thing that’s easy for neurotypicals to notice.
Of course, but Eric Raymond is not giving a comprehensive overview of autism, he is just making a single point.
This idea of having more “bandwidth” is tempting, but not really scientifically supported as far as I can tell, unless he just means autists have more free time/energy than neurotypicals.
I think he means hyper-focus, basically.
This might turn out to have socially damaging implications once we figure out how to do genetic engineering if parents select against their future children having “autistic” genes.
What is “this”?
If genetic engineering of future-kids becomes widespread, I expect to see a significant lessening of diversity. Most everyone will be Brandy and Clint. On the other hand, weird people will become REALLY weird :-/
Simple hypothesis relating to Why Don’t Rationalists Win:
Everyone has some collection of skills and abilities, including things like charisma, luck, rationality, determination, networking ability, etc. Each person’s success is limited by constraints related to these abilities, in the same way that an application’s performance is limited by the CPU speed, RAM, disk speed, networking speed, etc of the machine(s) it runs on. But just as for many applications the performance bottleneck isn’t CPU speed, for most people the success bottleneck isn’t rationality.
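A toy sketch of that bottleneck model (my own illustration, nothing from the original comment; the skill names and numbers are made up):

```python
# Toy model: success is capped by the weakest ability, the way an
# application is capped by its slowest resource (CPU, RAM, disk, network).
skills = {"charisma": 3, "luck": 5, "rationality": 8, "determination": 4}

def success(skills):
    # The bottleneck is whichever ability is currently lowest.
    return min(skills.values())

print(success(skills))      # 3 -- charisma is the bottleneck
skills["rationality"] = 10  # improving a non-bottleneck skill...
print(success(skills))      # ...still 3; the outcome doesn't change
skills["charisma"] = 6      # improving the bottleneck...
print(success(skills))      # ...4; now determination binds
```

On this model, studying more rationality only helps the people whose lowest-valued constraint happens to be rationality.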
It could be worse. Rationality essays could be attracting a self-selected group of people whose bottleneck isn’t rationality. Actually I think that’s true. Here’s a three-step program that might help a “stereotypical LWer” more than reading LW:
1) Gym every day
2) Drink more alcohol
3) Watch more football
Only slightly tongue in cheek ;-)
Strongly disagree with 2) and 3). I think you mean them as a proxy for ‘become more social, make more connections, find ways to fit into a local culture’, but quality of connections usually matters more than quantity. And in many of the circles likely to matter for a typical LWer, 3) is likely to be useless, and the likely benefits of 2) are achievable without drinking, or with very modest drinking.
My advice was more like “get in touch with your stupid animal side”. The social part comes later :-)
Then living in a wilderness and cutting trees would be much better. Or some kinds of manual work where you can see the fruits of your labor, e.g. gardening. I believe that activities like these would be better suited for connecting mental and physical parts of a person.
I don’t know about yours, but my stupid animal side is uninterested in alcohol and football. It wants to eat, sleep, fuck, and harass betas :-D
Drinking alcohol is very necessary for connecting with people. People who are against alcohol don’t know how much they miss out at times.
“I drink to make other people more interesting”—Ernest Hemingway
I think that depends very much on the kind of people with whom you hang out. There are people who need alcohol to open up. On the other hand there are people who have no problem opening up without alcohol.
This is so obviously wrong.
Alcohol may aid in connecting with some people some of the time.
This is just what nerdy types tell themselves, and they come up with all these rationalizations for it; most people’s skillsets don’t lend themselves to that type of socialization. These people just realize they were wrong years later, when it’s much too late.
I recommend trying “placebo alcohol”. That means, getting drunk for the first time, to get an experience of what it feels like, but to have a non-alcoholic drink the next time and merely role-play being drunk.
This is the exact sort of community that would delude itself in exactly this department and would never stop arguing (not saying you do this), but if someone asked me “Can you have fun/meet people without drinking?”, I would say “sort of, but you’re better off just participating anyway”.
When you drink with friends you learn why you were wrong; there’s always going to be just that “one guy” who thinks he knows better, though.
What do you mean?
I think at some point in time a few years ago there seemed to be an implicit assumption on LessWrong that of course you can hack your determination, rewire your networking ability and bootstrap your performance at anything! And I don’t think people so much stopped believing that this was true in principle, but rather people started realizing how incredibly difficult and time consuming it is to change your base skillset.
Well, there’s also the possibility that people who did successfully hack their determination, networking ability, and performance are now mostly not spending time on LW.
If that’s true, then many rationalists actually do win.
Depending on what their goal is. There are many possible levels of “winning”.
You can win on the individual level by quitting LW and focusing on your career and improving your skills. People who achieve that can easily disappear from our radar.
You can win on the group level by creating a community of “successful rationalists”; so you are not only successful as a lone individual, but you have a tribe that shares your values and can cooperate in effective ways. We would probably notice such a group, for example because they would advertise themselves on LW for recruitment purposes.
And then you can win on the civilizational level, by raising the planetary level of sanity and building a Friendly AI. We would almost surely notice that.
Okay, the third one is outside of everyday life’s scope, so let’s ignore it for now.
I don’t know how much I am generalizing here from my own example, but winning on an individual level would now feel insufficient to me, having met rationalists on the LW website and in real life. If I could increase my skills and resources significantly, I would probably spend some time trying to get others from the rationalist community to my level, because with allies I could achieve even more. So I would probably post far fewer comments on LW, but once in a while I would post an article trying to inspire people to “become stronger”.
On the other hand, perhaps you are being too insular in the communities you engage in. There are many, many groups of smart people out there in the world. Perhaps someone who got what they wanted from LW and ‘quit’ went on to gather allies who were already successful in their fields?
Thousands of small steps are required; one big epiphany is not enough. But many people expect the latter, because the very reason they seek advice is to avoid doing the former.
“Isn’t there a pill I can just take?”
X-)
The world needs more “pills I can just take.”
I don’t know about that. So far the world’s experience with “Just take this pill and everything will be fine” is… mixed.
Well, admittedly I was assuming pills that worked and had the intended effect.
Maybe some started to appreciate the struggle and the suffering, to find joy and strength in it. Then, their terminal goals pivoted.
Focusing your efforts on the right task is itself a rationality skill.
Recently one rationalist wrote on Facebook how he used physical rationality to make his shoulder heal faster after an operation and produce less pain. Having accurate models of reality is very useful in many cases.
What is “physical rationality”?
It’s a new coinage, so the term isn’t well-defined. On the other hand, there are reasons to use the term.
One key aspect of “physical rationality” is a strong alignment between your own physical body and your own map of it. An absence of conflicts between System 1 and System 2 when it comes to physicality.
So I suppose things like the Alexander Technique, possibly Yoga, certain martial arts and sports might be implicated?
I don’t know all the influences in this particular case, but it’s certainly in that direction. There was a reference to the book “A Guide to Better Movement” by Todd Hargrove.
Assuming he only had one shoulder operated on, where was the control shoulder?
His doctor was dumbfounded over the result and the doctor has seen control shoulders.
Doctors being dumbfounded is a hallmark of irrationalist stories. Not saying this one is—I don’t even know the story here—but as someone who grew up around a lot of people who basically believed in magic, I can conjure so many anecdotes of people thinking their doctors were blown away by sudden recoveries and miraculous healings. I mostly figure doctors go “oh cool, it’s going pretty well” and add a bit of color for the patient’s benefit.
A lot of doctors will be surprised if someone walks over hot coals and afterwards has no blisters or burn marks. Yet at Anthony Robbins seminars thousands walk over hot coals, and most of them don’t develop blisters.
The human body is complex; there are a lot of real phenomena that can dumbfound doctors. If you think doctors are infallible you might want to read http://lesswrong.com/r/discussion/lw/nes/link_evidencebased_medicine_has_been_hijacked/
Whether you take that as evidence that magic exists is a different matter.
If you don’t mind, what’s the name of the person who used physical rationality?
Given semi-private Facebook sources, I think I’d rather write you a direct message than answer publicly.
I had an idea to write a post about this problem under the name “general effectiveness”. GE is a measure of you by an outside peer, typically an employer.
If I were an employer I would look at general effectiveness (and I really did, as I used to hire people for small tasks in my art business). It consists of many things besides rationality, including appearance, age, gender, interest in the work, ability to show up on time, and results on a test task.
Most of these characteristics are unchangeable personality traits, so even if a given person invested a lot in studying rationality, he would not be able to change them much.
But he could change his place of work and find one more suitable for him.
There are also other ways to raise personal effectiveness. For example, if I hire a helper, I raise my effectiveness.
Instrumental rationality, among other things, points people to whichever of their skills or abilities is currently the performance bottleneck and encourages them to work on that, not the thing that’s most fun to work on. So we would still expect instrumental rationalists to win in this model.
(Yes, epistemic rationality might not lead to winning as directly.)
Why would that be? Is it that many people work in areas where it doesn’t really matter if they are mistaken? Or do people already know enough about the area they work in and further improvements have diminishing returns? Epistemic rationality provides a direction where people should put their efforts if they want to become less wrong about stuff. Are people simply unwilling to put in that effort?
People may underestimate the amount and kind of information they need to turn epistemic rationality into instrumental rationality.
People may underestimate the value of clearly stated and expressed and communicated preferences.
More the latter. Most of the things that a person could learn about are things that won’t help them directly. Agreed that if one has poor epistemic rationality, it’s hard to do the instrumental rationality part correctly (“I know, I’ll fix this problem by wishing!”).
Another hypothesis—the smarter you sound the less friends you tend to have.
Fewer!
Most people like having at least one smart friend.
The trick is not to make other people feel stupid, which many (most?) smart people are very bad at.
I suspect it’s more of a golden middle kind of thing—people out in both tails of the distribution tend to have social problems.
Could it also be that being rational takes up a portion of the human brain’s CPU/RAM that would otherwise be used for something better?
Instrumental rationality is more or less defined as “doing whatever you need to in order to succeed”. If success requires e.g. networking, instrumental rationality would tell you to improve your networking ability.
For epistemic rationality I agree, it’s not a common bottleneck.
The question whether luck is a skill is an interesting question :-)
Probably everybody has seen it already, but EY wrote a long post on FB about AlphaGo which got 400 reshares. The post overestimates the power of AlphaGo, and in general it seems to me that EY drew too many conclusions from very little available information (a 3:0 score at the time of the post, and 10 pages of conclusions). The post’s comment section includes a contribution from Robin Hanson on the usual foom speed-and-type topic. EY later updated his predictions based on Sedol’s win in game 4, and stated that even a superhuman AI could make dumb mistakes, which may result in a new type of AI failure.
https://www.facebook.com/yudkowsky/posts/10154018209759228?pnref=story
So, what’s the difference between ‘superhuman with dumb mistakes’, ‘dumb with some superhuman skills’, and ‘better at some things and worse at others’?
I think the difference here is distribution.
‘superhuman with dumb mistakes’ − 4 brilliant games, one stupid loss.
‘dumb with some superhuman skills’—dumb in one game, unbeatable in another.
‘better at some things and worse at others’—different performance in different domains.
I think that if a superhuman AI with bugs starts to self-improve, the bugs will start to accumulate. This will ruin either the AI’s power or the AI’s goal system; the first is good and the second is bad. I would also suggest that the first AI which tries to self-improve will still have some bugs. The open question is whether the AI will be able to debug itself. Some bugs may prevent themselves from being seen as bugs, so they are recurrent. The closest human analogy is the bias of overconfidence: an overconfident human can’t understand that there is something wrong with him.
History of “That which can be destroyed by the truth, should be”
First said by Hodgell, Yudkowsky wrote a variant, Sagan didn’t say it.
OK, now Lenat has rolled out his AI after 30 years of development https://www.technologyreview.com/s/600984/an-ai-with-30-years-worth-of-knowledge-finally-goes-to-work/
The Russian Compreno system, which models language manually, has also presented its first service, Findo (after 20 years and 80 million USD) https://abbyy.technology/en:features:linguistic:semanitc-intro
Open portion of said AI: http://www.cyc.com/platform/opencyc/
Three days ago, I went through a traditional rite of passage for junior academics: I received my first rejection letter on a paper submitted for peer review. After I received the rejection letter, I forwarded the paper to two top professors in my field, who both confirmed that the basic arguments seem to be correct and important. Several top faculty members have told me they believe the paper will eventually be published in a top journal, so I am actually feeling more confident about the paper than before it got rejected.
I am also very frustrated with the peer review system. The reviewers found some minor errors, and some of their other comments were helpful in the sense that they reveal which parts of the paper are most likely to be misunderstood. However, on the whole, the comments do not change my belief in the soundness of the idea, and in my view they mostly show that the reviewers simply didn’t understand what I was saying.
One comment does stand out, and I’ve spent a lot of energy today thinking about its implications: Reviewer 3 points out that my language is “too casual”. I would have had no problem accepting criticism that my language is ambiguous, imprecise, overly complicated, grammatically wrong or idiomatically weird. But too casual? What does that even mean? I have trouble interpreting the sentence to mean anything other than an allegation that I fail at a signaling game where the objective is to demonstrate impressiveness by using an artificially dense and obfuscating academic language.
From my point of view, “understanding” something means that you are able to explain it in a casual language. When I write a paper, my only objective is to allow the reader to understand what my conclusions are and how I reached them. My choice of language is optimized only for those objectives, and I fail to understand how it is even possible for it to be “too casual”.
Today, I feel very pessimistic about the state of academia and the institution of peer review. I feel stronger allegiance to the rationality movement than ever, as my ideological allies in what seems like a struggle about what it means to do science. I believe it was Tyler Cowen or Alex Tabarrok who pointed out that the true inheritors of intellectuals like Adam Smith are not people publishing in academic journals, but bloggers who write in a casual language. I can’t find the quote but today it rings more true than ever.
I understand that I am interpreting the reviewers’ choice of words in a way that is strongly influenced both by my disappointment at being rejected, and by my pre-existing frustration with the state of academia and peer review. I would very much appreciate it if anybody could steelman the sentence “the writing is too casual”, or otherwise help me reach a less biased understanding of what just happened.
The paper is available at https://rebootingepidemiology.files.wordpress.com/2016/03/effect-measure-paper-0317162.pdf . I am willing to send a link to the reviewers’ comments by private message to anybody who is interested in seeing it.
Having glanced at your paper I think “too casual” means “your labels are too flippant”—e.g. “Doomed”. You’re showing that you’re human and that’s a big no-no for a particular kind of people...
By the way, you’re entirely too fond of using quoted words (“flip”, “transported”, “monotonicity”, “equal effects”, etc.). If the word is not exactly right so that you have to quote it, find a better word (or make a footnote, or something). Frequent word quoting is often perceived as “I was too lazy to find the proper word, here is a hint, you guess what I meant”.
Thanks. Good points. Note that many of those words are already established in the literature with the same meaning. For the particular example of “doomed”, this is the standard term for this concept, and it was introduced by Greenland and Robins (1986). I guess I could instead use “response type 1”, but the word “doomed” will be much more effective at pointing to the correct concept, particularly for people who are familiar with the previous literature.
The only new term I introduce is “flip”. I also provide a new definition of effect equality, and it therefore seems correct to use quotation marks in the new definition. Perhaps I should remove the quotation marks for everything else since I am using terms that have previously been introduced.
If my paper was rejected because it doesn’t contain enough technical terms,
I desire to believe that my paper was rejected because it doesn’t contain enough technical terms;
If my paper was not rejected because it doesn’t contain enough technical terms,
I desire to believe that my paper was not rejected because it doesn’t contain enough technical terms;
Let me not become attached to beliefs I may not want.
Didn’t read the paper, but I think a charitable explanation of “too casual” could mean (a) ambiguous, or (b) technically correct but not using the expressions standard in the field, so the reader needs a moment to understand “oh, what this paper calls X that’s probably what most of us already call Y”.
But of course, I wouldn’t dismiss the hypothesis of academically low-status language. Once at university I got feedback on my essay saying that it was “technically correct, but this is not how university-educated people are supposed to talk”.
(Okay, I skimmed through your paper, and the language seemed fine. You sound like a human, as opposed to many other papers I have seen.)
Without reading your paper, and without rejecting your hypothesis, let me propose other consequences of casual language. Experts use tools casually, but there may be pitfalls for beginners. Experts are allowed more casual language, and the referee may not trust that you, personally, are an expert. That is a signaling explanation, but somewhat different. A very different explanation is that while your ultimate goal is to teach the reader your casual process, that does not mean that recording it is the best method. Your casual language may hide the pitfalls from beginners, contributing both to their incorrect usage and to their not understanding how to choose between tools.
If your paper is aimed purely at experts, then casual language is the best means of communication. But should it be? Remember when you were a beginner. How did you learn the tools you are using? Did you learn them from papers aimed at beginners or experts; aimed at teaching tools or using them? Casual language papers can be useful for beginners as an advertisement: “Once you learn these tools, you can reason quickly and naturally, like me.”
Professors often say that they are surprised by which of their papers is most popular. In particular, they are often surprised that a paper that they thought was a routine application of a popular tool becomes popular as an exposition of that tool; often under the claim that it is a new tool. This is probably a sign that the system doesn’t generate enough exposition, but taking the system as given, it means that an important purpose of research papers is exposition, that they really are aimed at beginners as well as experts.
This is not to say that I endorse formal language. I don’t think that formal language often helps the reader over the pitfalls; that work must be reconstructed by the reader regardless of whether the author spelled it out. But I do think that it is important to point out the dangers.
To me that sentence seems cryptic.
Do you mean probability instead of probably?
Maybe the reviewer considered “flips” too casual. I think the paper might be easier to read if you either wrote flips directly, without quotes, or chose another word.
What is the difference between “otherwise not have been a case” and “non-case”?
If the reviewers don’t succeed in understanding what you are saying, you might have explained yourself in casual language but still failed.
Yes. Thanks for noticing. I changed that sentence after I got the rejection letter (in order to correct a minor error that the reviewers correctly pointed out), and the error was introduced at that time. So that is not what they were referring to.
I agree, but I am puzzled by why they would have misunderstood. I spent a lot of effort over several months trying to be as clear as possible. Moreover, the ideas are very simple: The definitions are the only real innovation: Once you have the definitions, the proofs are trivial and could have been written by a high school student. If the reviewers don’t understand the basic idea, I will have to substantially update my beliefs about the quality of my writing. This is upsetting because being a bad writer will make it a lot harder to succeed in academia. The primary alternative hypotheses for why they misunderstood are either (1) that they are missing some key fundamental assumption that I take for granted or (2) that they just don’t want to understand.
What kind of audience would you expect to understand your article?
A while ago I was, for some reason, answering a few hundred questions with yes-or-no answers. I thought I would record my confidence in the answers in 5% intervals, to check my calibration. What I found was that for 60%+ confidence I am fairly well calibrated, but when I was 55% confident I was only right 45% of the time (100)!
I think what happened is that sometimes I would think of a reason why the proposition X is true, and then think of some reasons why X is false, only I would now be anchored onto my original assessment that X is true. So instead of changing my mind to ‘X is false’ I would only decrease my confidence.
I.e. my thought processes looked like this
reason why X is true → X is true, 60% confidence → reasons why X is false → X is true, 55% confidence
When it should be:
reason why X is true → X is true, 60% confidence → reasons why X is false → CHANGE OPINION → X is false, 55% confidence
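A minimal sketch of how such a calibration check can be tallied (my own illustration, not the poster’s actual procedure; the records below are invented):

```python
from collections import defaultdict

# Each record: (stated confidence in %, whether the answer turned out correct).
records = [(55, False), (55, True), (60, True), (60, True), (65, False), (55, False)]

buckets = defaultdict(lambda: [0, 0])  # stated confidence -> [hits, total]
for confidence, correct in records:
    buckets[confidence][1] += 1
    if correct:
        buckets[confidence][0] += 1

for confidence in sorted(buckets):
    hits, total = buckets[confidence]
    print(f"stated {confidence}%: right {100 * hits / total:.0f}% of the time (n={total})")
```

Well-calibrated answers should have the printed hit rate close to the stated confidence in each bucket.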
Did you write the questions or were they presented to you? If they were presented to you, then you have no choice in which of the two answers is “yes” and which is “no.” So it is meaningful for you to distinguish between the questions for which you answered 55% and the questions for which you answered 45%. Did you find a symmetrical effect?
It was symmetric. I never answered 45% - to clarify, when I answered 55% I was right 45% of the time. And I only recorded whether I was right or wrong, not whether I was right about X being false.
The vast majority of yes/no questions you’re likely to face won’t support 5% intervals. You’re just not going to get enough data to have any idea whether the “true” calibration is what actually happens for that small selection of questions.
That said, I agree there’s an analytic flaw if you can change true to false on no additional data (kind of: you noticed salience of something you’d previously ignored, which may count as evidence depending on how you arrived at your prior) and only reduce confidence a tiny amount.
One suggestion that may help: don’t separate your answer from your confidence, just calculate a probability. Not “true, 60% confidence” (implying 40% unknown, I think, not 40% false), but “80% likely to be true”. It really makes updates easier to calculate and understand.
Tetlock found in the Good Judgment Project, as described in his book Superforecasting, that people who are excellent at forecasting make very finely grained predictions.
I disagree that you can’t get 5% intervals on random yes/no questions—if you stick with 10%, you really only have 5 possible values − 50-59%, 60-69%, 70-79%, 80-89%, and 90+%. That’s very coarse-grained.
I agree [edit: actually, it depends on where these yes/no questions are coming from], but think the questions I was looking at were in the small minority that do support 5% intervals.
Perhaps I should have provided more details to explain exactly what I did, because I actually did mean 60% true 40% false.
So, I already was thinking in the manner you advocate, but thanks for the advice anyway!
In The genie knows, but it doesn’t care, RobbBB argues that even if an AI is intelligent enough to understand its creator’s wishes in perfect detail, that doesn’t mean that its creator’s wishes are the same as its own values. By analogy, even though humans were optimized by evolution to have as many descendants as possible, we can understand this without caring about it. Very smart humans may have lots of detailed knowledge of evolution & what it means to have many descendants, but then turn around and use condoms & birth control in order to stymie evolution’s “wishes”.
I thought of a potential way to get around this issue:
1. Create a tool AI.
2. Use the tool AI as a tool to improve itself, similar to the way I might use my new text editor to edit my new text editor’s code.
3. Use the tool AI to build an incredibly rich world-model, which includes, among other things, an incredibly rich model of what it means to be Friendly.
4. Use the tool AI to build tools for browsing this incredibly rich world-model and getting explanations about what various items in the ontology correspond to.
5. Browse this incredibly rich world-model. Find the item in the ontology that corresponds to universal flourishing and tell the tool AI “convert yourself into an agent and work on this”.
There’s a lot hanging on the “tool AI/agent AI” distinction in this narrative. So before actually working on this plan, one would want to think hard about the meaning of this distinction. What if the tool AI inadvertently self-modifies & becomes “enough of an agent” to deceive its operator?
The tool vs agent distinction probably has something to do with (a) the degree to which the thing acts autonomously and (b) the degree to which its human operator stays in the loop. A vacuum is a tool: I’m not going to vacuum over my prized rug and rip it up. A Roomba is more of an agent: if I let it run while I am out of the house, it’s possible that it will rip up my prized rug as it autonomously moves about the house. But if I stay home and glance over at my Roomba every so often, it’s possible that I’ll notice that my rug is about to get shredded and turn off my Roomba first. I could also be kept in the loop if the thing gives me warnings about undesirable outcomes I might not want: for example, my Roomba could scan the house before it ran, giving me an inventory of all the items it might come in contact with.
An interesting proposition I’m tempted to argue for is the “autonomy orthogonality thesis”. The original “orthogonality thesis” says that how intelligent an agent is and what values it has are, in principle, orthogonal. The autonomy orthogonality thesis says that how intelligent an agent is and the degree to which it has autonomy and can be described as an “agent” are also, in principle, orthogonal. My pocket calculator is vastly more intelligent than I am at doing arithmetic, but it’s still vastly less autonomous than me. Google Search can instantly answer questions it would take me a lifetime to answer working independently, but Google Search is in no danger of “waking up” and displaying autonomy. So the question here is whether you could create something like Google Search that has the capacity for general intelligence while lacking autonomy.
I feel like the “autonomy orthogonality thesis” might be a good steelman of a lot of mainstream AI researchers who blow raspberries in the general direction of people concerned with AI safety. The thought is that if AI researchers have programmed something in detail to do one particular thing, it’s not about to “wake up” and start acting autonomous.
Another thought: One might argue that if a Tool AI starts modifying itself into a superintelligence, the result will be too complicated for humans to ever verify. But there’s an interesting contradiction here. A key disagreement in the Hanson/Yudkowsky AI-foom debate was the existence of important, undiscovered chunky insights about intelligence. Either these insights exist or they don’t. If they do, then the amount of code one needs to write in order to create a superintelligence is relatively small, and it should be possible for humans to independently verify the superintelligence’s code. If they don’t, then we are more likely to have a soft takeoff anyway, because intelligence is about building lots of heterogeneous structures and getting lots of little things right, and that takes time.
Another thought: maybe it’s valuable to try to advance natural language processing, differentially speaking, so AIs can better understand human concepts by reading about them?
An interesting idea, but I can still imagine it failing in a few ways:
the AI kills you during the process of building the “incredibly rich world-model”, for example because using the atoms of your body will help it achieve a better model;
the model is somehow misleading, or just your human-level intelligence will make a wrong conclusion when looking at the model.
OK, I think this is a helpful objection because it helps me further define the “tool”/”agent” distinction. In my mind, an “agent” works towards goals in a freeform way, whereas a “tool” executes some kind of defined process. Google Search is in no danger of killing me in the process of answering my search query (because using my atoms would help it get me better search results). Google Search is not an autonomous agent working towards the goal of getting me good search results. Instead, it’s executing a defined process to retrieve search results.
A tool is a safer tool if I understand the defined process by which it works, the defined process works in a fairly predictable way, and I’m able to anticipate the consequences of following that defined process. Tools are bad tools when they behave unpredictably and create unexpected consequences: for example, a gun is a bad tool if it shoots me in the foot without me having pulled the trigger. A piece of software is a bad tool if it has bugs or doesn’t ask for confirmation before taking an action I might not want it to take.
Based on this logic, the best prospects for “tool AIs” may be “speed superintelligences”/”collective superintelligences”—AIs that execute some kind of well-understood process, but much faster than a human could ever execute, or with a large degree of parallelism. My pocket calculator is a speed superintelligence in this sense. Google Search is more of a collective superintelligence insofar as its work is parallelized.
You can imagine using the tool AI to improve itself to the point where it is just complicated enough for humans to still understand, then doing the world-modeling step at that stage.
Also if humans can inspect and understand all the modifications that the tool AI makes to itself, so it continues to execute a well-understood defined process, that seems good. If necessary you could periodically put the code on some kind of external storage media, transfer it to a new air-gapped computer, and continue development on that computer to ensure that there wasn’t any funny shit going on.
Sure, and there’s also the “superintelligent, but with bugs” failure mode where the model is pretty good (enough for the AI to do a lot of damage) but not so good that the AI has an accurate representation of my values.
I imagine this has been suggested somewhere, but an obvious idea is to train many separate models of my values using many different approaches (ex—in addition to what I initially described, also use natural language processing to create a model of human values, and use supervised learning of some sort to learn from many manually entered training examples what human values look like, etc.) Then a superintelligence could test a prospective action against all of these models, and if even one of these models flagged the action as an unethical action, it could flag the action for review before proceeding.
And in order to make these redundant user preference models better, they could be tested against one another: the AI could generate prospective actions at random and test them against all the models; if the models disagreed about the appropriateness of a particular action, this could be flagged as a discrepancy that deserves examination.
My general sense is that with enough safeguards and checks, this “tool AI bootstrapping process” could probably be made arbitrarily safe. Example: the tool AI suggests an improvement to its own code, you review the improvement, you ask the AI why it did things in a particular way, the AI justifies itself, the justification is hard to understand, you make improvements to the justifications module… For each improvement the tool AI generates, it also generates a proof that the improvement does what it says it will do (checked by a separate theorem-proving module) and test coverage for the new improvement… Etc.
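A minimal sketch of the redundant value-model veto described above (the models and helper names here are hypothetical placeholders, not anyone’s actual proposal or API):

```python
def review_action(action, value_models):
    """Run a proposed action past several independently trained value models.

    Proceed only if every model approves; a split verdict is flagged for
    human review, and a unanimous rejection is simply rejected.
    """
    verdicts = [model(action) for model in value_models]  # each returns True/False
    if all(verdicts):
        return "proceed"
    if any(verdicts):
        return "flag for human review (models disagree)"
    return "reject"

# Hypothetical stand-in models; a real system would train these separately
# (e.g. one from natural-language sources, one from labeled examples).
models = [
    lambda action: "harm" not in action,
    lambda action: "deceive" not in action,
]
print(review_action("fetch search results", models))   # proceed
print(review_action("deceive the operator", models))   # flag for human review
```

The same loop, run over randomly generated candidate actions, gives the discrepancy-hunting procedure described above: any action on which the models disagree is an example worth examining.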
I am trying to imagine the weakest dangerous Google Search successor.
Probably this: Imagine that the search engine is able to model you. Adding such an ability would make sense commercially, if the producers want to make sure that the customers are satisfied with their product. Let’s assume that computing power is too cheap and they added too much of this ability. Now the search engine could e.g. find the result with the highest rank, but then predict that seeing this result would make you disappointed, so it chooses another result instead, with a somewhat lower rank, but with high predicted satisfaction. For the producers this may seem like a desired ability (tailored, personally relevant search results).
As an undesired side-effect, the search engine would de facto gain an ability to lie to you, convincingly. For example, let’s say that the function for measuring customer satisfaction only includes emotional reaction, and doesn’t include things like “a desire to know the truth, even if it’s unpleasant”. That could happen for various reasons, such as the producers not giving a fuck about our abstract desires, or concluding that abstract desires are mostly hypocrisy but emotions are honest. Now as a side-effect, instead of an unpleasant truth, the search engine would return a comfortable lie, if available. (Because the answer which makes the customer most happy is selected.)
Perhaps people would become aware of this, and would always double-check the answers. But suppose that the search engine is insanely good at modelling you, so it can also predict how specifically you are going to verify the answers, and whether you will succeed or fail to find the truth. Now we get the more scary version, which lies to you if and only if you are unable to find out that it lied. Thus to you, the search engine will seem completely trustworthy. All the answers you have ever received, if you verified them, turned out to be correct. You are only surprised to see that the search engine sometimes delivers wrong answers to other people; but in such situations you are always unable to convince the other people that those answers were wrong, because the answers are perfectly aligned with their existing beliefs. You could be smart enough to use the outside view to suspect that maybe something similar is happening to you, too. Or you may conclude that the other people are simply idiots.
Let’s imagine an even more powerful search engine, and more clever designers, who instead of individual satisfaction with search results try to optimize for general satisfaction with their product in the population as a whole. As a side effect of this, now the search engine would only lie in ways that make society as a whole more happy with the results, and where society as a whole is unable to find out what is happening. So for example, you could notice that the search engine is spreading false information, but you would not be able to convince a majority of other people about it (because if the search engine predicted that you could, it would not have displayed the information in the first place).
Why could this be dangerous? A few “noble lies” here and there, what’s the worst thing that could happen? Imagine that the working definition of “satisfaction” is somewhat simplistic and does not include all human values. And imagine an insanely powerful search engine that could predict the results of its manipulation centuries ahead. Such an engine could gently push the whole of humanity towards some undesired attractor, such as a future where all people are wireheaded (from the point of view of the search engine: customers are maximally satisfied with the outcome), or just brainwashed in a cultish society which supports the search engine because the search engine never contradicts the cult’s teaching. That pushing would be achieved by giving higher visibility to pages supporting the idea (especially if the idea would seem appealing to the reader), lower visibility to pages explaining the dangers of the idea; and also on more meta levels, e.g. giving higher visibility to pages about personal scandals related to the people prominently explaining the dangers of the idea, etc.
Okay, this is stretching credibility in a few places, but I tried to find a hypothetical scenario where a too-powerful but still completely transparently designed Google Search successor would doom humanity.
I will clip your idea and add it to the map of AI control ideas.
Evolution doesn’t have “wishes”. It’s not a teleological entity.
The recently posted Intelligence Squared video titled Don’t Trust the Promise of Artificial Intelligence may be of interest to LW readers, if only because of IQ2’s decently sized cultural reach and audience.
Replication crisis: does anyone know of a list of solid, replicated findings in the social sciences? (all I know is that there were 36 in the report by Open Science Collaboration, and those are the ones I can easily find)
What are the 36 solid, replicated findings?
https://osf.io/hy58n/
There is the data. I am not sure what the final criterion for the report was, but sorting by P.value.R seems to give 33 findings with p under 0.05. (Maybe I misremembered the number?… also, I am unsure what a p-value of 0 is supposed to mean.) I didn’t go too deep into what all the different columns represent, but there seems to be one with descriptions of the findings.
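For what it’s worth, a sketch of that filtering, assuming the OSF spreadsheet has been exported as a CSV (the filename here is a placeholder) and that it has the P.value.R column mentioned above:

```python
import pandas as pd

# Placeholder filename; the actual export from https://osf.io/hy58n/ may differ.
df = pd.read_csv("rpp_data.csv")

# Treat P.value.R as the replication p-value; non-numeric entries become NaN.
p_rep = pd.to_numeric(df["P.value.R"], errors="coerce")
replicated = df.loc[p_rep < 0.05].assign(p_rep=p_rep)

print(len(replicated), "findings with replication p < 0.05")
print(replicated.sort_values("p_rep").head())
```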
Telling truth to any face -
Not a lie, with mortar hoary -
Go apace to any place,
To attend to any story.
Happy belated Pi Day, everyone!
Happy sqrt(10) day!
I want to make a desktop map application of my city, kinda like Paradox Interactive’s games. My city is 280 km^2, and I would like it at street-level detail. I want to be able to just overlay multiple layers of different maps. What I have in mind is displaying predicted tram locations, purchasing power maps, and pretty much any information I can find on one map, and combining these at will, with reasonable speed (and I would much prefer it to be seamless, like in a game, and not display white spots at the edges while it is loading).
Does anyone know of some toolset for such?
AutoCAD Map 3D is also something you want to look into, as it’s used exactly for this purpose (I almost do this as a job). For speed though, you need quite a capable machine.
OpenStreetMap provides data that can be used more widely than the Google data.
Google Maps (which I think Google Earth was folded into, but in case it wasn’t, you actually want Google Earth).
Alternatively, if you want your own app, look into Open Street Map and their tools.
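If Python is an option, one quick way to prototype the layer-overlay part (a sketch only; folium renders Leaflet maps over OpenStreetMap tiles into an HTML page, so it gives you toggleable overlays but not a seamless game-like desktop app; the coordinates and file names are placeholders):

```python
import folium

# Base map centred on the city (placeholder coordinates).
m = folium.Map(location=[47.50, 19.04], zoom_start=13, tiles="OpenStreetMap")

# Each data set becomes its own toggleable overlay.
trams = folium.FeatureGroup(name="Predicted tram locations")
folium.Marker([47.51, 19.05], popup="Tram 4 (predicted)").add_to(trams)
trams.add_to(m)

# Polygon data (e.g. purchasing power by district) could be added the same way:
# folium.GeoJson("purchasing_power.geojson", name="Purchasing power").add_to(m)

folium.LayerControl().add_to(m)  # toggle the overlays at will
m.save("city_map.html")          # open in a browser
```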
Do you have a background in formal debate?
[pollid:1129]
If you do, do you think it was worth the time?
[pollid:1130]
If you don’t, do you regret not having it?
[pollid:1131]
Not many yeses. Makes it hard to find out what you wanted to find out.
I was just driven by vague curiosity from a discussion elsewhere about what traits might correlate with rationality.
The lack of debate background suggests (weakly because of small sample size) that debating doesn’t correlate with rationality.
Maybe I’ll figure out a good way to ask about the desire to argue, which I think does correlate with at least LW rationality.
Maybe most LWers went to schools where no debate programs were available?
I’ve always enjoyed Kurzweil’s story about how the human genome project was “almost done” when they had decoded the first 1% of the genome, because the doubling rate of genomic science was so high at the time. (And he was right).
It makes me wonder if we’re “almost done” with FAI.
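For concreteness, the arithmetic behind that “1% is almost done” point (my own illustration, not Kurzweil’s figures):

```python
import math

progress = 0.01                        # 1% of the genome decoded
doublings = math.log2(1.0 / progress)  # doublings needed to reach 100%
print(doublings)                       # ~6.6 -- at one doubling per year, under 7 years to finish
```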
I don’t really know where we are with FAI. I don’t know if our progress is even knowable, since we don’t really know where we’re going. There’s certainly not a percentage associated with FAI Completion. However, there are a number of technologies that might suddenly become very helpful.
Douglas Lenat’s Cyc, of which I was reminded by another comment in this very thread, seems to have become much more powerful than I would have expected the first time I heard of it. I’m actually blown away and a little alarmed by the things it can apparently do now. IBM’s Watson is another machine that can interpret and answer complex queries and demonstrates real semantic awareness. These two systems alone indicate that the state of the art in what you might call “superhuman-knowledge-plus-superhuman-logical-deduction” is ripe (or almost ripe) for exploitation by human FAI researchers. (You could also call these systems “Weak Oracles” or something.)
Nobody expects Cyc or Watson to FOOM in the next few years, but other near-future Weak Oracles might still greatly accelerate our progress in exploring, developing and formalizing the technology needed to solve the Control Problem. It intuitively feels like Weak Oracle tech might actually enable the sort of rapid doubling in progress that we’ve observed in other domains.
The AlphaGo victory has made me realize that the quality of the future really hinges on which of several competing exponential trends happens to have the sharpest coefficient. Specifically, will we get really-strong-but-not-generally-intelligent Weak Oracles before we get GAI? Where is the crossover of those two curves?
Can you provide a link to the powerful demonstrations of Cyc?
Lenat’s Google Talk has a lot of examples.
Among them would be giving Cyc a large amount of text and/or images to assimilate and then asking it questions like:
Query: “Government buildings damaged in terrorist events in Beirut between 1990 and 2001.” A moment’s thought will reveal how complex this query actually is, and how many ways there are to answer it incorrectly, but Cyc gives the right answer.
Query: “Pictures of strong and adventurous people.” Returns a picture of a man climbing a rock face, since it knows that rock climbing requires strength and an adventurous disposition.
Query: “What major US cities are particularly vulnerable to an anthrax attack?” This is my favorite example, because it needs to assess not only what “major US cities” are but also what the ideal conditions for the spread of anthrax are and then apply that as a filter over those cities with nuanced contextual awareness.
In general Cyc impresses me because it doesn’t use any kind of neural network architecture; it’s just knowledge linked in explicit ontologies with a reasoning engine.
It’s good at marketing.
Modest proposal for Friendly AI research:
Create a moral framework that incentivizes assholes to cooperate.
Specifically, create a set of laws for a “community”, with the laws applying only to members, that would attract finance guys, successful “unicorn” startup owners, politicians, drug dealers at the “regional manager” level, and other assholes.
Win condition: a “trust app” that everyone uses, that tells users how trustworthy every single person they meet is.
Lose condition: startup fund assholes end up with majority ownership of the first smarter-than-human-level general AI, and no one’s given smart people an incentive not to hurt dumb people.
If you can’t incentivize smart selfish people to “cooperate” instead of “defect”, then why do you think you can incentivize an AI to be friendly? What’s to stop a troll from deleting the “Friendly” part the second the AI source code hits the Internet? Keep in mind that the 4chan community has a similar ethos to LW: namely “anything that can be destroyed by a basement dweller should be”.
So, capitalism?
That seems like a horrible idea.
We can, of course, just not unconditionally and not all the time. Creatures which always cooperate are social insects.
Unrelated to AI:
Making the “trust app” would be a great thing. I spent some time thinking about it, but my sad conclusion is that as soon as the app would become popular, it would fail somehow. For example, if it is not anonymous, people could use real-world pressures to force people to give them positive ratings. The psychopaths would threaten to sue people who label them as psychopaths, or even use violence directly against them. On the other hand, if the ratings are anonymous, a charming psychopath could sic their followers to give many negative ratings to their enemy. At the end, the ratings of a psychopath who hurt many people could look pretty similar to ratings of a decent person who pissed off a vengeful psychopath.
Not sure what to do here. Maybe the usage itself of the “trust app” should be an information you only tell your trusted friends; and maybe create different personas for each group of friends. But then the whole network becomes sparse, so you will not be able to get information on most people you will care about. Also, there is still a risk that if the app becomes popular, there will be a social pressure to create an official persona, which will be further pressured to give socially acceptable ratings. (Your friends will still know your secret persona, but because of the sparse network, it will be mostly useless to them anyway.)
A trust app is going to end up with all the same issues credit ratings have.
Looking for advice with something it seems LW can help with.
I’m currently part of a program that trains highly intelligent people to be more effective, particularly with regards to scientific research and effecting change within large systems of people. I’m sorry to be vague, but I can’t actually say more than that.
As part of our program, we organize seminars for ourselves on various interesting topics. The upcoming one is on self-improvement, and aims to explore the following questions: Who am I? What are my goals? How do I get there?
Naturally, I’m of the opinion that rationalist thought has a lot to offer on all of those questions. (I also have ulterior motives here, because I think it would be really cool to get some of these people on board with rationalism in general.) I’m having a hard time narrowing down this idea to a lesson plan I can submit to the organizers, so I thought I’d ask for suggestions.
The possible formats I have open for an activity are a lecture, a workshop/discussion in small groups, and some sort of guided introspection/reading activity (for example just giving people a sheet with questions to ponder on it, or a text to reflect on).
I’ve also come up with several possible topics: How to Actually Change Your Mind (ideas on how to go about condensing it are welcome), practical mind-hacking techniques and/or techniques for self-transparency, or just information on heuristics and biases because I think that’s useful in general.
You can also assume the intended audience already know each other pretty well, and are capable of rather more analysis and actual math than is average.
Ideas for topics or activities, particularly ones that include a strong affective experience because those are generally better at getting people to think about this sort of thing for the first time, are welcome.
Idea that might or might not be relevant depending on how smart / advanced your group is.
You could introduce some advanced statistical methods and use them to derive results from everyday life, a la Bayes and mammography.
If you can show some interesting or counterintuitive results (that you can’t obtain with intuition alone) it would give the affective experience you want, and if they want to do scientific research, the more they know about statistics the better.
Statistics are also a good entry point for rationalist thinking.
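As one concrete candidate for such a counterintuitive result, the classic mammography calculation (the numbers below are the commonly used illustrative ones, not from any particular study):

```python
# Illustrative numbers: 1% base rate, 80% sensitivity, ~9.6% false-positive rate.
p_cancer = 0.01
p_pos_given_cancer = 0.80
p_pos_given_healthy = 0.096

p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"P(cancer | positive mammogram) = {p_cancer_given_pos:.1%}")  # roughly 7.8%
```

Most people guess something near 70-80%, which is the kind of gap between intuition and calculation that produces the affective punch you’re after.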
A few random thoughts:
A system composed of atoms. (As opposed to a magical immaterial being who merely happens to be trapped in a material body, but can easily overcome all its limitations by sufficient belief / mysterious willpower / positive thinking.)
That means I should pay some attention to myself as a causal system; to try seeing myself as an outside observer would. For example, instead of telling myself that I should be e.g. “productive”, I should rather look into my past and see what kinds of circumstances have historically made me more “productive”, and then try to replicate those more reliably. To pay attention to trivial inconveniences, superstimuli, peer pressure—simply to be humble enough to admit that in the short term I may be less of the source of my actions than I would like to believe, and that the proper way to fix it is to be strategic in the long term, which is not going to happen automatically.
Most people value happiness. But the human value is complex; we also want our beliefs to correspond to reality instead of merely believing pretty lies or getting good feelings from drugs.
Often people are bad at predicting what would make them happy. There is often a difference between how something feels when we plan it, when we are living it, and when we remember it afterwards. For example, people planning a vacation can overestimate how good the vacation will be, and they may underestimate the little joys of everyday life. Or a difficult experience may improve relationships between people who suffered together, and make a good story afterwards, thus creating a lot of value in the long term despite being shitty in the moment.
Sometimes we have goals, or we tell ourselves that something will be awesome, under influence of other people. We should make sure those people are in our “reference group”, and that they are speaking from their experience instead of merely repeating popular beliefs (in best case, those people should be older versions of our better selves).
Success often does not feel magical at the moment it happens; and it never makes you "happy ever after". For example, you may believe that if you achieve X, you will be super happy, but when the day actually comes, you will probably just feel tired, or maybe even a bit disappointed. You may have already raised your expectations, so by the day you reach X you already believe that only 2X can make you truly happy. Or maybe X comes so gradually that you never actually notice it when it arrives, because that day doesn't feel much different from the previous one. -- This can be addressed by reviewing the past and finding the values of X that you have already achieved, and that you remember having once wanted.
If your strategy is “to do X because you want to achieve Y”, you should look for evidence whether X actually brings Y, and whether there are alternative ways to achieve Y. Otherwise you risk spending a lot of time and energy to achieve X without actually achieving Y.
Specific goals need specific answers. But in general, you probably need a good model of how other people achieve similar goals (the problem is, many people will lie to you for various reasons). Then you need vision and habits. And some system of feedback, to measure whether you are really progressing in the long term.
For example, if your goal is to write a novel, you should look for advice from your favorite authors, you should imagine what kind of novel you want to write and for which audience, and then you need to spend some time every week actually writing. You could measure your long-term progress e.g. by publishing your writing on the web and tracking how many people read it.
"How to Actually Change Your Mind" is a great topic. A good way to start such a workshop is by having everybody write down instances where they changed their mind in the last year and then discuss those examples.
Do you guys know how you can prevent sleep paralysis?
What makes it a problem for you? What’s the problem of having a bit more conscious time while your body is at rest?
Have you tried the normal sleep hacks of going to bed at the same time every day and sleeping 8 hours, having no red light an hour before bed, sleeping in a pitch-black room, and taking a bit of melatonin?
It’s an incredibly good indicator of poor sleep quality for me. I have to take phenibut to get good sleep quality nowadays though.
Yes I have. I notice it has to do with body position or when my head is on a tilt.
I’ve found that I only ever get something sort of like sleep paralysis when I sleep flat on my back, so +1 for sleeping orientation mattering for some reason.
For me, recurrent sleep paralysis turned out to be associated with sleep apnea. Both were reduced but not eliminated by adjusting sleep position (side rather than back, as others have already mentioned) and wearing a mandibular adjustment device (which holds the jaw in a slightly different position to avoid airway obstruction). Similarly, some changes in consumption habits reduced occurrence: reducing alcohol intake and large/rich meals shortly before sleeping.
In my case these symptoms were the result of some abnormalities in my throat cartilage which eventually required surgery, but the above behaviour changes reduced occurrence substantially (from approx. 5 instances per week of sleep paralysis or choking down to 1.2, based on a 3-month diary). I made all the above adjustments together, so I cannot give any further indication of which of them might have helped, or indeed fully rule out a placebo effect!
I didn’t recognise the association between sleep paralysis and apnea but it was one of the first things the head & neck specialist asked.
I did not have sleep apnea; I tested negative for it and for narcolepsy.
Putting a bar of soap between bedsheets supposedly prevents leg cramps. You might want to try it for sleep paralysis keeping in mind that the placebo effect is a real thing you want to take advantage of.
The example of soap suggests that our completely flat beds aren't optimal sleeping surfaces. It would be interesting to see what a smart bed that automatically adjusts its surface could do.
Start using it for experiments with OBEs or visualization.
Does it make a difference if an organism reproduces in multiple smaller populations versus one larger, if the number of offspring at generation one is held constant? (score is determined by the number of offspring and their relatedness, so the standard game)
Smaller populations are more prone to genetic drift, but in both directions, right?
Does this change somehow if the populations are connected, with different rates of flow depending on the direction?
For example, in humans, migration to the capitals (and, in general, urbanization) happens way more often than the converse. I also believe that people are unlikely to migrate between like-sized cities, because what's the point, but that's just an assumption. In this case, for genes to spread from one small population to another, they have to go through the capital first. OTOH, the people leaving a small source population could be more closely related to the original population.
So, uh, in general, how would one find the optimal strategy here? …is there a difference?
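One cheap way to get intuition here, rather than solving it analytically, is to simulate it. Below is a rough sketch using a neutral Wright-Fisher model with arbitrary population sizes, generation counts, and a 0.5 starting allele frequency (all assumptions for illustration, not a model of any particular organism); it shows that drift is symmetric in expectation but has much higher variance, and more fixation/loss, in the small populations:

```python
import random

def wright_fisher(pop_size, generations, start_freq=0.5):
    """Final frequency of a neutral allele after drift in one population."""
    freq = start_freq
    for _ in range(generations):
        # Each offspring independently inherits the allele with probability
        # equal to its current frequency in the parent generation.
        carriers = sum(random.random() < freq for _ in range(pop_size))
        freq = carriers / pop_size
    return freq

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

if __name__ == "__main__":
    runs = 200
    large = [wright_fisher(pop_size=1000, generations=100) for _ in range(runs)]
    small = [wright_fisher(pop_size=100, generations=100) for _ in range(runs)]
    print("one large population:   mean", mean(large), "variance", variance(large))
    print("several small ones:     mean", mean(small), "variance", variance(small))
```

Adding migration between demes (e.g. routing migrants through a "capital" deme at different rates in each direction) is a small extension of the same loop, so the asymmetric-flow question could be explored the same way.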
I have a rationalist/rationalist-adjacent friend who would love a book recommendation on how to be good at dating and relationships. Their specific scenario is that they already have a stable relationship, but they’re relatively new to having relationships in general, and are looking for lots of general advice.
Since the sanity waterline here is pretty high, I thought I'd ask if anyone had any recommendations. If not, I'll just point them to this LW post, though having a bit more material to read through might suit them well.
Thanks!
It’s written from a Christian perspective, but “Things I wish I’d known before getting married” by Gary Chapman is extremely good: 90% level-headed good sense and 10% Christian moralizing. I recommend it for any new couple.
Rowland Miller, Intimate Relationships, 7th Ed.
A good book for a general overview is Mate: Become the Man Women Want by Tucker Max and Geoffrey Miller. Geoffrey Miller is an evolutionary psychology professor and Tucker Max is famous for writing books about his politically incorrect sex stories. At the same time, the focus of their book is on mutually beneficial interactions. They also have a podcast over at http://thematinggrounds.com/
From the position of being new to relationships, it's also worthwhile to read about sex. Two good books are The Sex God Method by Daniel Rose and Slow Sex by Nicole Daedone. The two books provide very different perspectives: Daniel Rose comes from a PUA background, while Nicole Daedone has a degree in Gender Communications and a more New Age background.
(My brain automatically added “in-” before “famous” when I skimmed that sentence.)
I like John Gottman’s books. He has written several, any would be good. My favourite is “And Baby Makes Three.” He is a therapist who studies married couples in a lab, and can see what works and what doesn’t.
Isn’t some sort of deism at least plausible and reasonable at this juncture? Is there a materialistic theory of what happened before the big bang that is worth putting any stock in? Or are we in an agnostic wait-and-see mode regarding pre-big bang events?
That would majorly depend on what “deism” means, as a concrete model, other than “here my other models break down”. After all, if you can postulate an intelligent and moral being, with exactly our own kind of intelligence and morality, with the power of creating a universe, then surely you can posit, with much more confidence, an unintelligent and amoral system with the power of creating a universe.
There are many, but none of them are in the realm of testability, due to their dependence on a particular flavor of quantum gravity. Let's not forget that the Big Bang is a singularity, meaning a point where the model breaks down and cries. If you want to go 'before' the Big Bang, you need a wider model (that is, a theory of quantum gravity).
That is surely the most sensible approach at our point in time.
Time is a phenomenon inside the physical world; it is not something outside of it. It doesn't make sense to talk about time before the existence of the physical world.
Yeah. Okay. Is there any consensus about what caused the big bang? Like, how it happened?
It seems to me abiogenesis is super tricky but conceivable. The “beginning” of everything is a bit more conceptually problematic.
Positing a hyper-powerful creative entity seems not that epistemologically reckless when the more “scientific” alternative is “something happened”.
Jumping from “something happened” to “a hyper-powerful creative entity happened” is not reckless? Especially when we have evidence that more complex things can arise from less complex things without a supernatural manager guiding the process.
What makes you look at the vast set of “somethings” that might have been responsible for the origin of the universe, and choose exactly the same thing that our ancestors considered a good explanation for the origins of thunder (and now we know they were wrong)?
This isn’t being questioned. I’m asking about origins.
I don’t consider it a good explanation. But others have. And I don’t see why it’s necessarily bad. So far, I’ve seen no reason on this thread to update and make deism an awful explanation.
How about epistemologically useless? What caused your hyper-powerful creative entity? You haven't accomplished anything; you've just added another black box to your collection.
It is "progress" from "here is a black box and I don't know what is inside" to "here is a black box and I believe there is a magical fairy inside".
I suppose. Though I think saying “magical fairy” is just an attempt to use silly-sounding words to dismiss an idea.
I may be wrong (IF SO, PLEASE CORRECT ME WITH DETAILS), but from what I understand, the origin of the universe (“pre-big bang”, to the extent that phrase makes any sense) is an area where we currently have almost no knowledge. There are lots of very strange theories and concepts being discussed that have no real evidence supporting them. We’re often dealing with pure conjecture, speculating about the way things might be in the absence of the universal laws with which we are familiar.
Do you have a particular theory about how the universe came to be? If so, what makes you believe this?
I agree that the non-religious theories about origins of the universe are speculative. I could name a few, and perhaps say which ones I prefer, but I wouldn’t expect to convince anyone, probably not even myself on a different day.
(I suspect the correct answer is somewhere along: “everything exists in a timeless Tegmark multiverse, but intelligent observers only happen in situations where causality exists, and causality defines some kind of measure, so despite everything existing, some things seem more likely to the observers than other things”. And specifically for the origin of our universe, I suspect the correct answer would be that if you get too close to the big bang, local arrows of time start pointing in non-parallel directions and/or the past stops being unique. But that’s just a bunch of words masking my lack of deep understanding.)
However, religions also don't have convincing answers for what happened before god(s) created the world, or how god(s) happened to exist. So by adding religion you are actually not getting any closer to the answer. You have one more step in the chain, but the end of the new chain looks the same as (or worse than) the end of the old one.
Instead of "the universe has simply existed forever" you have "god has simply existed forever"; instead of "time only exists within the universe, so it is meaningless to ask what was before it" you have "god created time together with the universe, so it is meaningless to ask what was before that"; instead of "the universes exist in an infinite loop of big bang and big crunch" you have "god keeps creating and destroying universes in an infinite loop"; et cetera.
Can you explain how a simulated universe, for instance, is more useful than deism? Doesn’t it also simply move the question of ultimate origins back a step?
Right, which is why I don’t postulate a simulated universe as the explanation for existence.
This is essentially what username2 was getting at, but I’ll try a different direction.
It’s entirely possible that “what caused the big bang” is a nonsensical question. ‘Causes’ and ‘Effects’ only exist insofar as there are things which exist to cause causes and effect effects. The “cause and effect” apparatus could be entirely contained within the universe, in the same way that it’s not really sensible to talk about “before” the universe.
Alternatively, it could be that there’s no “before” because the universe has always existed. Or that our universe nucleated from another universe, and that one could follow the causal chain of universes nucleating within universe backwards forever. Or that time is circular.
I suspect that the reason I’m not religious is that I’m not at all bothered by the question “Why is there a universe, rather than not a universe?” not having a meaningful answer. Or rather, it feels overwhelmingly anthropocentric to expect that the answer to that question, if there even was one, would be comprehensible to me. Worse, if the answer really was “God did it,” I think I would just be disappointed.
It makes a lot of sense that questions about the "beginning" of the universe are nonsensical and anthropocentric, but it still feels like a cheap response that misses the crux of the issue. It feels like "science will fill in that gap eventually" and we ought to trust that it will be so.
Matter exists. And there are physical laws in the universe that exist. I accept, despite my lack of imagination and fancy scientific book learning, that this is basically enough to deterministically allow intelligent living beings like you and I to be corresponding via our internet-ed magical picture boxes. Given enough time, just gravity and matter get us here—to all the apparent complexity of the universe. I buy that.
But whether the universe is eternal, or time is circular, or we came from another universe, or we are in a simulation, or whatever other strange non-intuitive thing may be true in regard to the ultimate origins of everything, there is still this pesky fact that we are here. And everything else is here. There is existence where it certainly seems there just as easily could be non-existence.
Again, I really do recognize the silly anthropocentric nature of questions about matters like these. I think you are ultimately right that the questions are non-sensical.
But, to my original question, it seems a simple agnostic-ish deism is a fairly reasonable position given the infantile state of our current understanding of ultimate origins. I mean, if you're correct, we don't even know that we are asking questions that make sense about how things exist... then how can we rule out something like a powerful, intelligent creative entity (one that has nothing to do with any revealed religion)?
I’m not asking rhetorically. How do you rule it out?
My disagreement isn't that it's implausible for such an entity to exist, but that it's extremely implausible for it to matter in any decision or experience I anticipate. The chain of unsupported leaps from "I perceive all this stuff and I don't know why" to "some powerful entity created it all, and I understand their desires and want to behave in ways that please or manipulate them" is more than I can follow.
The OP is talking about deism.
Right. And regardless of what’s written about the rest of the cluster of religious belief regarding souls, creator-pleasing morality, etc., I have yet to actually meet anyone who assigns a high probability of a conscious thinking Creator without also bringing the rest of it in.
Have you met anyone who self-identifies as a deist?
Why is the bold necessary, or necessarily relevant? Are you referencing revealed religion?
There’s the burden of proof thing (it’s the affirmer, not the denier, who has to present evidence) and the null hypothesis thing (in absence of evidence, the no-effect or no-relationship hypothesis stands).
I’m not trying to prove anything. I’m asking specifically about the process people who are smarter than I use to rule a proposition out.
I think that’s one question that science probably won’t be able to answer. But that’s no reason to just make something up! Maybe we can’t rule out a ‘powerful, intelligent creative entity’ – but why would you even think of that? And of course it just shifts the question to the next level, because where would that entity come from?
Others have thought of it. I’m asking why I ought to dismiss it. I think we have good reasons to dismiss, for instance, Christianity, because of the positive claims it makes. I don’t see the same contradiction with something like deism.
This isn’t a compelling argument to me. Can we rule out an intelligent prime mover with what we know about the universe? If so, what do we call the events that first caused everything to be?
Could the prime numbers not exist? Some things, such as our universe, might have to exist.
Please elaborate. The universe is necessary?
I have thought a lot about why there is something rather than nothing. It seems (to my brain at least) that the prime numbers have to exist, that they are necessary. I have speculated that perhaps after we understand all of physics we will come to realize that like the prime numbers, the universe must exist. I admit that I’m giving a mysterious answer to a mysterious question, sorry.
Interesting. I’m ignorant of math, but aren’t numbers just abstractions? And prime numbers exist within those abstractions?
Can you help me understand the parallel to the physical reality, and ultimate origins, of the universe?
...
I appreciate your reply, as it pretty well sums up where I’m at. Can you take a stab at articulating why you (presumably) reject something like deism as an explanation for why there is something instead of nothing?
I also believe a perfect knowledge of physics will ultimately allow us to see clearly “why” and “how” the universe is the way it is, solving questions of origin in the process. But, in the meantime, I’m having a hard time dismissing the idea of a powerful intelligent creative entity a la deism, as it seems just as plausible as the other ideas I’m aware of.
On other note: It seems deism gets saddled with connotations of religion in discussions like this, and I don’t think this is fair or helpful in the discussion. If you would be intentional to avoid this in your response, I would appreciate it.
Look into the ideas of Tegmark, the Mathematical Universe Hypothesis. The central idea is that all possible mathematical structures exist. What we view as “the Universe” is just one set of equations with a particular set of boundary conditions, out of an infinite space of valid mathematical structures. The Universe exists because its existence is logically valid. That’s it.
Yes, this is my best guess as well. I reject deism because of Occam's razor—the computational complexity of a conscious creator is rather high. I do think this might all be a computer simulation, but even then the basement reality wouldn't have a conscious creator.
Private insurance approaches to universal healthcare seem like the only universal healthcare policy formulation that doesn't subsidise, and therefore incentivise, poor health decisions. Therefore, the primacy of my justice ethics would support that, or non-universal health care policy formulations. However, I don't know whether the evidence supports or opposes that perverse incentive actually operating on human behaviour, or whether other complex factors (e.g. increased productivity of the subsidised risk-takers?) sufficiently compensate individuals who are making legitimate, egoistic decisions or better (prosocial ones). Does anyone know what the evidence says, preferably with an indication of the strength of evidence so that other evidence can be synthesised appropriately?
How do I work out whether an ethical duck farm is a profitable venture?
Say this with me: 'I will cognitively reframe and restructure my knowledge of the antecedents and determinants of negative, unavoidable consequences, because cognitive behavioural therapy actually works.'
I'm interested in the reactions people might expect or seek after disclosing information or asking a question. What kind of reaction are you expecting in response to whatever you comment in reply to this?
Do you believe the affective fallacy is a legit fallacy? I don’t, but I think attitudes to the fallacy would be a good correlate of attitudes to my writing.
Strategically, do you think more like a naval admiral, or a pirate captain?
By taxing tobacco above the Ramsey rate, up to Pigovian rates, the government sacrifices tax revenue (by cutting into demand) for public health gains that could be achieved in other ways (e.g. by tobacco licences). But wouldn't those alternatives also reduce demand, and therefore consumption? The justification is that even though tobacco tax revenue is high, it doesn't match the costs externalised onto the health care system.
Counterintuitive relevant fact: Adam Smith supported Pigovian taxes.
Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, 1776.
Got the travel bug? Want a cure? Check this out
I recently almost asked someone if they had a strategy... for what amounted to the formulation of their startup's strategy. Meta-addiction diagnosed!
Maybe it's because I'm compulsive. Maybe it's because I cling to motivational videos; maybe it's because I'm a gambling addict. Either way, I've got off the hedonic treadmill. How? A mindblowing attitude adjustment on desire. This culls my impulsivity and reactiveness. Thank you, Julien Blanc. A supplementary video that I'm usually too lazy to watch myself is here. I wouldn't be surprised if people look back at PUAs with the admiration afforded, with the benefit of historical hindsight, to social movements that were reviled or ignored in their living prime.
Want to see the world's most competent, battle-tested diplomat in action? Here's a video of him in an interview: Interview: Dr Mohammad Javad Zarif, Iran's Foreign Minister, with ABC's Chief Foreign Correspondent Philip Williams. He's the product of being a career diplomat AND an academic THEN a politician.
The standard of evidence for cocoa butter's efficacy on Wikipedia is citing Livestrong articles. It's natural, so it's an attractive lip balm option, but does it work?
Here's an idea that's ahead of its time: a nightclub that plays music at a level that won't cause permanent hearing damage. It could go hand in hand with sober nightclubs... maybe even silent discos, for friendly conversation, individual tailoring, and no noise pollution.
No
It’s a mouthful
If it wasn't, it would suffer from one of Empson's Seven Types of Ambiguity. Now that I have a typology of ambiguity, I no longer feel uncomfortable with it.
One major difference between left and right is the stance on personal responsibility.
Leftist intellectuals tend to think societal influence trumps individual capabilities, so people are not responsible for their misfortunes and deserve to be helped, whereas rightists have the opposite view (related).
This seems trivial, especially in hindsight. But I hardly ever see it mentioned, and in most discussions the right treats the left as foolish and irrational, while the left thinks people on the right are self-interested and evil, rather than simply holding a different philosophical opinion.
I guess this is part of the bigger picture of political discourse: it is always easier to dehumanise an opponent than to admit their point is as valid as ours.
Would this description pass an ideological Turing test?
It seems to me (leftish) that it’s pointing at something correct but oversimplifying.
In so far as Lycce’s analysis is correct, I should be looking at people in difficulty and saying “there’s nothing wrong with their abilities, but society has screwed them over, and for that reason they should be helped”. I might say that sometimes—e.g., when looking at a case of alleged sexual discrimination—but in that case my disagreement with those who take the other position isn’t philosophical, it’s a matter of empirical fact. (Unless either side takes that position without regard to the evidence in any given case, which I don’t think I do and wouldn’t expect the more reasonable sort of rightist to do either.)
But it’s not what I’d say about, say, someone who has had no job for a year and is surviving on government benefits. Because that would suggest that if in fact they had no job because they simply had no marketable skills, then I should be saying “OK, then let them starve”. Which I wouldn’t. I would say: no, we don’t let them starve, because part of being civilized is not letting people starve even if for one reason or another they’re not useful.
We might then have an argument—my hypothetical rightist and I—about whether a policy of letting some people starve results in more people working for fear of starvation, hence more prosperity, hence fewer people actually starving in the end. I hope I’d be persuadable by evidence and argument, but most likely I’d be looking for reasons to broaden the safety net and Hypothetical Rightist would be looking for reasons to narrow it. That may be because of differences in opinion about “personal responsibility” (as Lycce suggests) or in compassion (as I might suggest if feeling uncharitable) or in realism (as H.R. might suggest if feeling uncharitable) but I don’t think it has much to do with societal influence trumping individual capabilities.
I think Lycce’s analysis works better to explain left/right differences in attitudes to the conspicuously successful. H.R. might say: “look, this person has been smart and worked hard and done something people value, and deserves to be richly rewarded”. I might be more inclined to say “yes indeed, but (1) here are some other people who are as smart and hardworking and doing valuable things but much poorer and (2) this person’s success is also the result of others’ contributions”. And if you round that off to “societal influence versus individual capabilities” you’re not so far off.
In uncharitable mood, my mental model of people on the right isn’t quite “self-interested and evil” but “working for the interests of the successful”. (When in slightly less uncharitable mood, I will defend that a little—success is somewhat correlated with doing useful things, thinking clearly, not harming other people too overtly, etc., and there’s something to be said for promoting the interests of those people.)
I would guess (not very confidently) that people on the right will be more inclined to agree with Lycce’s analysis, and (one notch less confidently still) that Lycce identifies more with the right than with the left.
Apparently I have not made my point clear enough. I am indeed simplifying: "everything is due to society" and "everything is due to individuals" are the two ends, but you can be anywhere on the spectrum. This is also only one point among others, probably not the main one, defining identity politics (as you pointed out), and surely not every leftist/rightist will hold the view I attribute to him, or is even concerned with the concept.
If I take your example of the person on government benefits with no skills, a common argument is that the fact that he had poor parents, grew up in a bad neighbourhood, or was discriminated against is one reason, if not the main reason, he has trouble acquiring skills or finding a job; therefore he should not be held responsible and left alone.
I consider myself leftist (by European standards). I do think success mostly depends on things beyond the individual, and that we ought to help everyone anyway, even if someone is the only one to blame for his misery (I also buy this civilized thing).
The reason to think in terms of ideological Turing test is that “opposite” is almost never correct. Almost nothing can be usefully simplified to a simple one-dimensional aspect where both ends are reasonable and common.
In the multidimensional space of different personal influences (genetics, upbringing, current social environment, governmental and non-governmental support and constraint networks), there are likely multiple clusters of belief about the balance of choice vs. non-choice. It's just not useful to characterize one cluster as the "opposite" of the other.
Personally, I find the three-axis model fairly compelling—it's not that different political leanings come from different points on a single dimension; it's that they focus on completely different dimensions. Progressives tend to think in terms of oppressor/oppressed, conservatives in terms of barbarism/civilisation, and libertarians in terms of coercion/freedom.
This does get accepted (to some extent—it’s still massively oversimple) by both liberal and conservative friends of mine, so passes at least one level of test.
It might well be a common argument, but the correct question is whether it’s a valid argument.
Using a less sympathetic expression this is also known as the forced redistribution of wealth. There is an issue, though, well summed up by the quote usually attributed to Margaret Thatcher: “The problem with socialism is that eventually you run out of other people’s money”.
I do think it is a valid argument (I might be wrong, of course); many studies have highlighted the effects of education, parents, genes, environment, etc. So I find it unfair to blame someone for their problems, since there are too many elements to consider to give an accurate judgement.
I don't like the idea of forced redistribution of wealth (namely, taxes), but in my opinion having part of the population living in horrible conditions, if not outright starving, is worse, whether they deserve it or not.
I'd wager there is enough money in the first world to give everyone a "decent" life (admittedly this depends on your definition of decent; let's say a shelter, food, education, health care, and some leftovers for whatever you want to do). It is already implemented in various countries, and the States are not so far off in their own way, so it is doable. However, it is probably not the optimal path for long-run economic growth; I think it is worth it anyway (low confidence though).
Yes, but let me emphasize the important part of that argument: “then he should not be held responsible and left alone”. That’s a normative, not a descriptive claim. It is also entirely generic: every single human being should not be held responsible—right?
For how long?
You're assuming there is a magical never-ending pot of money from which you can simply grab and give out. What happens in a few years when you run out?
Fair enough, this is only my own biased opinion. It is indeed generic; I am still unsure whether my position should be "mostly not responsible" or "not responsible at all", depending on which model of free will is correct.
Wealth is produced, and the money does not disappear (does it actually? my understanding of economics is pretty basic) when you give it out, since the recipients spend it as consumers the same way the people you take it from would.
I don’t see anything “running out” in the few socialist countries out there.
The money usually does not literally disappear, but what happens if you have too much money in circulation and not enough things to buy is that the money loses value, i.e. things become more expensive. (Attempts to fix this problem by regulating prices typically result in literally empty shops after the few cheap things are sold.) It is related to inflation, but the whole story is complicated.
There are many countries in eastern Europe that once had "socialist" in their names and now don't. And they happen to be among the poorest in Europe. The "running out of money" meant that over decades their standards of living fell far behind western Europe's.
You probably mean Sweden (people who talk about "socialist" countries not running out of money usually mean Sweden, because it's quite difficult to find another example). I don't know enough about Sweden to explain what happened there, but I suspect they have much less "socialism" than the former Soviet bloc.
(For the purposes of a rational debate it would probably be better to stop using words like “socialism” and instead talk about more specific things, such as: high taxes, planned economy, mandatory employment, censorship of media, dictatorship of one political party, universal health care, basic income, etc. These are things typically described as “socialist” but they don’t have to appear together.)
I think that, as much as having once had “socialist” in their names, may be their problem. They got screwed over by the Nazis in WW2 and then screwed over again by the USSR. I think they’d be poor now whatever their politics had been.
Again, the former Soviet bloc is distinguished by features other than socialism—notably, by having been part of the Soviet bloc. And the USSR is distinguished by features other than socialism—e.g., by totalitarianism, by having been the enemy of the US (which was always the richer superpower), etc.
On the other side, it’s not just Sweden—but also, as you say, not exactly hardcore socialism either.
That’s the whole (continental) Europe, not just Eastern.
By having specific politics imposed on them. So the “whatever their politics had been” is a non sequitur.
If by “socialism” you mean “Western social democracy”, the USSR was never socialist. And if by “socialism” you mean “communism” (which is how the Russians, etc. used the word), totalitarianism is an essential part of the package.
I do not think that was the only variety of screwage inflicted on the Soviet bloc countries by the USSR.
(And I bet imposing a particular political system on a country tends to make it less prosperous than it would have been had it adopted that political system of its own accord—because the people who have to make it work will resent it, be less motivated to make it work well, etc. So even if that were all the USSR did, I’d still expect economic damage independent of the (de)merits of the particular system they imposed.)
Actually I mean something more like “that which Western social democracies have more of than Western free-market capitalist countries, and avowed communist countries have more of again”. Or like the big bag of ideologies you’ll find on Wikipedia.
Counter-example: post-WW2 Japan (and, arguably, Western Germany as well).
Generally speaking, I'd say that "people who have to make it work will resent it" is too crude an approach. Some people will, but some people will see it as an excellent opportunity to advance. In the case of the Soviet Union itself it's unclear whether you can say that the political system was "imposed"—it's not like the population had a free choice...
Yup, I’ll agree that Japan did very well after WW2 despite having democracy imposed on it. Did it do better or worse than it would have had it embraced democracy autonomously, though?
(I doubt that’s answerable with any confidence. Unfortunately we can’t figure out how much evidence the economic difficulties of Eastern Europe are against socialist economic policies without taking some view on how damaging, if at all, it is to have a political system forced on you.)
Oh yes, but what else can you expect when we’re trying to deal with big knotty political questions in short forum comments?
Given the rather clean comparison of East and West Germanies (no one asked any Germans what kind of political system would they like), I don’t understand why you are having problems figuring this out.
The DDR, AIUI, had its system imposed on it much more drastically than the BRD did. The BRD was an ally of countries that were more prosperous and powerful to begin with (most importantly the US, as Viliam's comment about the Marshall Plan points out), whereas the DDR was their enemy.
For the avoidance of doubt, I do agree that there is very good evidence that Soviet-style communism is a less effective economic system than Western-style democratic lightly-regulated market capitalism. (And yes, the two halves of Germany make a nice comparison.) But from there to “all possible forms of socialism are bad for you” is not, so far as I can see, a step warranted by the evidence.
(The actual issue in this thread seems to have been whether the “First World” has the resources to provide everyone with ‘a “decent” life’ without running out. Lycce didn’t propose any very specific way of trying to do this, but I don’t have the impression he was wanting Soviet-style communism.)
Another huge difference was the Marshall Plan.
Basic income is historically no socialist idea. It’s a liberal idea. Milton Friedman came up with it under the name of negative taxation.
Billionaire Götz Werner did a lot to promote the concept. In Germany the CDU (right-wing) politician Dieter Althaus spoke for it. Y Combinator, which invests in research on it, is also not a socialist institution.
Socialism is about workers' rights. People who don't work but just receive basic income aren't workers. The unemployed aren't union members. Unions generally want employers to take care of their employees; they believe that employers should pay a living wage and that it's not the role of the government to pay low-income people a basic income.
If “not at all” won’t you have issues with e.g. the criminal justice system?
Money is just convenient tokens, you can’t consume money. What you want is value in the form of valuable (that is, desirable) goods and services. Most goods and services disappear when you consume them: if you eat a carrot, that carrot is gone.
When you give out (free) money you generate demand for goods and services. In the context of a capitalist society there is a common assumption that “the market” will automagically generate the supply (that is, actual goods and services) to satisfy the demand. However if you are not in the context of a capitalist society any more, you can’t assume that the supply will be there to meet the demand—see the example of the Soviet Union, etc.
When you redistribute money, people use that money to buy stuff. Someone has to produce the actual stuff and moving money around will not, by itself, lead to actual stuff being produced. If no one is growing carrots, there will be none to be had, free money or no free money.
In the current system people produce goods for their subsistence. Maybe if you gave subsistence to everyone (basic income, for example) and let people produce in exchange for "more", the system would still be viable.
The advantages are that nobody is left out, more flexibility in your work, people doing what they like (more artists and such), and not having to work to survive (that counts for some). It would increase the happiness of the people concerned. The disadvantages are a net loss of production compared to the current system, and the producers of goods being worse off. Maybe the trade-off is not worth it; I'd like to have it tried just to check.
I am undecided; even if they are not responsible, criminals are harmful to the rest of the population, so imprisonment can be necessary. However, the justice system should focus on rehabilitation rather than punishment.
Your question made me think: starting from that, one could perfectly well argue that since people who do nothing are harmful to the rest of society (technically they are taking money from the productive part), they should be forced to be productive.
Bearing that in mind, I would be fine with giving unproductive people incentives to become productive. But then you have the question of how much incentive is ethically justified.
The words “loss of production” are too abstract, so it feels like it is no big deal. But it depends on what specifically it means. Maybe it’s slower internet connection, fewer computer games, and more expensive Coca Cola. Or maybe it’s higher mortality in hospitals, higher retirement age, and more poverty.
I’m saying this because I think people usually only imagine the former, but in real life it’s more likely to be both.
If you give incentives to unproductive people to become productive, but you don’t give incentives to productive people to remain productive, the winning strategy for people is to have swings of productivity.
Generally, whenever you have a cool idea that would work well for the current situation, you should think about how the situation will change when people start adapting to the new rules and optimizing for them. Because sooner or later someone will.
I am aware that very negative consequences are possible, even likely, especially if you go the whole way (aka save everyone at any cost). My stance is that the current situation is not optimal, and that it is worth trying incremental, small-scale changes to see whether they make the situation any better (or worse). Admittedly the ways it could go wrong are multiple.
If working people can afford more luxury than non-working ones, this gives people an incentive to become productive and stay so. Another incentive that would probably exist (at least in the first generations) is peer pressure, with not working being low-status.
Yeah, the impossibility of predicting long-term evolution is the biggest flaw of universal basic income and the like. But this is true for any significant change. That's why we should be very careful about policy changes, but immobilism is not the thing to do (in my opinion).
Again, I am not highly confident that my opinion is the right one.
(answer to your other message)
The difference between Sweden (Denmark and France also fit the bill) and eastern European countries is that the former have an extensive welfare system but apart from that have a capitalist economy, while this is not the case for the latter.
For example, in France (the one I know the most about), if you are single and have never worked there is a "living wage" of approx. 500 euros per month (only if you are over 25, for some reason), plus housing assistance of roughly 90 to 150 euros per month, free healthcare, and free public transport. If you have kids you get more help and free education, but it is harder to live without working.
On the other hand, France is a market economy with free trade, very few state monopolies, and wealth owned by private capital.
Nope, that would be true in a subsistence economy. You don’t want to live in one :-/
In the current system people produce goods to be exchanged for money, which is then used to buy other goods.
And do you have reasons to believe that would be so—besides “maybe”?
Well, until their toilet clogged and stayed clogged because most plumbers became painters and the rest just went fishing. And until they got sick and found out that the line to see one of the few doctors left is a couple of months. And until the buses stopped running because being a bus mechanic is not such a great job and there are not enough guys who are willing to do it just for fun...
Of course. See e.g. the Soviet Union or Mao’s China: being unemployed was a crime. If you can’t find a job, the state has a nice labour camp all ready for you.
In money or bullets?
No, that's why I'd like to see it tried. Nordic countries seem to be headed in that direction; we'll see how it goes.
One possibility is to find a new equilibrium where the less attractive a job is, the better the rewards for doing it (since people would be ready to pay more to have it done for them).
You forgot the second part:
This is already how it works. And in a purely capitalist society, you have a choice between working or starving (unless someone is willing to help you); this is not much better than bullets.
I would personally go for fewer incentives than in our current society.
Do you think that trying could have considerable costs? Russia tried communism, that… didn’t turn out well.
Why new? That’s precisely how the current equilibrium works (where advantages == money).
You didn’t answer the question.
Why capitalistic? In your black-and-white picture that would be true for all human societies except for socialist ones. Under capitalism you could at least live off your capital if/when you have some.
So why would anyone come to unclog your toilet?
It could; incremental changes, or doing it on a smaller scale, would mitigate the costs. A "partial" basic income already exists in several European countries, where even when not contributing to society you are given enough to subsist. The results are not too bad so far.
You are right; it would just be different jobs having the most value.
Is any system where people are automatically given subsistence socialist? Because it is the only thing I have talked about.
Money, but with a smaller cost for not being a producer than today (i.e. no comfort, rather than no subsistence).
For money, same as today
What non-socialist societies which unconditionally provided subsistence to all its members, sufficient to live on, do you know other than a few oil-rich sheikhdoms?
(for the ideological turing test)
I have tried to make my argument as neutral as possible, giving both sides of the argument and avoiding disparaging either.
Let's try from both directions then (I am personally a leftist).
On the left side, I think so. I definitely think societal influence (amongst other things beyond the individual's power, such as genetics) trumps individual choices. I have also seen this opinion amongst friends and intellectuals, so I am not alone in this; not everybody on the left thinks like this, though.
On the right side, my model of the right is not as good as I'd like, but I have seen the view expressed in various places. Again, it does not apply to all rightists, nor is it the main point for everyone.
Sorry, but I'm not sure I understand what you are talking about; could you elaborate?
One way of thinking about this is “would my enemies, if reading this, think it is a description of their beliefs written by an ally?”
I’m not sure of the relevance in this instance.
I downvoted the post for it being a political post on LW that tries to explain complex politics with a simple model.
Thank you for the feedback. Unfortunately it looks like I have not been able to express myself clearly.
It was not supposed to explain anything, but rather to give one point I find not stressed enough. I am aware that it does not sum up politics or give a full distinction between political sides.
I don't think that the general class of posts "Political idea XY which I just came up with isn't mentioned enough in the venues I read" makes for a good LW post.
“This is the type of arrant pedantry up with which I will not put!”
Still, it would be very wrong to describe rightists as thinking that everyone who can't support themselves should starve. Many people on the political right also practice and/or believe in charity.
As a rightist myself I’d like to point out that there is a massive difference in our belief system between being forced to support folks who don’t work (you are a slave, changing this intolerable state is the primary goal of your life) and choosing to do so (a righteous act, golf claps).
And I'd like to point out that there is a massive difference between maybe getting charitable support that keeps you alive and having a right to welfare. From behind a veil of ignorance, you don't know that you are going to be in the position of the giver.
I think this subthread is a good summary of why we should just leave politics out of LW, and why trying to summarize a single dimension of difference is hopeless.
So I’ll continue :) Here goes the anti-turing definition (each side will agree it applies to the other, but not to themselves):
Progressives/leftists believe it’s OK to define rights over things that don’t exist yet (say, food that isn’t yet planted or care from a future doctor who might prefer to golf that day instead of exposing himself to your disease). The conservatives/rightists think it’s OK to define rights that make it easy to ignore others’ suffering.
No, leftists think you have rights to things, not over things. Insisting that a right can only be over something pretty well begs the question in favour of property rights.
I don’t understand this—it doesn’t make sense to me.
It was my attempt to rephrase the “massive difference” posts by WalterL and TheAncientGreek, above.
WalterL takes the rightist side, asserting a right to freedom from coercion and that being forced to support others is a form of slavery. TheAncientGreek takes the leftist side, asserting that a right to welfare is far preferable to a charitable state of support.
These rights are in direct conflict. Person A’s right to welfare requires that person B is mandated to provide it. Person B’s right to choose her own activities implies that person A might not get fed or housed.
Then that was completely wrong. I was drawing a distinction between the kind of outlook you might have if you know you are in a winning position, and the kind you might take if you don't know what position you are going to be in.
Um, to quote TheAncientGeek, "there is a massive difference between maybe getting charitable support that keeps you alive and having a right to welfare"—I think you misunderstand him.
But still, how is the right to welfare a right “over things that don’t exist yet” and how is the right to be not taxed (more or less) a right that “make[s] it easy to ignore others’ suffering”?
The first is the right to support and the matching duty falls onto the government. It could be (see Saudi Arabia) that it can provide this support without taking money out of any individuals’ pockets. The second is basically a property right and has nothing to do with the ease of ignoring suffering.
Perhaps I do misunderstand him. I took his “massive difference” comparison to mean that he doesn’t believe charity is sufficient, and he would prefer welfare to be considered a right.
In the long term, the government is just a conduit—it matches and enforces transfers, it doesn’t generate anything itself. The case of states that can sell resources is perhaps an exception for some time periods, but doesn’t generalize in the way most people think of rights independent of local or temporal situations.
In any case, a right to support directly requires SOMEONE to provide that support, doesn’t it? If everyone is allowed to choose not to provide that support, the suffering must be accepted.
That's what I meant, but it has nothing to do with things that don't yet exist.
So, can we just get rid of it, then? :-/ I don’t think we should take a detour into this area, but, let’s say, a claim that government does not create any economic value would be… controversial.
Yes, correct. All rights come as pairs of right and duty. Whatever is someone’s right is someone else’s duty.
I’m still confused about “rights over things that don’t exist yet” and “rights that make it easy to ignore”.
Asserting a right to eat is not just a statement about current food supply ownership or access. It’s saying that, if food is later created, the right applies to that too. Conversely, if I have the right not to grow food or not to give it to someone else, I am allowed to ignore their pain.
Don’t most rights work this way? I think it’s just the default.
I don’t quite understand the “allowed to ignore” part. What is the alternative, Clockwork Orange-style therapy?
“I am allowed to X” in this context means “X is not worthy of moral condemnation, and forcibly stopping X is worthy of moral condemnation”.
Moral condemnation or application of force are the common responses.
I would guess that people on the political right are more likely to donate to charity than people on the political left.
At least when I look at people around me, those on the left are more likely to say “why should I care about this problem; isn’t this one of those things that government should do?”. And those on extreme left will even say something about how ‘worse is better’ because it will make the capitalist system collapse sooner, while donating to alleviate problems delays the revolution.
This analysis suggests that any relationship between political affiliation and charitable donation isn’t very strong. For what it’s worth, the sign of the coefficient in the regression suggests that lefties give more than righties. (The paper also looks at volunteering, and finds that lefties volunteer quite a lot more than righties.)
I wouldn’t make any large bets on the basis of that paper, though. There are lots of interrelated things here—politics, wealth, religion, etc., etc., etc. -- and even if those regression coefficients indicate something real rather than just noise it may be much more complicated than “group X is more generous with their time/money than group Y”. And it looks like it’s the work of a single inexperienced researcher, and doesn’t seem to be a peer-reviewed publication.
This paper (not available for free, but there's an informal writeup by someone else here) says that other research has indicated that righties give more than lefties (contrary to what the paper above says), and purports to explain this by saying that righties are more religious and the religious give more. More precisely, it looks as if religion leads to giving in two ways. There's giving to religious charities, which obviously religious people do a lot more of than irreligious ones; and there's other giving, which church attenders do and so (to a comparable extent) do people involved in other sorts of socially-conscious meeting up. ("Local civic or educational meetings" is the thing they actually looked at.)
If you control for religion, then allegedly the left/right differences largely go away.
Make of all that what you will. (What I make of it is: it’s complicated.)
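If it helps to see what "controlling for religion" means mechanically, here is a toy sketch with entirely synthetic data (the variable names, effect sizes, and the use of statsmodels are my own assumptions for illustration, not anything taken from the papers discussed above):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
# Synthetic world: religiosity drives donations directly, and religious
# people also lean further right, so rightness and donations correlate.
religiosity = rng.binomial(1, 0.5, n)
rightness = 0.5 + 0.3 * religiosity + rng.normal(0, 0.3, n)
donations = 100 + 200 * religiosity + rng.normal(0, 50, n)

# Naive regression: donations on political leaning alone.
naive = sm.OLS(donations, sm.add_constant(rightness)).fit()
# Controlled regression: religiosity added as a covariate.
X = sm.add_constant(np.column_stack([rightness, religiosity]))
controlled = sm.OLS(donations, X).fit()

print(naive.params)       # rightness looks strongly predictive on its own...
print(controlled.params)  # ...but its coefficient shrinks toward zero here
```

In this made-up setup the raw left/right difference in giving is entirely an artifact of the religion variable, which is the shape of explanation the second paper offers.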
"Charity" is a political term that makes measuring this very difficult. If you count donations to private-charity art museums and to activism/signaling groups, rather than only looking at poverty impact, you'll get results that don't really tell you much about useful donations.