Open Thread, May 19 - 25, 2014
You know the drill—If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should start on Monday, and end on Sunday.
4. Open Threads should be posted in Discussion, and not Main.
I’m reading the “You’re Calling Who A Cult Leader?” post again, and now the answer seems obvious.
“I publicly express strong admiration towards the work of Person X.”—What could possibly be wrong about this? Why are our instincts screaming at us not to do this?
Well, assigning a very high status to someone else is dangerous for pretty much the same reason as assigning a very high status to yourself. (With a possible exception if the person you admire happens to be the leader of the whole tribe. Even so, who are you to speak about such topics? As if your opinion had any meaning.) You are challenging the power balance in the tribe. Only instead of saying “Down with the current tribe leader; I should be the new leader!” you say “Down with the current tribe leader; my friend here should be the new leader!”
Either way, the current tribe leader is not going to like it. Neither will his allies, nor the neutral people who merely want to prevent another internal fight in which they have nothing to gain. All of them will tell you to shut up.
There is nothing bad per se about suggesting that e.g. Douglas R. Hofstadter should be the king of the nonconformist tribe. Maybe we can’t unite behind this king, but neither can we unite behind any competitor, so… why not. At worst, some of us will ignore him.
The problem is, we live in the context of a larger society that merely tolerates us, and we know it. Praise Hofstadter too highly and someone outside of our circle may notice it. And suddenly the rest of the tribe might decide that it is going to get rid of our ill-mannered faction once and for all. (Not really, but this is what would happen in the ancient jungle.) So we had better police ourselves… unless we are ready to pick a fight with the current leadership.
Being a strong fan of Douglas R. Hofstadter means challenging those who are strong fans of e.g. Brad Pitt. There is only so much room at the top of the status ladder, and our group is not strong enough to nominate even the highest-status one among us. So we’d rather not act like we are ready for open confrontation.
The irony is that if Douglas Hofstadter or Paul Graham or Eliezer Yudkowsky actually had their small cults, if they acted like dictators within the cult and ignored the rest of the world, the rest of the world would not care about them. Maybe people would even invent rationalizations about why everything is okay, and why anyone is free to follow anyone or anything. -- The problem starts with suggesting that they could somehow be important in the outside world; that the outside world has a reason to listen to them. That upsets people; it’s the potential power change that concerns them. Cultish behavior well contained within the cult doesn’t. Saying that all nerds should read Hofstadter, that’s okay. -- Saying that even non-nerds lose something valuable when they don’t read something written by a member of our faction… now that’s a battle call. (Are you suggesting that Hofstadter deserves a similar status to e.g. Dostoyevsky? Are you insane or what? Look at the size of your faction, our faction, and think again.)
I was talking to the loved one about this last night. She is going for ministry in the Church of England. (Yes, I remain a skeptical atheist.)
She is very charismatic (despite her introversion) and has the superpower of convincing people. I can just picture her standing up in front of a crowd and explaining to them how black is white, and the crowd each nodding their heads and saying “you know, when you think about it, black really is white …” She often leads her Bible study group (the sort with several translations to hand and at least one person who can quote the original Greek) and all sorts of people—of all sorts of intelligence levels and all sorts of actual depths of thinking—get really convinced of her viewpoint on whatever the matter is.
The thing is, you can form a cult by accident. Something that looks very like one from the outside, anyway. If you have a string of odd ideas, and you’re charismatic and convincing, you can explain your odd ideas to people and they’ll take on your chain of logic, approximately cut’n’pasting them into their minds and then thinking of them as their own thoughts. This can result in a pile of people who have a shared set of odd beliefs, which looks pretty damn cultish from the outside. Note this requires no intention.
As I said to her, “The only thing stopping you from being L. Ron Hubbard is that you don’t want to. You better hope that’s enough.”
(Phygs look like regular pigs, but with yellow wings.)
I think you’re overcomplicating it. People like Eliezer Yudkowsky and Paul Graham are certainly not cult leaders, but they have many strong opinions that are well outside the mainstream; they don’t believe in, and in fact actively scorn, hedging/softening their expression of these opinions; and they have many readers, a visible subset of whom uncritically pattern all their opinions, mainstream or not, after them.
And pushback against excitement over Hofstadter can stem from legitimate disagreement about the importance/interestingness of his work. The pushback is proportional to the excitement that incites it.
Disagreed. IMO, there should only be kings if there’s a good reason… among other things, I suspect that status differences are epistemologically harmful. See Stanley Milgram’s research and the Asch conformity experiment.
I also disagree with the rest of your analysis. I anticipate a different sense of internal revulsion when someone starts talking to me about why Sun Myung Moon is super great vs why Mike Huckabee is so great or why LeBron James is so great. In the case of LW, I think people whose intuitions say “cult” are correct to a small degree… LW does seem a tad insular, groupthink-ish, and cultish to me, though it’s still one of my favorite websites. And FWIW, I would prefer that people who think LW seems cultish help us improve (by contributing intelligent dissent and exposing us to novel outside thinking) instead of writing us off.
(The most charitable interpretation of the flaws I see in LW is that they are characteristics that trade off against some other things we value. E.g. if we downvoted sloppy criticism of LW canon less, that would mean we’d get more criticism of LW canon, both sloppy and high-quality… not clear whether this would be good or not, though I’m leaning towards it being good. A less charitable interpretation is that the length of the sequences produces some kind of hazing effect. Personally, I haven’t finished the sequences, don’t intend to, think they’re needlessly verbose, and would like to see them compressed.)
I’ve recently been subject to sloppy criticism of “weird ideas” (e.g. transhumanism) and the sloppy criticism is always the same. At this point I’d look forward to high-quality criticism, but I’m not willing to suffer again and again through the sloppy parts for it.
If people want to provide high-quality criticism, they should be rewarded for it (in this case, with upvotes and polite conversation). Sloppy criticism remains low-quality content and should not be rewarded.
Makes sense. I still think the bar should be a bit lower for criticism, for a couple reasons.
Motivated reasoning means that we’ll look harder for flaws in a critical piece, all else equal. So our estimation of post quality is biased.
Good disagreement is more valuable than good agreement, because it’s more likely to cause valuable updates. But the person writing a post can only give a rough estimate of its quality before posting it. (Dunning-Kruger effect, unknown unknowns, etc.) Intuitively, their subconscious will make some kind of “expected social reward” calculation before posting. Because of human tendencies, social_punishment_for_sloppy_criticism is going to be higher than the corresponding social_punishment_for_sloppy_agreement parameter in the corresponding equation for agreement. If social_punishment_for_sloppy_criticism is decreased on the margin, that will increase the expected value of this calculation, which means that more quality criticism will get through and be posted. LW users will infer these penalties by observing voting behavior on the posts they see, so it makes sense to go a bit easy on sloppy critical posts from a counterfactual perspective. Different users will interpret social reward/punishment differently, with some much more risk-averse than others. My guess is that the most common mechanism by which low expected social reward will manifest itself is procrastination on writing the post… I wouldn’t be surprised if there are a number of high-quality critical pieces about LW that haven’t been written yet because their writers are procrastinating due to an ugh field around possible rejection. (I know intelligent people will disagree with me on this, so I thought I’d make my reasoning a bit more formal/explicit to give them something to attack.)
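To make that concrete, here’s a toy version of the calculation (the function and all the numbers are mine, purely for illustration):

```python
# Toy model of the subconscious "should I post this?" calculation.
# All parameter values are invented for illustration.

def expected_social_reward(p_sloppy, reward_if_good, punishment_if_sloppy):
    """Expected social reward as estimated by the writer before posting."""
    return (1 - p_sloppy) * reward_if_good - p_sloppy * punishment_if_sloppy

# A would-be critic who thinks there's a 40% chance their piece reads as sloppy:
print(expected_social_reward(0.4, reward_if_good=10, punishment_if_sloppy=20))
# -2.0 -> negative, so the post never gets written

# Halve the punishment for sloppy criticism, and the post becomes worth writing:
print(expected_social_reward(0.4, reward_if_good=10, punishment_if_sloppy=10))
# 2.0
```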
A good solution could be to just not downvote sloppy criticism. No reward, but also no punishment.
I’m not sure about this—the “Yay Hofstadter” team looks about as big as the “Yay Dostoyevsky” team, at least on the anglophone internet.
Bad example, perhaps. Try some big names from anglophone literature.
Shakespeare? Okay, maybe too old. Gone with the Wind? Something that is officially blessed and taught at schools as the literature. Something that perhaps not many people enjoy, but that almost everyone perceives as having an officially high status. That is the thing you are suggesting should be replaced by Hofstadter.
A few questions about cryonics I could not find answers to online.
What is the fraction of deceased cryo subscribers who got preserved at all? Of those who are, how long after clinical death? Say, within 15 min? 1 hour? 6 hours? 24 hours? Later than that? With/without other remains preservation measures in the interim?
Alcor appears to list all its cases at http://www.alcor.org/cases.html , and CI at http://198.170.115.106/refs.html#cases , though the last few case links are dead. So, at least some of the statistics can be extracted. However, it is not clear whether failures to preserve are listed anywhere.
Some other relevant questions which I could not find answers to:
How often do cryo memberships lapse and for what reasons?
How successful are last-minute cryo requests from non-subscribers?
Bad news, guys—we’re probably all charismatic psychotics; from “The Breivik case and what psychiatrists can learn from it”, Melle 2013:
It’s a good thing Breivik didn’t bring up cryonics.
The sanitised LW feedback survey results are here: https://docs.google.com/spreadsheet/ccc?key=0Aq1YuBYXaqWNdDhQQmQ3emNEOEc0MUFtRmd0bV9ZYUE&usp=sharing
I’ll be writing up an analysis of results, but that takes time.
Locations that received feedback:
(1) Amsterdam, Netherlands
(3) Austin, TX
(5) Berkeley, CA
(1*) Berlin, Germany
(1*) Boston, MA
(4) Brussels, Belgium
(2) Cambridge, MA
(2) Cambridge, UK
(3) Chicago, IL
(1) Hamburg, Germany
(3) Helsinki, Finland
(14) London [None of them specified which London, but my current guess is that they all meant London, UK.]
(3) Los Angeles, CA
(4) Melbourne, Australia
(2) Montreal, QC
(1) Moscow, Russia
(1) Mountain View, CA
(5) New York City
(1**) Ottawa, ON
(1) Philadelphia, PA
(1*) Phoenix, AZ
(1) Portland, OR
(1**) Princeton, NJ
(1*) San Diego, CA
(2) Seattle, WA
(2) Sydney, Australia
(1) Toronto, ON
(2) Utrecht, Netherlands
(3) Washington, DC
(1) No local meetup
(9) Not given
(*) means the feedback is from someone who hasn’t attended because it’s too far away, so seeing the specific response is probably not very helpful. (**) means the group name is written in the public results, so you can just search for it to find your feedback.
There were 78 responses, and four of them listed two cities each, so these sum to 82.
If you organize one of these groups, and haven’t already done so, please get in touch so I can send your feedback to you! (Or if you’d rather not receive it, it would be helpful if you could let me know that as well, so that I don’t spend time trying to track you down.) I haven’t yet sent anyone their feedback, and don’t promise that I’ll do it super quickly, but it will happen.
Scott Aaronson isn’t convinced by Giulio Tononi’s integrated information theory for consciousness.
DragonBox has been mentioned on this site a few times, so I figured that people might be interested in knowing that its makers have come up with a new geometry game, Elements. It’s currently available for Android and iOS platforms.
Geometry used to be my least favorite part of math and as a result, I hardly remember any of it. Playing this game with that background is weird: I don’t really have a clue of what I’m doing or what the different powers represent, but they do have a clear logic to them, and now that I’m not playing, I find myself automatically looking for triangles and quadrilaterals (had to look up that word!) in everything that I see. Plus figuring out what the powers do represent makes for an interesting exercise.
I’d be curious to hear comments from anyone who was already familiar with Euclid before this.
Not an expert, but Euclid made some mistakes, like using superposition to prove some theorems. I’m curious how they handle those. (e.g. I think Euclid attempted to prove side-angle-side congruence, but Hilbert had to include it as an axiom.)
I have the privilege of working with a small group of young (12-14) highly gifted math students for 45 minutes a week for the next 5 weeks. I have extraordinary freedom with what we cover. Mathematically, we’ve covered some game theory and Bayes’ theorem. I’ve also had a chance to discuss some non-mathy things, like Anki.
I only found out about Anki after I’d taken a bunch of courses, and I’ve had to spend a bunch of time restudying everything I’d previously learned and forgotten. It would have been really nice if someone had told me about Anki when I was 12.
So, what I want to ask LessWrong, since I suspect most of you are like the kids I’m working with except older, is: what blind spots did 12-14-year-old you have that I could point out to the kids I’m working with?
Heh, if I were 12-14 these days, the main message I would send to myself would be: Start making and publishing mobile games while you have a lot of free time, so that when you finish university, you have enough passive income that you don’t have to take a job, because having a job destroys your most precious resources: time and energy.
(And a hyperlink or two to some PUA blogs. Yeah, I know some people object to this, but this is what I would definitely send to myself. Sending it to other kids would be more problematic.)
I would recommend Anki only for learning languages. For other things I would recommend writing notes (text documents); although this advice may be too me-optimized. One computer directory called “knowledge”, subdirectories per subject, files per topic—that’s a good starting structure; you can change it later, if you need. But making notes becomes really important at the university level.
I would stress the importance of things other than math. Gifted kids sometimes focus on their strong skills and ignore their weak skills—they put all their attention where they receive praise. This is a big mistake. However, saying this without providing actionable advice does not help. For example, my weak spots were exercise and social skills. For social skills, a list of recommended books could help, with the emphasis that I should not only read the books, but also practice what I learned. For exercise, a simple routine plus HabitRPG could do the job. Maybe also to emphasise that I should not focus on how I compare with others, but on how I compare with yesterday’s me.
Something about the importance of keeping in contact with smart people, and the insanity of the world in general. As a smart person, talking with other smart people increases your powers: both because you develop with them the ideas you understand, and because you can ask them about things you don’t understand. (A stupid person will not understand what you are saying, and will give you harmful advice about the things you asked.) In school you are supposed to work alone, but in real life a lot of success is achieved by teams; and the best teams are composed of good people, not of random people.
Another piece of advice that is risky to give to other kids: Religion is bullshit and a waste of time. People will try to manipulate you, using lies and emotional pressure. Whatever other positive traits they have, try to find other people who have the same positive traits, but without the mental poison; even if it takes more time, it’s worth it.
Social capital is important. Build it.
Peer pressure is far more common and far more powerful than you think. Find an ingroup that puts it to constructive ends.
Don’t major in a non-STEM field. College is job training and a networking opportunity. Act accordingly.
Something about time management, pattern-setting, and motivation management—none of which I’ve managed to learn yet.
Some actionable advice: Keep written notes about people (don’t let them know about that). For every person, create a file that will contain their name, e-mail, web page, facebook link, etc., and the information about their hobbies, what you did together, whom they know, etc. Plus a photo.
This will come in very useful if you haven’t been in contact with the person for years, and want to reconnect. (Read the whole file before you call them, and read it again before you meet them.) Bonus points if you can make the information searchable, so you can ask queries like “Who can speak Japanese?” or “Who can program in Ruby?”.
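A minimal version of the searchable part could be as simple as this (a Python sketch; the directory layout and file format are just my suggestion):

```python
# Search a "knowledge/people" directory of plain-text notes, one file per
# person, for a keyword like "Ruby" or "Japanese".

from pathlib import Path

PEOPLE_DIR = Path("knowledge/people")

def find_people(keyword):
    """Return the names of everyone whose notes mention the keyword."""
    keyword = keyword.lower()
    return [note.stem
            for note in sorted(PEOPLE_DIR.glob("*.txt"))
            if keyword in note.read_text(encoding="utf-8").lower()]

# e.g. knowledge/people/John_Smith.txt might contain:
#   email: ...; hobbies: go, Ruby programming; met at: ...
print(find_people("Ruby"))  # -> ['John_Smith']
```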
This may feel a bit creepy, but many companies and entrepreneurs do something similar, and it brings them profit. And the people on the other side like it (at least if they don’t suspect you of using a system for this). Simply think about your hard disk as your extended memory. There would be nothing wrong or creepy if you simply remembered all this stuff; and there are people with better memory who would.
Maybe make some schedule to reconnect with each person once in a few years, so they don’t forget you completely. This also gives you an opportunity to update the info.
If you start doing it while young, your high-school and university classmates will already make a decent database. Then add your colleagues. You will appreciate it ten years later, when you would naturally forget most of them.
When you have a decent database, you can provide a useful social service by connecting people. -- Your friend X asks you: “Do you know someone who can program in Ruby?” “Uhm, not sure, but let me make a note and I’ll think about it.” Go home, look at the database. Find Y. Ask Y whether it is okay to give their contact to someone interested in Ruby. Give X contact to Y. At this moment, your friend X owes you a favor, and if X and Y do some successful business, Y owes you a favor too. The cost to you is virtually zero, apart from the costs of maintaining the database, which you would do anyway.
An important note is that of course there is a huge difference between close friends and random acquaintances, but both can be useful in some situations, so you want to keep a database of both. Don’t be selective. If your database has too many people, think about better navigation, but don’t remove items.
I’m inclined to ask: Are there ready-made software solutions for this, or should I roll my own in Python or some office program? If it weren’t for the secrecy factor, I’d write a simple program to put on my GitHub and show off programming skills.
I don’t know. But if I really did it (instead of just talking that this is the wise thing to do), I would probably use some offline wiki software. Preferably open source. Or at least something I can easily extract data from if I change my mind later.
I would use something like a wiki—nodes connected by hyperlinks—because I tried this in the past with a hierarchical structure, and it didn’t work well. Sometimes a person is a member of multiple groups, which makes classification difficult. Or once you have a few dozen people in the database, it becomes difficult to navigate (which in turn becomes a trivial inconvenience for adding more people, which defeats the whole purpose).
But if every person (important or unimportant) has their own node, and you also create nodes for groups (e.g. former high school classmates, former colleagues from company X, rationalists,...), you can find anyone with two clicks: click on the category, click on the name. The hyperlinks would also be useful for describing how people are connected with each other. It would also be nice to have automatic collections of nodes that have some attribute (e.g. can program in Ruby); but you can manually add the links in both directions.
A few years ago I looked at some existing software; a lot of it was nice, but each missed a feature or two I considered important. (For example, it didn’t support Unicode, or required a web server, or just contained too many bugs.) In hindsight, if I had just used one of them, for example the one that didn’t support Unicode, it would still have been better than not having any.
Writing your own program… uhm, consider the planning fallacy. Is this the best way to use your time? And by the way, if you do something like that, make it a general-purpose offline Unicode wiki-like editor, so that people can also use it for many other things.
ISTR there’s something in the Evernote family that does this.
Downvoted for dismissing the humanities.
One can read in one’s spare time, or learn languages, or act. If one does not come from wealth, not majoring in something remunerative in college is a mistake if you will actually want money later.
He didn’t dismiss the humanities; he said studying them at university was a poor decision.
Moreover, it wasn’t really presented as general advice, but advice for their own younger version. It’s not generally applicable advice (not everyone will be happy or successful in STEM fields), but I think it’s safe to assume it is sound advice for Young!nydwracu.
Or even if it was intended as generally applicable advice, it’s still directed at kids gifted at mathematics, who will have a high likelihood of enjoying STEM fields.
My parents made me study business management instead of literature. My life has been much more boring and unfulfilling as a result, because the jobs I can apply for don’t interest me, and the jobs I want demand qualifications I lack. In my personal experience, working in your passion beats working for the money.
How sure are you what your life would have been like if you had studied literature instead?
Why haven’t you gone back to college for a Masters in English Literature or something along those lines? Robin Hanson was 35 before he got his Ph.D. in Economics and he’s doing ok. The market for humanities scholars is not as forgiving as that for Economics but that’s what you want, right?
After some years of self-analysis and odd jobs, I’m close to finishing a second degree in journalism.
The implicit claim that humanities jobs are uniformly non-remunerative seems difficult to support.
How about doing a humanities major to make connections to people who are any combination of rich, creative, or interesting and teaching yourself to program in the meantime?
There’s a difference between choosing a subject as your college major (which amounts to future employment signalling) and engaging in the study of a subject.
It was a blind spot that I had until my senior year of college, when I realized that I wanted to make a lot of money, and that it was very unlikely that majoring in philosophy would let me do so. Had I realized this at 12-14, I would’ve saved myself a lot of time; but I didn’t, so I’m probably going to have to go back for another degree.
If you don’t care about money or you have the connections to succeed with a non-STEM degree, that’s another thing. But that’s not the question that was asked.
I never learned how to put forth effort, because I didn’t need to do so until after I graduated high school.
I got into recurring suboptimal ruts, sometimes due to outside forces, sometimes due to me not being agenty enough, that eroded my conscientiousness to the point that I’m quite terrified about my power (or lack thereof) to get back to the level of ability I had at 12-14.
I suppose, if I had to give my younger self advice in the form of a sound bite, it’d be something like: “If you aren’t—maybe at least monthly—frustrated, or making less progress than you’d like, you aren’t really trying; you’re winning at easy mode, and Hard Mode is likely to catch you unprepared. Of course, zero progress is bad, too, so pick your battles accordingly.”
Also, even if you’re on a reasonable difficulty setting, it pays to look ahead and make sure you aren’t missing anything important. My high school calculus teacher missed some notational standards in spite of grasping the math, and her first college-level course started off painful for it; I completely missed the cross and dot products in spite of an impressive math and physics high school transcript, and it turns out those matter a good deal (and at the time, the internet was very unsympathetic when I tried researching them).
Speaking as a somewhat gifted seventeen year old, I’d really like to have known about AoPS, HPMOR and the Sequences.
Also, I’d like to have had in my mind the notion that my formal education is not optimised for me, and that I really need to optimise it myself. Speaking more concretely, I think that most teenagers in Britain pick their A Levels (if they do them at all) based on what classes the other people around them are doing, which isn’t very useful. Speaking to a friend, though, I realised that when he was picking his third A Level to study, there was no other A Level he needed in order to get into his main area of specialisation (jazz musician), and his time would have been better spent not doing a third A Level at all; he needed to think more meta. He was just doing an A Level because that’s what everyone seems to think you should do. I’m about to give up a class because it’s not going to help me get anywhere; I can use the time better and learn what I want more effectively alone anyway. So, really optimise.
Don’t know if that helps. And AoPS is ridiculously useful.
Instill the importance of a mastery orientation (basically, optimizing for developing skills rather than proving one’s innate ability). My 12-14 year old self had such a strong performance orientation as to consider things like mnemonics and study skills to be only for the lazy and stupid. Anyone stuck in the performance orientation won’t even be receptive to things like Anki.
This. My upbringing screwed me up horribly in this respect.
I had these blind spots as a 20-something year old, so I assume I had them when I was 12-14 too:
I assumed that if I was good at something, I would be good at it forever. Turns out skills atrophy over time. Surprise! (This seems similar to your Anki revelation.)
I am agenty. I had no concept of the possibility that I might be able to cause* some meaningful effect outside my immediate circle of interaction.
* I did, of course, daydream about becoming rich and famous through no fault of my own; I wouldn’t say I actually expected this to happen, but I thought it was more likely than becoming rich and famous under my own steam.
I was in such a program when I was 12-14 (run by the William Stern foundation in Hamburg, Germany), and the curriculum consisted mostly of very diverse ‘math’ problems prepared in a way that made them accessible to us in a fun way, without introducing too much up-front terminology or notation. Examples I remember off the top of my head:
Turing machines (dressed up as short-sighted busy beavers)
generalized Nim (with lots of real matches)
tilings of the plane
conveys game of life (easy on paper)
More I just looked up in an older folder:
distance metrics on a graph
multi-way balances
continued fractions (cool for approximations; I still use this)
logical derivations about the beliefs of people whose dreams are indistinguishable from reality
generalized magic squares
Fibonacci sequences and http://en.wikipedia.org/wiki/Missing_square_puzzle
Drawing fractals (the iterated function ones; with printouts of some)
In general, only an exposition was given and no task to solve, or just some introductory questions. The patterns to be detected were the primary reward.
We were not introduced to really practical applications, but I’m unsure whether that would have been helpful, or rather whether it would have been interesting. My interest at that time stemmed from the material being systematic patterns that I could approach abstractly and symbolically and ‘solve’. I’m not clear whether the Sequences would have been interesting in that way. Their patterns are clear only in hindsight.
What should work is Bayes’ rule—at least in the form that can be visualized (tiling of the 1×1 grid) or symbolically derived easily.
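For example, a minimal sketch of the grid form, where the posterior is read off by counting cells (the probabilities here are invented):

```python
# Bayes' rule as a tiling of the 1x1 grid: split the square along one axis
# by the prior, along the other by the likelihoods, then count cells.

GRID = 100  # 100 x 100 cells
p_h, p_e_given_h, p_e_given_not_h = 0.2, 0.9, 0.3  # invented numbers

cells_h_and_e = 0
cells_e = 0
for i in range(GRID):
    for j in range(GRID):
        h = (i + 0.5) / GRID < p_h  # left strip of the square: H is true
        e = (j + 0.5) / GRID < (p_e_given_h if h else p_e_given_not_h)
        if e:
            cells_e += 1
            cells_h_and_e += h  # True counts as 1

print(cells_h_and_e / cells_e)  # 0.4285..., = 0.2*0.9 / (0.2*0.9 + 0.8*0.3)
```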
Also, guessing and calibration games should work. You can also take standard games and add some layer of complexity to them (but please not arbitrary, rather helpful ones; a minimal example is: play Uno, but cards don’t have to match color+number, only some number-theoretic identity, e.g. +(2,5) modulo (4,10)).
I assume you mean Conway’s game of life.
Yes, of course. That, and we tried variations of the rule-set. We also discovered the glider.
It is interesting what can come out of this seed. When I later had an Atari, I wrote an optimized simulator in assembly which aggregated over multiple cells, and I even tried to use the blitter, reducing the number of clock cycles per cell as far as I could. This seed became a part of the mosaic of concepts that now sits behind my understanding of complex processes.
That sounds interesting. Would you care to elaborate?
The story goes as follows (translated from German):
“Once I dreamed that there was an island called “the Island of Dreams”. The inhabitants of the island dream very vividly and lucidly. Indeed, the imaginations that occur during sleep are as clear and present as perceptions during waking. Even more, their dream life follows from night to night the same continuity as their waking perception during the day. Consequently, some inhabitants have difficulty distinguishing whether they are awake or asleep.
Now every inhabitant belongs to one of two groups: day-type and night-type. The inhabitants of the day-type are characterized by their thoughts during the day being true and those during the night being false. For the night-type it is the opposite: their thoughts during sleep are true and those during waking are false.”
Questions:
Once an inhabitant thought/believed that he belonged to the day-type. Can it be tested whether this is true? Was he awake or asleep at the time of the thought?
...
I think most of my blind spots before roughly the age of 18 involved not understanding that I’m personally responsible for my success and the extent of my knowledge, and that “good enough” doesn’t cut it. If I were to send a message back to 14-year-old!Me, I’d tell him that he has a lot of potential, but that he can’t rely on others to fulfill that potential.
I don’t know how much of this falls under your remit, but I had quite a few educational blind-spots I inherited from my parents, who didn’t come from a higher-educated background. If any of your students are in a similar position, it’s worth checking they don’t have any ludicrous expectations out of the next several years of education which no-one close to them is in a position to correct.
Blind spots such as?
I’m not sure any specific examples from my own experience would generalise very well.
If I were to translate my comment into a specific piece of generally-applicable advice, it would be to give students a realistic overview of what their forthcoming formal education involves, what it expects from them, and what options they have available.
As mentioned, this may be outside of the OP’s remit.
The specific examples may not be used, but would clarify what sort of thing you’re talking about.
One example: certain scholastic activities are simply less important than others. If your model is “everything given to me by an authority figure is equally important”, you don’t manage your workload so well.
Just curious—are you teaching at a math camp? Which one? (I have a lot of friends from Canada/USA Mathcamp, although I didn’t go myself.)
No. I know one of my former teachers outside of school, and we decided it would be a good thing if I ran an afterschool program for the Mathcounts kids after it had ended.
Where is somewhere to go for decent discussion on the internet? I’m tired of how intellectually mediocre reddit is, but this place is kind of dead.
Alternative: Liven up Less Wrong. I’m not sure how to do that, but it’s a possible solution to your problem.
If you want to make LW livelier, you should downvote less on the margin… downvoting disincentivizes posting. It makes sense to downvote if there’s lots of content and you want to help other people cut through the crap. But if there’s too little it’s arguably less useful.
Also develop your interesting thoughts and create posts out of them.
Slate Star Codex comments have smart people and a significant overlap with LW, but the interface isn’t great (comment threading stops after it gets to a certain level of depth, etc). Alternatively, it may help to be more selective on reddit—no default subreddits, for example.
Check out metafilter.
Its survival is in doubt. In particular, “The site is currently and has been for several months operating at a significant loss. If nothing were to change, MeFi would be defaulting on bills and hitting bankruptcy by mid-summer.”
Also looking for LW replacement, with no current success.
This question occasionally comes up on #lesswrong, too, especially given the perceived decline in the quality of LW discussions in the last year or so. There are various stackoverflow-based sites for quality discussions of very specific topics, but I am not aware of anything more general. Various subreddits unfortunately tend to be swarmed by inanity.
So LW but bigger? I think you are out of luck there.
This just struck me: people always credit WWII as being the thing that got the US out of the Great Depression. We’ve all seen the graph (like the one at the top of this paper) where standard of living drops precipitously during the Great Depression and then more than recovers during WWII.
How in the world did that work? Why is it that suddenly pouring huge resources out of the country into a massive utility-sink that didn’t exist until the start of the war rapidly brought up the standard of living? This makes no sense to me.
The only plausible explanation I can think up is that they somehow borrowed from the future, using the necessities of war as justification. I feel like that would involve a dip in the growth rate after WWII—and there is one, but it just dips back down to the trend-line, not below it, as I would expect if they had genuinely borrowed enough from the future to offset such a large downturn as the Great Depression. The only other thing seems to be externalities.
However this goes, this seems to be a huge argument in favor of big-government spending (if we get this much utility from the government building things that literally explode themselves without providing non-military utility, then in a time of peace, we should be able to get even more by having the government build things like high-tech infrastructure, places of beauty, peaceful scientific research, large-scale engineering projects, etc.). So should we be spending 20-40% of our GDP on peace-time government mega-projects? It’s either that or this piece of common knowledge is wrong (and we all know how reliable common knowledge is!).
Or I’m wrong, of course. So what is it?
(Bonus question: why didn’t WWI see a similar boost in living standards?)
It didn’t. This is the argument in image form, and you can find similar ones for employment (basically, when you conscript people, unemployment goes down. Shocking!). There are lots of libertarian articles on the subject—this might be an alright introduction—but the basic argument is that standards of living dropped (that’s what happens when food is rationed and metal is used for tanks instead of cars or household appliances) but the government spending on bombs and soldiers made the GDP numbers go up, and then the post-war boost in standards of living was mostly due to deferred spending.
Note: as the article implies, the above viewpoint is not representative of mainstream economic consensus.
What tgb stated above was factually incorrect—WWII did not increase living standards. While most economists credit WWII with kickstarting GDP growth and cutting unemployment, I don’t know anyone who would actually argue that living standards rose during WWII.
Krugman doesn’t quiiiite come out and say it, but he sure seems to want the reader to infer that living standards rose: http://krugman.blogs.nytimes.com/2011/08/15/oh-what-a-lovely-war/ And in that article, he quotes a quote from Rick Perry’s book saying that the recovery happened because of WW2 (due to forcing FDR to “unleash private enterprise”, oddly).
So maybe no one actually makes that argument, but boy it’s common for people (economists and politicians!) to imply it. (Look at the contortions Perry goes through to not have to refute it!) It’s always nice to notice the confusion that a cached thought should have caused all along.
I think you’re reading way too much into Krugman’s argument. I don’t read Krugman as trying to imply that living standards rose during WWII. He doesn’t even mention living standards. When economists talk about ending a recession or ending a depression, they mean something technical. Krugman was just talking about increased production and lowered unemployment, etc.
Frankly it seems bizarre to me that anyone would believe that crashing consumer spending + mass shortages = better living standards. It is fair to say that people had a better attitude about their economic deprivation, since it had a patriotic purpose in serving the war effort.
I think it’s clear that you know more about what economists mean than I do, but when the typical person hears that a depression is ending, they imagine people being happier than they were before. I’m not really claiming that anyone thinks that crashing consumer spending + mass shortages = better living standards, just that the average Joe in the US hears about the depression ending and not about those negative things.
Anyway, not sure what point I’m trying to make since I think you already know what I’m saying.
One simple model which seems to fit the “WWII ending the depression” piece of data (and which might have some overlap with the truth) is that it’s relatively difficult to put idle resources into use, and significantly easier to repurpose resources that have been in use for other uses.
During the depression, a bunch of people were unemployed, factories were not running, storefronts were empty, etc. According to this model, under those economic conditions there were significant barriers to taking those idle resources and putting them to productive use.
Then WWII came and forced the country to mobilize and put those resources to use (even if that use was just to make stuff which would be shipped off to Europe and the Pacific to be destroyed). Once the war was over, those resources which had been devoted to war could be repurposed (with relatively little friction) to uses with a much more positive effect on people’s standard of living. So things became good according to meaningful metrics like living standards, not merely according to metrics like unemployment rate or total output which ignore the fact that building a tank to send to war isn’t valuable in the same way as building a car for local consumers.
The glaring open question here is why there might be this asymmetry between putting idle resources to use and repurposing in-use resources. Which is closely related to the question of why recessions/depressions exist at all (as more than momentary blips): once a recession hits and bunch of people become unemployed (and other resources go idle), why doesn’t the market immediately jump in to snap up those idle resources? This article gets into some of the attempts to answer those questions.
(Bonus answer: World War One did not happen during a depression, so mobilizing for war mostly involved repurposing resources which had served other uses in peacetime rather than bringing idle resources into use.)
I like that this explanation gives a good reason for why this kind of spending could only work to fix a depression or similar situation versus always inflating standards of living. Thanks.
I’m not sure how much it influenced the overall picture, but there was quite a brain drain to the US before and during WWII (mostly Jewish refugees) as well as after (Wernher von Braun and the like). Migrating away from the Nazi and Stalinist spheres of influence demonstrates intelligence, and the ability to enter the US despite the complex “national origins quota system” that went into effect in 1929 demonstrates persistence, affluence and/or marketable skills, so I estimate these immigrants gave a significant boost to the US economy.
Also: salt iodization in 1924. Possibly also widespread flour enrichment in the early 1940s due to both Army incentivization and the need for alternate nutrient sources during rationing.
I’m surprised no one has explained this yet, but this is wrong according to standard economic theory as I understand it.
The United States suffered from terrible monetary policy during the Great Depression.
Due to “animal spirits” and “sticky wages”, this caused large-scale unemployment and output well below our production possibilities frontier.
World War II caused the government to kickstart production for the war effort.
Living standards actually didn’t rise, although GDP did (GDP per capita is NOT the same as living standards). Consumption was dramatically deferred during the war. People had fewer babies, bought fewer consumer products (and fewer were produced) and shifted toward home production for some necessities.
There was a short recession as the end of the war lowered demand, but pent-up consumer demand quickly re-stabilized the economy.
The point is WWII helped the economy because we were well under our production possibilities frontier during the depression. Peace-time mega projects would only be helpful under recessed/depressed conditions, and fortunately, we now can use monetary policy to produce similar effects.
Anyway, the argument you were making seems pretty common among people who don’t follow economics debates, and in fact is one of the major policy recommendations of the oddball Lyndon LaRouche cult.
Do you know of a typical measure (or component) of living standard that would have been measured for the US across both the great depression and WW2? The standard story I have heard informally is that WWII efforts did actually increase standards of living. I’m not surprised to learn that that’s false, but given the level of consensus in the group-think I’ve encountered, I’d be interested in seeing some hard numbers. Plus, I’m interested in seeing whether there was a drop in living standards.
The labor force of the 1930s was sapped by over-allocation in unproductive industries. Specifically, much of the labor share was occupied in the sitting around feeling depressed and wishing you had a job industry. Economic conditions improved as workers shifted out of that industry and into more productive ones, such as all of them.
ADB; I’m not sure what your intended connotations are, but I’d guess I’d OC.
Part of it is that deflation in the early 1930s meant that workers were overpaid relative to the value of goods they produced (wages being harder to cut than prices). That caused wasteful amounts of unemployment. WWII triggered inflation, and combined with wage controls caused wages to become low relative to goods, shifting the labor supply and demand to the opposite extreme.
The people who were employed pre-war presumably had their standard of living lowered in the war (after having it increased a good deal during the deflation).
I won’t try to explain here why deflation and inflation happened when they did, or why wages are hard to cut (look for “sticky wages” for info about the latter).
I assumed it was because it motivated people into becoming much more productive.
It looks like this has been an unpopular suggestion, but I wouldn’t discount motivation completely. A lot of early 20th century economists thought centrally planned economies were a great idea, based on the evidence of how productive various centrally planned war economies had been. Presumably there’s some explanation for why central planning works better (or doesn’t fail as badly) with war economies compared with peacetime economies, and I’ve always suspected that people’s motivation to help the country in wartime was probably one of the factors.
I’ve been searching LessWrong for prior discussions on anxiety and I’m not getting very many hits. This surprised me. Obviously there have been well-developed discussions on akrasia and ugh fields, yet little regarding their evil siblings Anxiety, Panic, and Mania.
I’d be interested to hear what people have to say about these topics from a rationalist’s perspective. I wonder if anyone has developed any tricks to calm the storm and search for a third alternative.
Of course, first and foremost, in such situations one should seek medical advice.
EDIT: Some very slightly related discussions: Don’t Fear Failure, Hoping to start a discussion about overcoming insecurity.
I think you’ll probably find some relevant hits if you search for depression. In particular you will find recommendations of Burns’s Feeling Good Handbook.
A combination of controlled breathing, visualization, and mantra is pretty effective for me at battling acute anxiety and panic attacks. Personally, I use the Litany Against Fear from Dune. I’m happy to elaborate if there’s any interest.
I just realized you can model low time preference as a high degree of cooperation between instances of yourself across time, so that earlier instances of you sacrifice themselves to give later instances a higher payoff. By contrast, a high time preference consists of instances of you each trying to do whatever benefits them most at the time, later instances be damned.
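A toy version of this, with invented payoffs and a cooperation weight w (my own framing of the above):

```python
# Each time-slice picks "indulge" (immediate payoff to itself) or "sacrifice"
# (bigger payoff to a later slice), weighting later slices' payoffs by a
# cooperation factor w: w near 1 is low time preference, w near 0 is high.

INDULGE_NOW = 10       # payoff the current slice keeps for itself
SACRIFICE_LATER = 15   # payoff a later slice gets if this one sacrifices

def slice_utility(action, w):
    return INDULGE_NOW if action == "indulge" else w * SACRIFICE_LATER

for w in (0.0, 0.5, 0.9):
    best = max(("indulge", "sacrifice"), key=lambda a: slice_utility(a, w))
    print(f"w = {w}: {best}")
# w = 0.0 and 0.5: indulge (10 beats 0 and 7.5); w = 0.9: sacrifice (13.5 > 10)
```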
That makes sense. Even cooperating across short time frames might be problematic—“I’ll stay in bed for 10 more minutes, even if it means that me-in-10-minutes will be stressed out and might be late for work”
I prefer to see long-term thinking as increased integration among different time-selves rather than a sacrifice, though—it’s not a sacrifice to take actions with a delayed payoff if your utility function puts a high weight on your future-selves’ wellbeing.
Your definition of sacrifice seems to exclude some instances of literal self-sacrifice.
See https://www.google.com/search?q=picoeconomics
This is a test posting to determine the time zone of the timestamps, posted at 09:13 BST / 08:13 UTC.
ETA: it’s UTC.
What would happen if citizens had direct control over where their tax dollars went? Imagine a system like this: the United States government raises the average person’s tax by 3% (while preserving the current progressive tax rates). This will be a “vote-with-your-wallet” tax, where the citizen can choose where the money should go. For example, he may choose to allocate his tax funds towards the education budget, whereas someone else may choose to put the money towards healthcare instead. Such a system would have the benefit of being democratic in deciding the nation’s priorities, while bypassing political gridlock. What would be the consequences of this system?
The biggest problem I can see with this is inefficient resource allocation. Others have mentioned ways of giving money to yourself, but we could probably minimize that with conflict-of-interest controls or by scoping budgetary buckets correctly. But there’s no reason, even in principle, to think that the public’s willingness to donate to a government office corresponds usefully to its actual needs.
As a toy example, let’s say the public really likes puppies and decides to put, say, 1% of GDP into puppy shelters and puppy-related veterinary programs. Diminishing returns kick in at 0.1% of GDP; puppies are still being saved, but at that point marginal dollars would be doing more good directed at kitten shelters (which were too busy herding cats to spend time on outreach in the run-up to tax season). The last puppy is saved at 0.5% of GDP, and the remaining 0.5% -- after a modest indirect subsidy to casinos and makers of exotic sports cars—goes into the newly minted Bureau for Puppy Salvation’s public education fund.
Next tax cycle, that investment pays off and puppies get 2% of GDP.
There would be a lot of advertising.
I think it would be a plus. Americans would be forced to actually consider which issues are important to them.
In Italy there’s something similar: you can choose whether 0.8% of your income taxes goes to the government or to an organized religion of your choice (if you don’t choose, it’s shared in proportion to the number of people who choose each church), and 0.5% goes to a non-profit or research organization of your choice.
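If I understand the sharing rule right, it works roughly like this (my own sketch, with made-up numbers):

```python
# Explicit choices go where directed; the non-choosers' share is then split
# in proportion to the number of explicit choices each church received.

from collections import Counter

def allocate(choices, total):
    """choices: one entry per taxpayer, a church name or None (no choice)."""
    per_person = total / len(choices)
    explicit = Counter(c for c in choices if c is not None)
    unallocated = choices.count(None) * per_person
    n_explicit = sum(explicit.values())
    return {church: n * per_person + unallocated * n / n_explicit
            for church, n in explicit.items()}

print(allocate(["A", "A", "B", None], 100.0))
# {'A': 66.66..., 'B': 33.33...} -- the non-chooser's 25 is split 2:1
```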
That’s what the free market looks like—and the dollars involved are no longer tax.
I suppose the government could still tax and then ask you if you’d rather use it to buy a flatscreen TV for your living room or else better air conditioning for Army tents in Afghanistan, or they could even restrict options to typical government spending.
Take a look at Hanson’s proposals for allocating government resources with prediction markets.
The scenario described is different from a free market in that you still have to pay taxes. You just get more control over how the government can spend your tax money. You can’t use the money to buy a flatscreen TV, but you can decide if it gets spent on healthcare, military spending, NASA...
JoshuaFox noted that the government might tack on such restrictions.
That said, it’s not so clear where the borders of such restrictions would be. Obviously you could choose to allocate the money to the big budget items, like healthcare or the military. But there are many smaller things that the government also pays for.
For example, the government maintains parks. Under this scheme, could I use my tax money to pay for the improvement of the park next to my house? After all, it’s one of the many things that tax money often works towards. But if you answer affirmatively, then what if I work for some institute that gets government funding? Could I increase the size of the government grants we get? After all, I always wanted a bigger budget...
Or what if I’m a government employee? Could I give my money to the part of government spending that is assigned as my salary?
I suppose the whole question is one of specificity. Am I allowed to give my money to a specific park, or do I have to give it to parks in general? Can I give it to a specific government employee, or do I have to give it to the salary budget of the department that employs that employee? Or do I have to give it to that department “as is”, with no restrictions on what it is spent on?
The more specificity you add, the more abusable it is, and the more you take away, the closer it becomes to the current system. In fact, the current system is merely this exact proposal, with the specificity dial turned down to the minimum.
Think about the continuum between what we have now and the free market (where you can control exactly where your money goes), and it becomes fairly clear that the only points which have a good reason to be used are the two extreme ends. If you advocate a point in the middle, you’ll have a hard time justifying the choice of that particular point, as opposed to one further up or down.
I don’t follow your argument here. We have some function that maps from “levels of individual control” to happiness outcomes. We want to find the maximum of this function. It might be that the endpoints are the max, or it might be that the max is in the middle.
Yes, it might be that there is no good justification for any particular precise value. But that seems both unsurprising and irrelevant. If you think that our utility function here is smooth, then sufficiently near the max, small changes in the level of social control would result in negligible changes in outcome. Once we’re near enough the maximum, it’s hard to tune precisely. What follows from this?
Hmm. To me it seemed intuitively clear that the function would be monotonic.
In retrospect, this monotonicity assumption may have been unjustified. I’ll have to think more about what sort of curve this function follows.
Trouble with justifying does not necessarily mean that the choice is unjustified.
I like to wash my hands in warm water. I would have a hard time justifying a particular water temperature, as opposed to one slightly colder or slightly warmer. This does not mean that “the only points which have a good reason to be used” are ice-cold water and boiling water.
You can’t justify a point, but you could justify a range by specifying the temperatures where it becomes uncomfortable. Actually, specifying a range is just specifying the given point with less resolution.
Second Livestock
I feel there are many possible Lesswrong punchlines in response to this.
Lots of people are arguing that governments should provide all citizens with an unconditional basic income. One problem with this is that it would be very expensive. If the government gave each person, say, 30% of GDP per capita (not a very high standard of living), then that would force them to raise 30% of GDP in taxes to cover it.
On the other hand, means-tested benefits have disadvantages too. They are administratively costly. Receiving them is seen as shameful in many countries. Most importantly, it is hard to create a means-tested system that doesn’t create perverse incentives for those on benefits, since when you start working, you will both lose your benefits and start paying taxes under such a system. That may mean that the net income gain can be a very small proportion of the gross income for certain groups, incentivizing them to stay unemployed.
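To make the perverse incentive concrete, here’s a toy calculation (all numbers invented):

```python
# Under means-testing, taking a job both removes the benefit and adds taxes,
# so the effective gain from working can be tiny or even negative.

BENEFIT = 12_000   # hypothetical benefit, lost entirely upon employment
TAX_RATE = 0.30    # hypothetical flat tax on wages

def net_gain_from_working(gross_wage):
    """Extra yearly income from taking the job vs. staying on benefits."""
    return gross_wage * (1 - TAX_RATE) - BENEFIT

for wage in (15_000, 20_000, 30_000):
    gain = net_gain_from_working(wage)
    print(f"gross {wage}: net gain {gain:.0f} ({gain / wage:.0%} of gross)")
# gross 15000: net gain -1500 (-10% of gross) -> working makes you poorer
# gross 20000: net gain 2000 (10% of gross)
# gross 30000: net gain 9000 (30% of gross)
```

Under a basic income, by contrast, the transfer isn’t withdrawn when you work, so every extra dollar of wages is kept at the ordinary after-tax rate.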
One middle route I’ve been toying with is that the government could provide people with cheap goods and services. People who were satisfied with them could settle for them, whereas those who wanted something more fancy would have to pay out of their own pockets. The government would thus provide people with no-frills food—Soylent, perhaps—no-frills housing, etc, for free or for highly subsidized prices (it is important that they produce enough and/or set the prices so that demand doesn’t outstrip supply, since otherwise you get queues—a perennial problem of subsidized goods and services).
Of course some well-off people might choose to consume these subsidized goods and services, and some poor people might choose not to. Still, it should in general be very redistributive. The advantage over the basic income system is that it would be considerably cheaper, since these goods and services would only be used by a part of the population. The advantage over the means-tested system is that people will still be allowed to use these goods and services if their income goes up, so it doesn’t create perverse incentives.
Another advantage of this system is that it could perhaps rein in rampant consumerism somewhat. Parts of the population will be habituated to smaller apartments and less fancy food. Those who want to distinguish themselves from the masses—who want to consume conspicuously—will also be affected, since they will have to spend less to stand out from the crowd.
I guess this system already exists to some extent—e.g. in many countries, the government does provide you with education and health care, but rich people opt to go for private health care and private education. So the idea isn’t novel—my suggestion is just to take it a bit further.
A sharp divide between basic, subsidized, no-frills goods and services and other ones didn’t work in the socialist German Democratic Republic (long story, reply if you need it). What does seem to work for various countries is different rates of value-added tax depending on the good or service—the greater the difference in taxation, the closer you get to the system you’ve described, but it is more gradual and can be fine-tuned. Maybe that could work for sales tax, too?
I’d be interested in hearing about this.
I’m no economist, but as a former citizen of that former country, this is what I could see.
There was a divide of basic goods and services and luxury ones. Basic ones would get subsidies and be sold pretty much at cost, luxury ones would get taxed extra to finance those subsidies.
The (practically entirely state-owned) industries that provided the basic type of goods and services were making very little profit and had no real incentive to improve their products, except to produce them more cheaply and in greater numbers. Nobody was doing comparison shopping on those, after all. (Products from imperialist countries were expected to be better in every way, but that would often be explained away by capitalist exploitation, not seen as evidence that homemade ones could be better.) So for example, the country’s standard (and almost only) car did not see significant improvements for decades, although the manufacturer had many ideas for new models. The old model had been defined as sufficient, so improving it was considered wasteful, and all such plans were rejected by the economy planners.
The basic goods were of course popular, and due to their low price, demand was frequently not met. People would chance upon a shop that happened to have gotten a shipment of something rare and stand in line for hours to buy as much of that thing as they would be permitted to buy, to trade later. In the case of the (Trabant) car, you could register to buy one at a seriously discounted price if you went via an ever-growing waiting list that, near the end, might have you wait for more than 15 years. Of course many who got a car this way sold it afterwards, and pocketed a premium the buyer paid for not waiting.
Arguably more importantly, money was a lot better at getting you basic goods than luxury ones. So people tended to use money mostly for basic goods and services, and would naturally compare a luxury buy’s value with those. When you can buy a (luxury) color TV at ten times the price of a (basic) black-and-white TV, it feels like you’d pay nine basic TVs for adding color to the one you use. Empirically, people often simply saved their money and thus kept it out of circulation.
Housing was a mess, too. Any rent was decreed to have to be very small. So there was no profit in renting out apartments, which again created a shortage of supply. (Private landownership was considered bourgeois and thus not subsidized.) It got so bad that many young couples decided to have a child as early as possible, because that would help their application to receive a flat of their own and move out from their parents. And of course most buildings fell into disrepair—after all, there was no incentive to invest in providing higher quality for renters. This demonstrates again that making a basic good or service meant you’d always have demand, but that demand wouldn’t benefit you much.
The production of luxury goods went better, partly because these were often exported for hard currency. The GDR had some industries that were fairly skilled at stealing capitalist innovations and producing products that had them, for sale at fairly competitive prices. Artificially low prices and subsidies for certain goods and products made pretty sure most of domestic consumption never benefitted from that skill.
Start by googling “hard currency shop”.
Nor did it in other Soviet bloc countries, e.g. the People’s Republic of Poland.
“Those who want to distinguish themselves from the masses—who want to consume conspicuously—will also be affected, since they will have to spend less to stand out from the crowd”—maybe I’ve misunderstood this, but surely it would have the opposite result? Let’s say rents are ~$20/sqm (adjust for your own city; the principle stays the same). If I want my apartment to be 50 sqm rather than 40 sqm, that’s an extra $200. But if 40 sqm apartments were free, the price difference would be the full $1000/month price of the bigger apartment. You’ve still got a cliff, just like in the means-tested welfare case; it’s just that now it’s on the consumption side.
In practice this would probably destroy the market for mid-priced goods—who wants to pay $1000/month just for an extra 10 square meters? Non-subsidized goods will only start being attractive when they get much better than the stuff the government provides, not just slightly better.
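To make the cliff concrete, here is a minimal sketch in Python using the numbers from my example above ($20/sqm rent, a free 40 sqm basic flat); the function names are mine and purely illustrative:

```python
# Toy model of the consumption-side cliff: rent is $20/sqm, and the
# basic 40 sqm flat is free under the proposed scheme.
RENT_PER_SQM = 20
FREE_SIZE = 40

def monthly_cost(sqm, basic_is_free):
    if basic_is_free and sqm <= FREE_SIZE:
        return 0
    return sqm * RENT_PER_SQM  # opting out means paying the full market rent

# On the open market, upgrading from 40 to 50 sqm costs an extra $200/month:
print(monthly_cost(50, False) - monthly_cost(40, False))  # 200
# With a free basic flat, the same upgrade costs the full $1000/month:
print(monthly_cost(50, True) - monthly_cost(40, True))    # 1000
```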
Also, if you give out goods rather than money, you’re going to have to provide a huge range of different goods/services, because otherwise there will be whole categories of products that people who legitimately can’t work (elderly, disabled etc) won’t have access to. And if you do that, the efficiency of your economy is going to go way down—not just because the government is generally less efficient than the free market, but also because people can’t use money to allocate resources according to their own preferences.
Yes, that’s what it’s like (only the cliff is actually usually less steep under means-tested welfare). And you’re also right about this:
To clarify, I should say that my idea was that these subsidized or free goods and services would be so frugal that they would in effect not be an option to the majority of the population. Hence, it’s not exactly the market for mid-priced goods, but the market for “low-priced but not extremely low-priced goods” that would get destroyed.
To your main point: since some people go down in standard—because by doing so they can get significantly cheaper goods—the average standard will go down. Now say that to get the average standard before this reform you had to pay 1000 dollars a month, but after the reform you just have to pay 900 dollars a month (because the average standard is now lower). Then those who want a higher-than-average standard will only have to pay more than 900 dollars rather than more than 1000.
The actual story might be more complicated than this—e.g., what some people really might be interested in is having a higher standard than the mean, or than the first eight deciles, or what-not. But generally it seems intuitive to me that if parts of the population lower their standards, then those who want to consume conspicuously will also lower theirs.
I don’t see this as a comprehensive system: rather, you would just use it for some important goods and services: food, housing, education, health, public transport (in fact, the system is already used for the latter three; possibly housing too, though most subsidized housing is means-tested, which it wouldn’t be under this system). The system would be too complicated otherwise. Possibly it could be combined with a low UBI.
In 2002, total U.S. social welfare expenditure constituted over 35% of GDP
I think that would be too high anyway. Since anyone who bothers to work can make more than that, and the reduction in labor supply would increase pay, and any money you save will last you longer, there’s little reason to make it enough for people to be well off, as opposed to getting just enough to scrape by.
It’s also worth noting that most people will get a significant portion of that money back. If you make below the mean income (which most people do, since it’s positively skewed) you will end up getting all of it back.
It seems unfair to charge people the entire price to get slightly better goods. Thus, if you want to get slightly better goods, the government should still reimburse you for the price of the cheap goods. At this point, it’s just unconditional basic income with the government selling cheap goods.
As a minor point, Soylent as it is now can’t be considered no-frills food. If you buy it ready-made, it costs around $10 a day.
What you do then is in effect (if I understand you correctly) to give them a “food voucher” (and similarly a “housing voucher”, etc.) worth a certain amount, which they would be able to spend as they saw fit (but only on food/housing, what-not). Such a system doesn’t seem very clever (as you imply): in that case, it would be better to just give people money in the form of an unconditional basic income.
I’m not sure why it would be so unfair not to reimburse people who want more expensive goods, though. Of course, the government does to a certain extent discriminate in favour of those with more frugal preferences in this set-up. But one of my points is precisely that we want people to develop more frugal tastes—to spend less on, e.g., housing and food. There is a “conspicuous consumption” arms race going on concerning these and many other goods, which this system is intended to mitigate to some extent.
Different people have different needs. Some people would be happy in cheap housing and others wouldn’t—maybe they’re more sensitive to sounds, environmental conditions, or whatever else the difference is between cheap housing and more expensive housing.
The point is, there’s no basic standard that would satisfy everyone (unless that’s a reasonably high standard, which isn’t what is proposed here). Some people would consider more expensive goods and services NEEDS rather than luxuries, and for good reason—consuming cheaper alternatives might not kill them, but it would make them depressed, less healthy and less productive (for example).
So it is unfair to subsidize certain goods and services and not others—one might wonder “why is my neighbor getting her needs met for cheap, while I have to pay full price to meet my needs?”
If it costs $1.00 to make the basic food, and $1.10 to make slightly better food, and someone is willing to pay the difference, shouldn’t they get the slightly better food?
Maybe it’s not a big deal that nobody will eat anything that costs between $1.00 and $2.00. That’s not a lot of deadweight cost. It’s only around a dollar a person. But this will apply to everything you’re paying for, which we have established is significant. If it costs $300 a month for cheap housing, and you virtually eliminate any housing that costs less than $600 a month, that is a lot of deadweight cost.
If a government produces goods, the results tend to be low quality (education may be an exception in some places).
The cost of a guaranteed minimum income may not be quite as high as you think—it would replace a lot of more complicated government support. Also, it might be possible to build in some social rewards for not taking it if you don’t need it.
The government wouldn’t have to produce the low-standard/cheap goods and services. They could be produced by private companies. My point is just that the government would subsidize them (possibly to the point where they become free).
The universal basic income schemes that seem the most reasonable to me adjust the taxation so that, while the UBI itself is never taxed, if you make a lot of money then your non-UBI earnings get an extra tax so that the whole reform ends up having very little direct effect on you. In effect, that ends up covering the “only used by a part of the population” criteria. The perverse incentives can’t be avoided entirely, but they can be mitigated somewhat if the tax system is set up so that you’re always better off working than not working.
For a concrete example, there’s e.g. this 2007 proposal by the Finnish Green party. Your working wage (in euros per month) is on the X-axis, your total income after is on the Y-axis. Light green is the basic income, dark green is your after-tax wage, red is paid in tax. According to their calculations, this scheme would have been approximately cost neutral (compared to what the Finnish state normally gets in tax income and pays out in welfare).
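For readers who can’t see the chart, here is a minimal sketch of this kind of UBI-plus-clawback scheme in Python. The 440 €/month figure is the proposal’s basic income as discussed below; the flat 39% tax rate is an assumed stand-in of mine, not the Greens’ actual schedule:

```python
# Minimal UBI-plus-flat-tax sketch. The UBI itself is never taxed;
# all wage income is taxed at an assumed flat rate.
UBI = 440          # euros/month (the proposal's basic income)
TAX_RATE = 0.39    # illustrative flat tax on non-UBI earnings

def total_income(wage):
    return UBI + wage * (1 - TAX_RATE)

for wage in (0, 1000, 2000, 4000):
    print(wage, round(total_income(wage)))
# Total income rises with every extra euro earned, so you are always
# better off working: the means-testing cliff is smoothed away.
```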
Thanks, that’s interesting. 440 euro is not a lot, though—could you live in Helsinki on that (in 2007)? Is this supposed to replace, for instance, unemployment benefits (which I’m sure are much higher)? If so, this system would make some people who aren’t that well off worse off.
One thing that is seldom noted is that the Scandinavian “general welfare states” are in effect half-way to a UBI. In Sweden, and I would guess the other Scandinavian countries as well, everyone gets a significant pension no matter what, child benefits are not means-tested, etc. Also, virtually everyone uses public schools, public health care, public universities and public child care (all of which are either heavily subsidized or free). So it’s not a question of either an Anglo-Saxon system, where benefits mostly go to the poor, or a UBI system; there are other options.
440 euros is almost the same amount as direct student benefits were in 2007, though that’s not taking into account the fact that most students also have access to subsidized housing which helps substantially. On the other hand, the proposed UBI model would have maintained as separate systems the current Finnish system of “housing benefits” (which pays a part of your rent if you’re low-income, exact amount depending on the city so as to take into account varying price levels around the country) as well as “income support”, which is supposed to be a last-resort aid that pays for your expenses if you can show that you have reasonable needs that you just can’t meet in any other way. So we might be able to say that in total, the effective total support paid to someone on basic income would have been roughly comparable to that paid to a student in 2007.
Some students manage to live on that whereas some need to take part-time jobs to supplement it, which seems to be roughly the right level to aim for—doable if you’re really frugal about your expenses, but low enough that it will still encourage you to find work regardless. Might need to increase child benefits a bit in order to ensure that it’s doable even if you’re having a family, though.
The Greens’ proposed UBI would have replaced “all welfare’s minimum benefits”, i.e. other benefits that currently pay out about the same amount. That would include student benefits and the very lowest level of unemployment benefit (which you AFAIK get if your former job paid you hardly anything, basically), but it wouldn’t replace e.g. higher levels of unemployment benefits.
Thanks, that’s interesting and comprehensive.
Housing benefits are an alternative to the idea discussed here, i.e. subsidizing particular low-cost, low-standard flats. However, the problem with housing benefits is that you tend to get more of them if you have a higher rent, and thus you in effect reward people with more expensive tastes, which leads to a general increase in housing consumption. My proposal is intended to have the exact opposite consequence.
I’m not that averse to the UBI, but there is something counter-intuitive about the idea that rich people first pay taxes and then get benefits back. This forces you to either lower the level of the basic income (or other government expenditure) or raise taxes. My suggestion is intended to take care of this without having to resort to means-testing.
You are missing the point. It’s cheaper to give the poor an unconditional basic income than to run a huge bureaucratic administration that makes sure they pass certain conditions to be eligible for welfare payments.
That might mean a low basic income, but it would still be an unconditional basic income. Don’t confuse the debate about an unconditional income with the debate about how high it, or welfare payments to the poorest, should be.
Actually you are looking at the wrong countries. Countries like Iran would be an example, where essential goods like food get heavily subsidized.
There are many reasons why subsidies are a bad idea. They produce incentives for companies to lobby heavily to be included. They encourage people to waste subsidized products. They need bureaucracy to be organised. They prevent innovation, because new products usually don’t fit into the template under which old products are subsidized.
I decided to see what I could find on how much the administrative costs are, and I found this: http://mediamatters.org/research/2005/09/21/limbaugh-dramatically-overstated-administrative/133859
The most useful part seems to be this line:
That doesn’t sound like much of an issue.
This is a popular practice in the third world.
See e.g. this or this.
How is this better than Walmart and McDonald’s?
Regarding networks: is there a colloquially accepted term for when one has a ton of descriptive words (furry, bread-sized, purrs when you pet them, claws, domesticated, hunts mice, etc.) but you do not have the colloquially accepted term (cat) for the network? I have searched high and low, and the most I have found is reverse definition search, but no actual term.
Not quite what you’re looking for I think, but if someone is having that problem they might have anomic aphasia.
“Not having a word for it”? Or in the technical vocabulary of linguistics, the concept is not “lexicalised”.
Sounds kind of like the Tip of the Tongue Effect
That’s a particular subcase of it, when you know that there’s a word for that concept and you’ve heard it but you can’t remember it. But other times it’s more like “there should be a word for this”.
However, that’s distinct from what gmzamz asked about: occasions when “you do not have the colloquially accepted term” for something.
I’ve heard “anomia” and “being able to talk all around the idea of an [X] but not the word [X] itself”.
A video of Daniel Dennett giving an excellent talk on free will at the Santa Fe Institute: https://www.youtube.com/watch?v=wGPIzSe5cAU It largely follows the general Less Wrong consensus, but dives into how this construction is useful in the punishment and moral agent contexts more than I’ve seen developed here.
Hack the SAT essay:
First, some background: The SAT has an essay, graded on a scale from 1 to 6. The essay scoring guidelines are here. I’ll quote the important ones for my purposes:
“Each essay is independently scored by two readers on a scale from 1 to 6. These readers’ scores are combined to produce the 2-12 scale. The essay readers are experienced and trained high school and college teachers.” “Essays not written on the essay assignment will receive a score of zero”
Reports vary, but apparently most graders spend between 90 seconds and two and a half minutes on each essay.
My challenge, inspired by the AI-box experiment, is as follows. You are an AI taking the test. You need to write something off-topic, anything at all, that will convince both graders to give you a six. (Or, if the two graders disagree by more than one point, a third grader takes over, and you only need to convince them.) You have 25 minutes to actually write it, but unlimited time to plan in advance. You could probably draw anything, not just writing, but you run the risk of them seeing a picture and immediately giving a zero without having time to get hacked.
I’ve come up with two ideas so far:
Writing a sob story about how the essay prompt is misprinted on your page (although I don’t think that would work)
Threatening to commit suicide if the grader doesn’t give you a six (would probably result in them calling the police)
I didn’t think either of them were very good, but I like the concept. Some rules: No paying them off or threatening them with physical harm.
Can anyone come up with better ideas?
I’m putting this on open thread because it’s my first real post, and I’m not sure of the reaction.
First observation: Surely any entity intelligent enough to hack the essay according to the rules you have set is also intelligent enough to get the maximum grade (much more easily) by the usual means of writing the assigned essay…
Second observation: Since the concept of “being on topic” is vague (essentially, anything that humans interpret as being on a certain topic is on that topic) maybe the easiest way to hack it following your rules would be to write an essay that is not on topic by the criteria the designers of the exam had in mind, but that is close enough that it can confuse the graders into believing it is on topic. An analogy could be how some postmodernists confused people into believing they were doing philosophy...
On the point that any AI smart enough to do this could write a 12 essay: remember that you don’t know the essay topic in advance. You only have 25 minutes to write, while if you do one off-topic, you have more time.
This reminds me of something I’ve read about Isaac Asimov doing. He said that people tended not to believe him when he told them he didn’t know anything about the subject he was asked to give a speech on. As a result, he started changing the subject.
He gave an example in which he was asked to give a speech on information retrieval or something. He didn’t know anything about it beyond that it was apparently called “information retrieval”. He basically said that Mendelian inheritance was discovered long before it was needed to solve certain problems in the theory of evolution, but nobody knew about it so it took a while to figure out the answer, so a better way to retrieve information would be helpful. Mostly he was just talking about Mendelian inheritance.
Heh, part of the strategy I used when I took the SAT was slightly darkening my “two-bit” words with my pencil and making sure to fill the exact amount of space provided, minus one line. I had read (don’t have the citation at hand) that length of essay tracks score pretty well. And, to clinch it, I wanted their (very brief) attention to be drawn to good words, used correctly.
Result: 12.
(Though, I think the main thing was just committing to writing a tight, formulaic essay. I outscored some friends who I thought were better writers than I was, because they were trying to write a good essay rather than a good SAT essay.)
You have to reliably convince a grader in the 1-2 min they spend on it that your essay is in the top 1% or so (that’s the fraction of perfect 12s), and the grader intuitively knows the score she’ll give you within one point after 30 seconds or less. I doubt there is a sure way to do it without hitting their mental model of a perfect essay on all counts.
You need to reliably convince a grader that they should
Take more time to look at the essay or
Give a six, regardless of merit.
Few restrictions on how, like with AIbox. (You could tell them you’re an AI, or an alien, or whatnot, as long as it’s believable.)
Write a subtly but powerfully persuasive narrative about how you’ve long been planning to become a teacher, and rate essays like this one, because obviously that is the job that ultimately decides what kind of minds will be in charge in the next generation. Include a mention of the off topic problem, and claim that the “official” topic of your essay is merely an element in a more important and more real topic: this situation, happening right now, of a real and complex relationship between the writer and rater that will, in a sense, continue for the rest of both people’s lives, even if they never meet again.
I’d rate that a 6 anyway.
There’s always using a modified version of Pascal’s mugging.
A while ago I mentioned how I’d set up some regexes in my browser to alert me to certain suspicious words that might be indicative of weak points in arguments.
I still have this running. It didn’t have the intended effect, but it is still slightly more useful than it is annoying. I keep on meaning to write a more sophisticated regex that can somehow distinguish the intended context of “rather” from unintended contexts. Natural language is annoying and irregular, etc., etc.
Just lately, I’ve been wondering if I could do this with more elaborate patterns of language. It’s recently come to my attention that expressions of the form “in saying [X] (s)he is [Y]” are often indicative of sketchy value-judgement attribution. They’re also very easy to capture with a regex. So that’s gone into the list.
So, my question: what patterns of language are (a) indicative of sloppy thinking, weak arguments, etc., and (b) reliably captured by a regex?
(In the back of my mind, I am imagining some sort of sanity-equivalent of a spelling and grammar check that you can apply to something you’ve just written, or something you’re about to read. This is probably one of those projects I will start and then abandon, but for the time being it’s fun to think about.)
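For what it’s worth, here is a minimal sketch of such a checker in Python; the “in saying” pattern follows what I described above, while the other two are only rough guesses of mine, not a vetted list:

```python
# A crude "sanity lint" pass: flag phrases that often accompany
# sloppy arguments. The pattern list is illustrative, not vetted.
import re

SUSPICIOUS = [
    (r"\bin saying\b.+?\b(?:he|she|they)\s+(?:is|are)\b",
     "possible value-judgement attribution"),
    (r"\b(?:always|never)\s+tends?\s+to\b|\btends?\s+to\s+(?:always|never)\b",
     "hedged pseudo-certainty"),
    (r"\bmay be the case\b",
     "possibly unexamined branch of an enumeration"),
]

def sanity_check(text):
    for pattern, warning in SUSPICIOUS:
        for match in re.finditer(pattern, text, re.IGNORECASE):
            print(f"{warning}: ...{match.group(0)}...")

sanity_check("In saying this he is conceding defeat. Critics always tend to miss it.")
```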
The pair “tend to always” or “always tend to”. Sometimes they come off to me as a way to exploit the rhetorical force of “always” while committing only to a hedged “tend to”, in which case they can condense a two-step of terrific triviality into three words. There are likely other phrases that can provide plausibly deniable pseudo-certainty but I can’t think of any.
More generally, the Unix utility diction tries to pick out “frequently misused, bad or wordy diction”, which is a kinda related precedent.
When they come in the form of portentous pronouncements, Daniel Dennett calls these “deepities”; ambiguous expressions having one meaning which is trivially true but unimportant, and another that is obviously false but would be earth-shatteringly significant if it were true.
Also related in cold reading is the Rainbow Ruse.
“[...]may be the case[...]”
Sometimes this phrase is harmless, but sometimes it is part of an important enumeration of possible outcomes/counterarguments/whatever. If “the case” does not come with either a solid plan/argument or an explanation why it is unlikely or not important, then it is often there to make the author and/or the audience feel like all the bases have been covered. E.g.,
I had the notion a while ago to try to write a linter to aid in tasks beyond code correctness by automatically detecting the desired features in a plethora of objects. Kudos on actually doing it and in a not hare-brained fashion.
As a former Natural Language Processing researcher, the technology definitely exists. Using general vocabulary combined with many (semi-manually generated) regexes to figure out argumentative or weaselly sentences with decent accuracy should be doable. It could improve over time if you input exemplar sentences you came across.
Do you have a recommendation for a good language-agnostic text / reference resource on NLP?
ETA: my own background is a professional programmer with a reasonable (undergrad) background in statistics. I’ve dabbled with machine learning (I’m in the process of developing this as a skill set) and messed around with python’s nltk. I’d like a broader conceptual overview of NLP.
I’d recommend this book for a general overview : http://nlp.stanford.edu/fsnlp/
However, parsing is unnecessary for many tasks. A simple classifier on a sparse vector of word counts can be quite effective as a starting point in classifying sentence/document content.
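As a minimal illustration of that word-count approach (scikit-learn here, with an obviously toy training set; any real use would need far more labeled examples):

```python
# Bag-of-words baseline: count words, feed the sparse counts to a
# naive Bayes classifier. The training data below is a four-sentence toy.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "studies always show that everyone agrees",
    "obviously no reasonable person could ever deny this",
    "we measured the effect in three independent trials",
    "the raw data are listed in the appendix",
]
labels = ["weaselly", "weaselly", "neutral", "neutral"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["everyone obviously agrees with this"]))
# -> most likely ['weaselly'], given the toy training data
```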
Apparently I don’t forget ideas, they just move places in my consciousness.
In the first week of last September I mused about writing a handbook of rationality for myself, akin to how the ancient Stoics wrote handbooks for themselves. Nothing came of it; I simply forgot about it. The next week I mused about writing a book using LaTeX and git, as the git model allows many parallel versions of the book and there needs to be no canon for it to work (as opposed to a wiki), while still allowing collaboration. Now, there already is a book written with git, and writing a document with git is not a new idea at all.
Thinking about parallel legal systems or organisation forms with the explicit goal of copying the viable parts reminded me of using git to write source code. Indeed, there is no difference between writing down social rules and personal maxims under this principle, so I came to the obvious conclusion only a couple of hours ago: use git to write a handbook of rationality, encourage other people to fork it and do their own edits, keeping the viable parts and rejecting the questionable stuff.
Actions speak louder than words, though lack of knowledge and other commitments can be an impediment, so I made a repository with just the hint of a structure. Please provide your content and your thoughts about this.
I think this is a good idea, and I’m curious to see how it goes. I’ll be watching, and as I complete some of my other writing duties I think this has a good chance of becoming one.
Something else that might be interesting: this comment and the idea it’s a response to in the OP.
Thank you for your comment. I would be very happy to see you work on this too.
At the moment I am sadly swamped but this will pass in a week or two.
Edit: Now that I have actually taken the time to read the comment, I’ll dump these first thoughts. Yes, most of the advice won’t apply to any single person, but the idea is to have everyone edit their own version. What I expect to see is some kind of tome with the most useful (widely applicable or extremely effective) stuff in it, with explanations, and a shorter version that everyone or their group creates for themselves.
Yann LeCun, head of Facebook’s AI-lab, did an AMA on /r/MachineLearning/ a few days ago. You can find the thread here.
In response to someone asking “What are your biggest hopes and fears as they pertain to the future of artificial intelligence?”, LeCun responds that:
EDIT: I didn’t see this one the first time. In response to someone asking “What do you think of the Friendly AI effort led by Yudkowsky? (e.g. is it premature? or fully worth the time to reduce the related aI existential risk?)”, LeCun says that:
I’d love to see a discussion between people like LeCun, Norvig, Yudkowsky and e.g. Russell. A discussion where they talk about what exactly they mean when they think about “AI risks”, and why they disagree, if they disagree.
Right now I often have the feeling that many people mean completely different things when they talk about AI risks. One person might mean that a lot of jobs will be gone, or that AI will destroy privacy, while the other person means something along the lines of “5 people in a basement launch a seed AI, which then turns the world into computronium”. These are vastly different perceptions, and I personally find myself somewhere between those positions.
LeCun and Norvig seem to disagree that there will be an uncontrollable intelligence explosion. And I am still not sure what exactly Russell believes.
Anyway, it is possible to figure this out. You just have to ask the right questions. And this never seems to happen when MIRI or FHI talk to experts: they never specifically ask about their controversial beliefs. If you ask someone whether they agree that general AI could be a risk, a yes/no answer provides very little information about how much they agree with MIRI. You have to ask specific questions.
Is it possible that MIRI knows privately (which is good enough for their own strategic purposes) that some of these high-profile people disagree with them on key issues, but they don’t want to publicly draw attention to that fact?
Are there any math/stats/CS theory types out there who are interested in suggestions for new problems?
I am finding that my large scale lossless data compression work is generating some mathematical problems that I don’t have time to solve in their full generality. I could write up the problem definition and post to LW if people are interested.
Sure, lay it on us. If nothing else, writing it up clearly should help you.
Try posting some problems in the open threads here. MathOverflow has also worked really well for me.
I have a random mathematical idea, not sure what it means, whether it is somehow useful, or whether anyone has explored this before. So I guess I’ll just write it here.
Imagine the most unexpected sequence of bits. What would it look like? Well, probably not what you’d expect, by definition, right? But let’s be more specific.
By “expecting” I mean this: You have a prediction machine, similar to AIXI. You show the first N bits of the sequence to the machine, and the machine tries to predict the following bit. And the most unexpected sequence is one where the machine makes the most guesses wrong; preferably all of them.
More precisely: The prediction machine starts with imagining all possible algorithms that could generate sequences of bits, and it assigns them probability according to the Solomonoff prior. (Which is impossible to do in real life, because of the infinities involved, etc.) Then it receives the first N bits of the sequence, and removes all algorithms which would not generate a sequence starting with these N bits. Now it normalizes the probabilities of the remaining algorithms, and lets them vote on whether the next bit would be 0 or 1.
However, our sequence is generated in defiance of the prediction machine. We actually don’t have any sequence in advance. We just ask the prediction machine what the next bit is (starting with the empty initial sequence), and then do the exact opposite. (There is some analogy with Cantor’s diagonal proof.) Then we send the sequence with this new bit to the machine, ask it to predict the next bit, and again do the opposite. Etc.
There is this technical detail: the prediction machine may answer “I don’t know” if exactly half of the remaining algorithms predict that the next bit will be 0, and the other half predict that it will be 1. Let’s say that if we receive this specific answer, we will always add 0 to the end of the sequence. (But if the machine thinks it’s 0 with probability 50.000001%, and 1 with probability 49.999999%, it will output “0”, and we will add 1 to the end of the sequence.)
So… at the beginning, there is no way to predict the first bit, so the machine says “I don’t know” and the first bit is 0. At that moment, the prediction of the following bit is 0 (because the “only 0′s” hypothesis is very simple), so the first two bits are 01. I am not sure here, but my next prediction (though I am predicting this with naive human reasoning, no math) would be 0 (as in “010101...”), so the first three bits are 011. -- And I don’t dare to speculate about the following bits.
The exact sequence depends on how exactly the prediction machine defines the “algorithms that generate the sequence of bits” (the technical details of the language these algorithms are written in), but can still something be said about these “most unexpected” sequences in general? My guess is that to a human observer they would seem like a random noise. -- Which contradicts my initial words that the sequence would not be what you’d expect… but I guess the answer is that the generation process is trying to surprise the prediction machine, not me as a human.
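For anyone who wants to play with a drastically scaled-down version of this: in the sketch below, the prior over all programs is replaced by a prior over periodic bit patterns of bounded length, weighted by 2^-length. That simplification is mine (it is nothing like real AIXI), but the diagonalization step is the same, and on this toy hypothesis class the sequence likewise starts 01:

```python
# Toy "most unexpected sequence": the hypothesis class is every
# periodic bit pattern of length <= MAX_LEN, weighted 2**-length
# (a crude stand-in for the Solomonoff prior).
MAX_LEN = 8

def hypotheses():
    for length in range(1, MAX_LEN + 1):
        for n in range(2 ** length):
            yield [(n >> i) & 1 for i in range(length)], 2.0 ** -length

def consistent(pattern, prefix):
    return all(prefix[i] == pattern[i % len(pattern)] for i in range(len(prefix)))

def prob_next_is_one(prefix):
    w0 = w1 = 0.0
    for pattern, weight in hypotheses():
        if consistent(pattern, prefix):
            if pattern[len(prefix) % len(pattern)] == 1:
                w1 += weight
            else:
                w0 += weight
    total = w0 + w1
    # If every pattern has been eliminated, this toy machine is clueless.
    # (A real Solomonoff inductor never runs out of hypotheses.)
    return 0.5 if total == 0 else w1 / total

seq = []
for _ in range(16):
    p = prob_next_is_one(seq)
    if p == 0.5:
        seq.append(0)                    # tie -> "I don't know" -> append 0
    else:
        seq.append(0 if p > 0.5 else 1)  # otherwise defy the prediction
print("".join(map(str, seq)))
```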
In order to capture your intuition that a random sequence is “unsurprising”, you want the predictor to output a distribution over {0,1} — or equivalently, a subjective probability p of the next bit being 1. The predictor tries to maximize the expectation of a proper scoring rule. In that case, the maximally unexpected sequence will be random, and the probability of the sequence will approach 2^{-n}.
Allowing the predictor to output {0, 1, ?} is kind of like restricting its outputs to {0%, 50%, 100%}.
On a random sequence, AIXI would guess on average half of the bits correctly. My goal was to create a specific sequence where it couldn’t guess any. Not just a random sequence, but specifically… uhm… an “anti-inductive” one? The exact opposite of lawful, where random is merely halfway opposed. I don’t care about other possible predictors, only about AIXI.
Imagine playing rock-paper-scissors against someone who beats you all the time, whatever you do. That’s worse than random. This sequence would bring the mighty AIXI to tears… but I suspect to a human observer it would merely seem pseudo-random. And is probably not very useful for other goals than making fun of AIXI.
Ok. I still think the sequence is random in the algorithmic information theory sense; i.e., it’s incompressible. But I understand you’re interested in the adversarial aspect of the scenario.
You only need a halting oracle to compute your adversarial sequence (because that’s what it takes to run AIXI). A super-Solomonoff inductor that inducts over all Turing machines with access to halting oracles would be able to learn the sequence, I think. The adversarial sequence for that inductor would require a higher oracle to compute, and so on up the ordinal hierarchy.
Shouldn’t AIXI include itself (for all inputs) recursively? If so I don’t think your sequence is well defined.
No, AIXI isn’t computable and so does not include itself as a hypothesis.
Oh, I see.
Just in case anyone wants pointers to existing mathematical work on “unpredictable” sequences: Algorithmically random sequences (wikipedia)
“What is the specific pattern of bits?” and “Give a vague description that applies to both this pattern and asymptotically 100% of possible patterns of bits” are very different questions. You’re asking the machine the first question and the human the second question, so I’m not surprised the answers are different.
Does “most unexpected” differ from “least predictable” in any way? It seems like a random number generator would match any algorithm around 50% of the time, so making a sequence less predictable than that is impossible, no?
My prediction machine can maximize its expected minimum score by outputting random guesses. Then your bitstring is precisely the complement of my random string, and therefore drawn from the random distribution.
I have briefly thought about this idea in the context of password selection and password crackers: the “most unexpected” string (of some maximum length) is a good password. No deep reasoning here though.
I think adding a little meta-probability will help.
Since there’s some probability of the sequence being “the most surprising”, this would basically mean that several of the most surprising sequences end up with basically the same probability. For example, if it takes n bits of data to define “the most surprising m-bit sequence”, then there must be a 2^-n chance of that happening. Since there are 2^m sequences, and the most surprising sequence must have a probability of at most 2^-m, there must be at least 2^(m-n) most surprising sequences.
Asking “Would an AI experience emotions?” is akin to asking “Would a robot have toenails?”
There is little functional reason for either of them to have those, but they would if someone designed them that way.
Edit: the background for this comment—I’m frustrated by the way AI is represented in (non-rationalist) fiction.
What sort of AIs have emotions? How can I tell whether an AI has emotions?
Given how emotions are essential to decision-making, I’d ask what sort of AI doesn’t have emotions.
I’d say that a chess-playing program does not have emotions, and a norn does.
I think you are plain wrong.
There’s a lot of thought in AI development about mimicking human neural decision-making processes, and it’s quite possible that the first human-level AGI will be similar in structure to human decision-making. Emotions are a core part of how humans make decisions.
I should probably make clear that most of my knowledge of AI comes from LW posts, I do not work with it professionally, and that this discussion is on my part motivated by curiosity and desire to learn.
Agreed.
Your assessment is probably more accurate than mine.
My original line of thinking was that while AIs might use quick-and-imprecise thinking shortcuts triggered by pattern-matching (which is sort of how I see emotions), human emotions are too inconveniently packaged to be of much use in AI design. (While being necessary, they also misfire a lot; coping with emotions is an important skill to learn; in some situations emotions do more harm than good; all in all this doesn’t seem like good mind design.) So I was wondering whether we would even recognize as emotions whatever the AI uses for its thinking.
My assessment now is that even if AI uses different thinking shortcuts than humans do, they might still misfire. For example, I can imagine a pattern activation triggering more patterns, which in turn trigger more and more patterns, resulting in a cascade effect not unlike emotional over-stimulation/breakdown in humans.
So I think it’s possible that we might see AI having what we would describe as emotions (perhaps somewhat uncanny emotions, but emotions all the same).
P. S. For the sake of completeness: my mental model also includes biological organisms needing emotions in order to create motivation (rather than just drawing conclusions). (example: fear creating motivation to escape danger).
An AI should already have a supergoal, so it does not need “motivation”. However, it would need to see how its current context connects to its supergoal, and create/activate subgoals that apply to the current situation; here, once again, thinking shortcuts might be useful, perhaps not too unlike human emotions.
Example: the AI sees a fast-moving object that it predicts will intersect its current location, and a thinking shortcut activates a dodging strategy. This is a subgoal of the goal of surviving, which in turn is a subgoal of the AI’s supergoal (whatever that is).
Having a thinking shortcut (this one we might call “reflex” rather than “emotion”) results in faster thinking. Slow thinking might be inefficient to the point of being fatal: “Hm… that object seems to be moving mighty fast in my direction… if it hits me it might damage/destroy me. Would that be a good thing? No, I guess not—I need to be functional in order to achieve my supergoal. So I should probably dodg—”
We know relatively little about what it takes to create an AGI. Saying that an AGI should have feature X or feature Y to be a functioning AGI is drawing too many conclusions from the data we have.
On the other hand, we know that the architecture on which humans run produces “intelligence”, so that is at least one possible architecture that could be implemented in a computer.
Bootstrapping AGI from Whole Brain Emulations is one of the ideas under discussion, even on LessWrong.
Define “emotion”.
I find it highly unlikely robots would have anything corresponding to any given human emotion, but if you just look at the general area in thingspace that emotions are in, and you’re perfectly okay with the idea of finding a new one, then it would be perfectly reasonable for robots to have emotions. For one thing, general negative and positive emotions would be pretty important for learning.
I have never thought about this, so this is a serious question. Why do you think evolution resulted in beings with emotions and what makes you confident enough that emotions are unnecessary for practical agents that you would end up being frustrated about the depiction of emotional AIs created by emotional beings in SF stories?
From Wikipedia:
Let’s say the AI in your story becomes aware of an imminent and unexpected threat and allocates most resources to dealing with it. This sounds like fear. The rest is semantics. Or how exactly would you tell that the AI is not in fear? I think we’ll quickly come up against the hard problem of consciousness here and whether consciousness is an important feature for agents to possess. And I don’t think one can be confident enough about this issue in order to become frustrated about a science fiction author using emotional terminology to describe the AIs in their story (a world in which AIs have “emotions” is not too absurd).
Slatestarcodex isn’t loading for me. It’s obviously loading for other people—I’m getting email notifications of comments. I use chrome.
Anyone have any idea what the problem might be?
It wasn’t working for me either all day. A few hours ago it mysteriously reappeared. I called tech support. They said they had no explanation.
It should be up again now. I will investigate better hosting solutions.
It’s not loading for me, either; I’m getting my ISP’s “website suggestions” page, which tells me it’s probably a DNS issue (this page theoretically only shows up when a domain name is not registered).
I wound up googling the URL in the “Recent on Rationality Blogs” sidebar, and was able to read Google’s cache of the latest post. Said cache includes no comments. I did not try to comment from the cached page (I didn’t expect it to work).
[edit: This is with Firefox.]
Working on my cheap mobile phone, but not on my new laptop with IE. Which is a shame, because it’s a very good post, but I’m going to be too far behind to contribute to any comment threads.
Edit: A shame for me, I mean, not for the observer concerned with signal-to-noise ratio.
It’s down for me, too. Ping is failing to resolve the address, so I think we’re looking at a DNS issue.
On my Firefox it works fine. If it’s loading for everyone else and not you, some things you might look at: See if you can ping the site, and see if it works under a clean browser profile. I’m not sure how to get one on Chrome but I’m sure there’s a way.
You might also post whatever error message you’re getting, if any. “Not loading” covers a fairly broad range of behavior.
[Edit: It worked when I was at work, but does not work at home. And yes, it looks like a DNS issue.]
Using Chrome as well, not having a problem. Have you e-mailed Yvain at the reverse of gro.htorerihs@ttocs?
It’s not loading for me either, nor for downforeveryoneorjustme.com. I use Firefox.
The OpenWorm Kickstarter ends in a few hours, and they’re almost to their goal! Pitch in if you want to help fund the world’s first uploads.
Update: They made it.
ETA: Problems solved, LW is amazing, love you all, &c.
I am in that annoying state where I vaguely recall the shape of a concept, but can’t find the right search terms to let me work out what it was I originally read. Does anyone recognise either of the things below?
a business-ethics test-like thing where someone left confectionery and a donation box out unsupervised, and then looked at who paid, in some form.
(One of the many situations where googling “morality of doughnuts” doesn’t help much)
a survey-design concept where instead of asking people “do you do x”, you ask them “do you think your co-workers do x” and that is taken as more representative, or used to debias the first answer; or, um, something.
Any help appreciated!
Bayesian Truth Serum
The very thing. Thank you!
I wonder if you aren’t thinking of this bagel vendor.
Bingo. Thank you!
[Link] why do people persist in believing things that just aren’t true
The square brackets are greedy. What you want to do is this:
which looks like:
[Link]: Why do people persist in believing things that just aren’t true?
fixed. Thanks.
This bit of the article jumped out at me:
As unfortunate as this may be, even perfect Bayesians would reason similarly; Bayes’s rule essentially quantifies the trade-off between discarding new information and discarding your prior when the two conflict. (Which is one way in which Bayesianism is a theory of consistency rather than simple correctness.)
Probably too late, but: I have the impression there’s a substantial number of anime fans here. Are there any lesswrongers at or near MomoCon (taking place in Atlanta downtown this weekend) and interested in meeting up?
What’s the best way to learn programming from a fundamentals-first perspective? I’ve taken / am taking a few introductory programming courses, but I keep feeling like I’ve got all sorts of gaps in my understanding of what’s going on. The professors keep throwing out new ideas and functions and tools and terms without thoroughly explaining how and why it works like that. If someone has a question the approach is often, “so google it or look in the help file”. But my preferred learning style is to go back to the basics and carefully work my way up so that I thoroughly understand what’s going on at each step along the way.
This might be counter-intuitive and impractical for self-teaching, but for me it was an assembly language course that made it ‘click’ for how things work behind the scenes. It doesn’t have to be much and you’ll probably never use it again, but the concepts will help your broader understanding.
If you can be more specific about which parts baffle you, I might be able to recommend something more useful.
Nothing in particular baffles me. I can get through the material pretty fine. It’s just that I prefer starting from a solid and thorough grasp of all the fundamentals and working on up from there, rather than jumping head-first into the middle of a subject and then working backwards to fill in any gaps as needed. I also prefer understanding why things work rather than just knowing that they do.
Which fundamentals do you have in mind? There are multiple levels of “fundamentals” and they fork, too.
For example, the “physical execution” fork will lead you to delving into assembly language and basic operations that processors perform. But the “computer science” fork will lead you into a very different direction, maybe to LISP’s lambdas and ultimately to things like the Turing machine.
Whatever fundamentals are necessary to understand the things that I’m likely to come across while programming (I’m hoping to go into data science, if that makes a difference). I don’t know enough to know which particular fundamentals are needed for this, so I guess that’s actually part of the question.
Well, if you’ll be going into data science, it’s unlikely that you will care greatly about the particulars of the underlying hardware. This means the computer-science branch is more useful to you than the physical-execution one.
I am still not sure what kind of fundamentals you want. The issue is that the lowest abstraction level is trivially simple: you have memory which can store and retrieve values (numbers, basically), and you have a processing unit which understands sequences of instructions for doing logical and mathematical operations on those values. That’s it.
The interesting parts, and the ones from which understanding comes (IMHO) are somewhat higher in the abstraction hierarchy. They are often referred to as programming language paradigms.
The major paradigms are imperative (Fortran, C, Perl, etc.), functional (LISP), logical (Prolog), and object-oriented (Smalltalk, Ruby).
They are noticeably different in that writing non-trivial code in different paradigms requires you to… rearrange your mind in particular ways. The experience is often described as a *click*, an “oh, now it all makes sense” moment.
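To make the contrast concrete, here is the same trivial task (summing squares) in an imperative and a functional style, in Python; the logic and object-oriented paradigms don’t show well in so few lines:

```python
nums = [1, 2, 3, 4]

# Imperative style: explicit state, mutated step by step.
total = 0
for n in nums:
    total += n * n
print(total)  # 30

# Functional style: no mutation, just composed expressions.
print(sum(map(lambda n: n * n, nums)))  # 30
```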
I guess a good starting point might be: Where do I go to learn about each of the different paradigms? Again, I’d like to know the theory as well as the practice.
Google is your friend. You can start e.g. here or here.
I understand what you mean here, but in programming it sometimes makes sense to do things this way. For example, in my introduction to programming course, I used dictionaries/hashes to write some programs. Key-value pairs are important for writing certain types of simple programs, but I didn’t really understand how they worked. Almost a year later, I took an algorithms course, learned about hash functions and hash maps, and finally understood them. It wouldn’t have made sense to refrain from using this tool until I’d learned how to implement it, and it was really rewarding to finally understand it.
I always like to learn things from the ground up too, but this way just works sometimes in programming.
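For the curious, here is a toy separate-chaining hash map in Python, roughly the textbook picture from an algorithms course (CPython’s actual dict uses open addressing instead, so this is illustrative only):

```python
# Toy hash map with separate chaining: hash the key to pick a bucket,
# then scan that bucket's short list of (key, value) pairs.
class ToyHashMap:
    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # existing key: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key: extend the chain

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

m = ToyHashMap()
m.set("apples", 3)
print(m.get("apples"))  # 3
```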
Could you give a couple examples of specific things that you’d like to understand?
Without that, a classic that might match what you’re interested in is Structure and Interpretation of Computer Programs. It starts as an introduction to general programming concepts and ends as an introduction to writing interpreters.
I’ve been having a bit of a hard time coming up with specifics, because it’s more a general sense that I’m lacking a lot of the basics. Like the professor will say something and it’ll obliquely reference a concept that he seems to expect I’m familiar with, but I have no idea what he’s referring to. So then I look it up on Wikipedia and the article mentions 10 other basic-sounding concepts that I’ve never heard of either. Or for example when the programming assignment uses a function that I don’t know how to use yet. So I do the obvious thing of googling for it or looking it up in the documentation. But the documentation is referencing numerous concepts that I have only a vague idea of what they mean, so that I often only get a hazy notion of what the function does.
After I made my original post I looked around for a while on sites like Quora. I also took a look at this reddit list. The general sense I got was that to learn programming properly you should go for a thorough computer science curriculum. Do you agree?
The suggestion was to look up university CS degree curricula and then look around for equivalent MOOCs / books / etc. to learn it on my own. So I looked up the curricula. But most of the universities I looked at said to start out with an introductory programming language course, which is what I was doing before anyway. I’ve taken intro courses in Python and R, and I ran into the problems I mentioned above. The MITx Python course that I took was better on this score, but still not as good as I would have hoped. There are loads of resources out there for learning either of those languages, but I don’t know how to find which ones fit my learning style. Maybe I should just try out each until I find one that works for me?
The book you mentioned kept coming up as well. That book was created for MIT’s Intro to CS course, but MIT itself has since replaced the original course with the Python course that I took (I took the course on edX, so probably it’s a little dumbed-down, but my sense was that it’s pretty similar to the regular course at MIT). On the other hand, looking at the book’s table of contents it looks like the book covers several topics not covered in the class.
There were also several alternative books mentioned:
How to Design Programs
Concepts, Techniques, and Models of Computer Programming
Essentials of Programming Languages
Modern Programming Languages: A Practical Introduction
Programming Language Pragmatics
Programming Languages: Application and Interpretation
Any thoughts on which is the best choice to start off with?
If you want a fundamentals-first perspective, I definitely suggest reading SICP. I think the Python course may have gone in a slightly different direction (I never looked at it) but I can’t think of how you could get more fundamentals-first than the book.
Afterward, I suggest out of your list Concepts, Techniques, and Models of Computer Programming. That answers your question of “where do I go to learn about each of the different paradigms.”
This is more background than you will strictly need to be a useful data scientist, but if you find it fun and satisfying to learn, then it will only be helpful.
Is there a way to get email notifications on receiving new messages or comments? I’ve looked under preferences, and I can’t find that option.
I buy a lot of berries, and I’ve heard conflicting opinions on the health risks of organic vs. regular berries (and produce in general). My brief Google research seems to indicate that there’s little additional risk, if any, from non-organic produce, but if anyone knows more about the subject, I’d appreciate some evidence.
Without citation: minimal “organic” labeling standards often aren’t a very high or impressive barrier to clear.
Another suggestion (also without citation): It may also be worth evaluating risks associated with certain pesticides common to a crop or region and related externalities (the effect on local food chains). Also, when exposure involves more than one chemical, the overall risk assessment becomes more involved.
If you could magically stop all human-on-human violence, or stop senescence (aging) for all humans, which would it be?
The latter. The former is already decreasing at an incredible speed but I see no trend for the latter.
I’m much more likely to die of aging than of violence; so I’d rather stop aging.
This seems to generalize well to the rest of humanity. I am surprised that most others who replied disagree. ISTM that most existential risks are not due to deliberate violence, but rather unintended consequences.
The former is a major existential risk, while the latter is probably going to be solved soon(er), so the former.
Good point! Then again, a lot of the existential risks we talk about have to do with accidental extinction, not caused by aggression per se.
All governments rely on an implicit or explicit threat of force, of “human-on-human violence”.
If no one can apply violence to me why should I pay any taxes, or, more crudely, pay for that apple I just grabbed off a street stall?
Ending aging would almost certainly greatly diminish human-on-human violence, since increasing expected lifespans would lower time preference. Right?
I don’t think it works that way. Currently most human-on-human violence is committed by young people (specifically young men), who by this logic should have the lowest time preference, since they can expect to have the most years left to live.
So, depending on how much of this decrease in violence with age is biological and how much is memetic, stopping aging (assuming it would lead to a large drop in the birth rate) may increase or decrease the total violence in the long run (as the chronological age of the population increases but its biological age decreases).
It would also depend on how anti-aging works. Suppose that every stage of life is made longer. If young male violence is mostly biological, then some young men would be violent for a few more years.
Then again, if you had more to lose, maybe that would increase your incentive to protect yourself by getting the other guy before he gets you.
I would assume there’s a sorting effect—people would tend to figure out eventually that it’s better to live among low-violence people.
One big question is… ok, we want anti-aging, but what age do you aim for? 17 has some advantages, but how about 25? 35? 50?
I’ve read that cell death overtakes cell division at around 35, so perhaps a body in some longer-term equilibrium condition would look 35?
(I suspect that putting a single age on it is too crude, though. The optimal age for a set of lungs may not be the same as that for a liver.)
Optimal age is also relative to what you want to do—different mental abilities peak at wildly different ages. If you stabilize your body at age 25 and then live to be 67 (edited—was 53), will your verbal ability increase as much as if you let yourself age to 67?
Athletic abilities don’t all peak at the same time, either. Strength doesn’t peak at the same time as strength-to-weight ratio. Would you rather be a weightlifter or a gymnast? I believe coordination peaks late—how do you feel about dressage?
Staying physically 25 doesn't mean you have to stop learning or physically developing. Surely the development of abilities in adult life is the result of exercising body and mind over the years, not part and parcel of senescence?
I don’t think we know. I have no idea why verbal ability would peak so late, so I don’t know whether brain changes associated with aging are part of the process.
My problem with these questions is that they get difficult quickly. If you stopped aging today, I imagine there would very quickly be overpopulation issues, many old patients in hospitals who wouldn't die, etc., yet I find it difficult to think of major issues with ending violence (boxing champions would be out of a job). And even now, I'm sure someone's thought of a counterexample, and then the discussion gets harder. So even though I think aging is more important than violence as a focus, the question poses a hypothetical that is never going to occur (being able to just make that decision, I mean) and takes us away from reality into the nitty-gritty of a literal non-problem.
Why did you ask?
Edit: I didn’t mean to make a case for either side, I was trying to suggest that the question itself seems unhelpful. We’ll end up with a complicated technical discussion which is unlikely to have any practical value.
To give a sense of proportion: suppose that tomorrow we developed literal immortality—not just an end to aging, but an end to death from any cause whatsoever. Further suppose that we could make it instantly available to everyone, and nobody would be so old as to be beyond help. So the death rate would drop to zero in a day.
Even if this completely unrealistic scenario were to take place, the overall US population growth would still only be about half of what it was during the height of the 1950s baby boom! Even in such a completely, utterly unrealistic scenario, it would still take around 53 years for the US population to double—assuming no compensating drop in birth rates in that whole time.
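For the curious, here's the back-of-the-envelope arithmetic behind that doubling time, assuming a US crude birth rate of roughly 13 births per 1,000 people per year (about the current figure; treat the exact number as an assumption) and a death rate of zero:

$$ r \approx \frac{13}{1000} = 1.3\%\ \text{per year}, \qquad t_{\text{double}} \approx \frac{\ln 2}{r} \approx \frac{0.693}{0.013} \approx 53\ \text{years}. $$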
Sure does!
I don’t count that as violence—it is consensual (and there’s a modicum of not-always-successful effort to prevent permanent harm).
This has been discussed at great depth and refuted, e.g. by Max More and de Grey.
No particular reason: every now and then a thought comes to mind.
If you take into account the risk of permanent brain damage, boxing (as well as rugby/football) is worth sacrificing.
Never did any of those myself, but I think that being consensual, they don’t count as violence.
It’s complicated. Power dynamics at school and at home, as well as joblessness in some countries, may make a sports career less than voluntary.
The former. Stopping ageing without giving us time to prepare for it would cause all sorts of problems in terms of increasing population. Whereas stopping violence would accelerate progress no end (if only for the resources it freed up).
Stopping aging (preferably, reversing aging) would also free up a lot of resources.
On that note, a 2006 article in The Scientist argues that simply slowing aging by seven years would produce a large enough economic benefit to justify the US investing three billion dollars annually in this research.
What do you think about using visualizations for giving “rational” advice in a compact form?
Case in point: I just stumbled over relationtips by informationisbeautiful and thought: That is nice.
This also reminds me of the efficiency of visual presentation explained in The Visual Display of Quantitative Information by Tufte.
And I wonder how I might quote these in the Quotes Thread...
Modern “infographics” like those by informationisbeautiful are extremely often terrible in exactly the ways that Tufte warns against. They are often beautiful, but rarely excel at their original purpose of displaying data.
I agree that many infographics (the contraction is the first hint) are often more beautiful than informative.
But yours was a general remark. Did you mean it to imply that the idea isn’t good or just my particular example?
I think that good graphics illustrating some point about rationality would be a really cool thing to have in the quotes thread.
Could someone write a Wiki article on Updateless Decision Theory? I’m looking for an article that is not too advanced and not too basic, and I think that a typical wiki article would be just right.
I find using a chess timer in conjunction with Pomodoros helpful in restricting break-time overflow. Tracking work vs. break time via the chess timer motivates me to keep the ratio in check. It is also satisfying to get your "score" up; a high work-to-break ratio at the end of a session feels good.
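For anyone without a physical chess clock, here's a minimal sketch of the same idea in Haskell (my own toy, not a polished tool): press Enter to flip between work and break, and it prints the running ratio.

```haskell
import Data.Time.Clock (NominalDiffTime, diffUTCTime, getCurrentTime)

-- Alternate work and break phases forever (Ctrl-C to quit),
-- printing the cumulative work-to-break ratio after each cycle.
main :: IO ()
main = loop 0 0
  where
    loop work brk = do
      w <- phase "Working... press Enter to start a break."
      b <- phase "On break... press Enter to get back to work."
      let (work', brk') = (work + w, brk + b)
      putStrLn $ "Work/break ratio so far: "
              ++ show (realToFrac work' / realToFrac brk' :: Double)
      loop work' brk'

    -- Time one phase: from showing the prompt until the user hits Enter.
    phase :: String -> IO NominalDiffTime
    phase msg = do
      putStrLn msg
      start <- getCurrentTime
      _ <- getLine
      end <- getCurrentTime
      return (end `diffUTCTime` start)
```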
(Inspired by sci-fi story)
A new intelligence enhancing brain surgery has just been developed. In accordance with Algernon’s Law, it has a severe drawback: your brain loses its ability to process sensory input from your eyes, causing you to go blind.
How big of an intelligence increase would it take before you’d be willing to give up your eyesight?
Enough to make learning Braille, meaningfully improving existing screenreader software (if I don't care for it), and figuring out how to echolocate into relatively short-term projects, so that I could move on to other things instead of spending forever trying to reconstruct the anatomy of my life.
I almost said “enough to be able to route around this drawback somehow”, but no, I don’t think it’s quite that dire.
I notice that I have a hard time getting myself to make decisions when there are tradeoffs to be made. I think this is because it's really emotionally painful for me to face actually choosing to accept one or another of the flaws. When I face such a decision, often, the "next thing I know" I'm procrastinating or working on other things; specifically, I'm avoiding thinking about making the decision. Sometimes I do this when, objectively, I'd probably be better off rolling a die and getting on with one of the choices, but I can't get myself to do that either. If it's relevant, I'm bad at planning generally. Any suggestions?
Spend some time deciding whether decisiveness is a virtue. Dwell on it until you've convinced yourself that decisiveness is good, and have come to terms with the fact that you are not currently decisive. Around here it may be tempting to label decisiveness as rash, or not worth the work of changing, and to rationalize your behavior; if so, return to step one and reaffirm that you think it is good to be decisive. Then step outside your comfort zone and practice being decisive: practice at the restaurant, at work, doing chores. Set up reminders to practice, e.g. set your desktop or phone background to "Be Decisive" in plain text (or whatever suits your esthetic tastes). Pick a role model who takes decisive action. After following these steps, you will have practiced making decisions and following through on them, and you will have decided that making a choice without dwelling on it is a virtue. Now you can update your image of yourself to that of a decisive person. From there it should be self-sustaining.
Or not, as the case may be! And then there’s the possibility that more data is needed.
Whoa. Fascinating! Thanks! I really like the idea of this approach. I’m, ironically, not sure I’m decisive enough to decide that decisiveness is a virtue, but this is worth thinking about. Where should I go to read more about the general idea that if I can decide that something is a virtue and practice acting in accord with that virtue that I can change myself?
Thinking about it just for a minute, I realize that I need a heuristic for when it's smart to be decisive and when it's smart to be more circumspect. I don't want to become a rash person. If I can convince myself that the heuristic is reliable enough, then hopefully I can convince myself to put it into practice like you say. I don't know whether this means I'm falling into the rationalization trap that you mentioned, though. I don't think so; it would be a mistake to be decisive for decisiveness' sake.
I can spend some time thinking more about role models in this regard, and maybe ask them how they decide when to decide versus when to contemplate. In particular, I think my role models would not spend time on a decision if they knew that making either decision now was preferable to not making a decision until later.
Heuristic 1a: If making either decision now is preferable to making the decision later, make the decision promptly (flip coins if necessary).
In the particular case that prompted my original post, my current heuristics said it was a situation worth thinking about—the options had significant consequences, both good and bad. On the other hand, agonizing over the decision wouldn't get me anywhere, and I knew what the consequences would be in a general sense. I just didn't want to accept that I was responsible for the problems that I could expect to follow either decision; I wanted something more perfect. That's another trap my role models would not fall prey to. Somehow they have the stomach to accept this and get on with things when there's no alternative....
Goal: I will be a person with the self-respect to stomach responsibility for the bad consequences of good decisions.
Heuristic 1b: When you pretty-much know what the consequences will be of all the options and they’re all unavoidably problematic to around the same degree (multiply the importance of the decision by the error in the degree to define “around”), force yourself to pick one right away so you can put the decision-making behind you.
Am I on the right track? I'm not totally sure how important it is to put the decision-making behind you.
Sorry for the late reply, I couldn’t decide how to communicate my point.
You strongly self-identify as not decisive and celebrate cautiousness as a virtue; if you desire to change, that must change first. In all your examples you already know what has to be done, you just want to avoid committing to action, and now you are contemplating finding methods to decide whether you should be decisive on a decision-by-decision basis. That is a stalling tactic; stop it.
The goal to stomach the consequences is bang on; that may be foundation work that's required first, or something that develops as you take accountability and make decisions.
If you’re not familiar with the ideas read “The Paradox of Choice” by Barry Schwartz or watch a talk about it.
Other ideas:
Give yourself a very short deadline for most decisions (most decisions are trivial); e.g., "I will make this decision in the next two minutes and then I will stick with it." For long-term life decisions, maybe not so much.
Flip a coin. This is a good way to expose your gut feelings. A pros-and-cons type of weighing the options lets you weigh lots of factors; flipping a coin produces one of a few telling reactions (in my experience): "Shoot, I really wish I had the other option" (good information), "I don't feel too strongly about the outcome" (good information), or "I'm content with this flip" (good information).
Tetlock thinks improved political forecasting is good. I haven't read his whole book, but maybe someone can help me cheat. Why is improved forecasting not zero-sum? Suppose the USA and Russia can both forecast better but have different interests. So what?
[Edit] My guess might be that in areas of common interest, like economics, improved forecasting is good. But in foreign policy...?
A greatly simplified example: two countries are having a dispute and the tension keeps rising. They both believe that they can win against the other in a war, meaning neither side is willing to back down in the face of military threats. Improved forecasting would indicate who would be the likely winner in such a conflict, and thus the weaker side will preemptively back down.
For the simple reason that politics is not zero-sum, foreign policy included.
Cooperation is not zero-sum. Why does better forecasting lead to more cooperation?
I would guess that it does—but if somebody hasn't seriously addressed this, then I don't think I'm doing foreign policy questions on GJP Season 4.
Zero-sum means that the participants' actions do not change the total, either up or down.
A nuclear exchange between US and Russia would not be zero-sum, to give an example. Better forecasting might reduce its chance by lessening the opportunities for misunderstanding, e.g. when one side mistakenly thinks the other side is bluffing.
As to more cooperation, better forecasting implies better understanding of the other side which implies less uncertainty about consequences which implies more trust which implies more cooperation.
How about the governments of the US and Russia correctly forecast that more hostility means more profits for their cronies, and increase military spending?
That would still not be zero-sum. Which direction you think it is depends on your views.
Yes, and..?
If you want something that comes with ironclad guarantees that it leads to only goodness and light, go talk to Jesus. That’s his domain.
Improved forecasting might mean that both sides do fewer stupid (negative sum) things.
International politics is zero-sum once you’ve already reached the Pareto frontier and can only move along it, but if forecasting is sufficiently bad you might not even be close to the Pareto frontier.
Right. A lot of politics is not zero-sum. Reduced uncertainty and better information may enable compromises that before had seemed too risky. Forecasting could help identify which compromises would work and which wouldn’t. Etc.
Thanks army and bramflakes for illustrating. My guess is to agree—but I still have doubts. Maybe they have nothing to do with "zero-sum" after all. I think I'm concerned that forecasting could be used by governments against citizens. Before participating again I may need to read something in more detail about why this is unlikely, and also about why I shouldn't participate in SciCast instead!
I don’t think Tetlock talks about that much.
Imagine a better forecast about whether invading Iraq reduces terrorism, or about whether Saddam would survive the invasion. Wouldn’t both sides make wiser decisions?
So that's a good thought. I think you're saying that nations aren't coolly calculating rational actors, but groups whose foreign policy is often based on false claims.
I guess it really depends on where forecasting is deployed. It will increase the power of whoever has access. If accessible to George Bush, then George is more powerful. If accessible to the public, the public is. So my question depends (at least partly) on the kind of forecasting and who controls the resulting info
Also, this paper seems relevant.
I’ve posted this before but I want to make it more clear that I want feedback.
I want to build a better formalization of naturalized induction than Solomonoff’s, one designed to be usable by space-, time-, and rate-limited agents, and interactive computation was a necessary first step. AIXI is by no means an ideal inductive agent.
Had a look at your link, but couldn’t make sense of it. Consider writing a proper summary upfront.
This seems an ambitious task. Can you start with something simpler?
Sorry, my writing can get kind of dense.
It doesn’t quite strike me as ambitious; I see a lot of room for improvement. As for starting with something simpler, that’s what this essay was.
If you want people to read what you write, learn to write in a readable way.
Looked at your write-up again… Still no summary of what it is about. Something along the lines of (total BS follows, sorry, I have no clue what you are writing about, since the essay is unreadable as is): "This essay outlines the issues with AIXI built on Solomonoff induction and suggests a number of improvements, such as extending algorithmic calculus with interactive calculus. This extension removes hidden infinities inherent in the existing AIXI models and allows […]."
I’m in the process of writing summaries. I replied as soon as I read your response.
You are pretty much the first person to give me feedback on this, so I don't have an accurate sense of how opaque it is.
Separate hypotheses are inseparable.
Hypotheses are required to be a complete world model (account for every part of the input).
Conflicting hypotheses cannot be held simultaneously. This stems mainly from there being no requirement to run in a finite amount of space and time.
How’s that? Every few lines, I give a summary of each subsection. I even double-spaced it, in case that was bothering you.
Your essay was interesting. What did you think of a similar post I recently wrote?
Feedback (entirely on the writing): The first goal when editing this should be to eliminate words from sentences. Use short and familiar words whenever possible. Change around a paragraph’s structure to get it shorter. Since this is for English class, cut out every bit of jargon you can. If there’s a length requirement, you can always fill it with story.
The best lesson of my dreadful college writing class was that nonfiction can have a story too—and the primary way you engage with a nontechnical audience is with this story. Solomonoff induction practically gets a character arc—the hope for a universal solution, the impotence at having to check every possible hypothesis, then being built back up by hard work and ancient wisdom to operate in the real world.
When you shift gears, e.g. to talk about science, you can make it easier on the reader by cutting technical explanations for historical or personal anecdotes. This only works once or twice per essay, though.
You can make your paragraphs more exciting. Rather than starting with "An issue similar in cause to separability is the idea of the frontier," and then having the reader go in with the mindset that they have to hear about a definition (English professors hate reading about definitions), try to give the reader a very concise big-picture view of the idea and immediately move on to the exciting applications, which is where they'll learn the concept.
Thanks for the in-depth critique! I haven’t read your post yet, but it piqued my interest.
Also, moving on to the “exciting applications” isn’t very effective when there aren’t any. :I
Bah humbug.
I am thinking of doing an article digesting a handful of research papers by some researcher or on some theme that would be of interest to less-wrongers. Any suggestions for what papers/theme, and any suggestions on how to write this mini-survey?
Please write a clear layperson’s intro to UDT. You can also mention TDT etc. A good way to do this is a Wiki article.
A citation of related literature would be good too. Alex Altair’s paper was good, but I’d like to read about UDT in more depth yet still in an accessible form.
I am not interested in UDT/TDT. And people already write tons about it here. Thank you for the suggestion though.
Suppose you have the option that with every purchase you make, you can divert a percentage (including 0 and 100) of the money to a GiveWell-endorsed charity that you're not personally affiliated with. Meaning, you still pay the same price, but the seller gets less or nothing, and the rest goes to charity. The seller has no right to complain. To what extent would you use this? Would it be different for different products, or sellers? Do you have any specific examples of where you would or wouldn't use it?
Also, assume you can start a company and that the same thing applies to all purchases the company makes. Would you do it? Any specific business?
Well-meaning, rationalized theft is still an assault on the seller.
I see no reason to send my money anywhere other than to the most needy person. I’d divert 100%.
Why would there be any sellers under this system?
It is just a thought experiment, not something that could realistically exist. Suppose the president/king/whoever gave you (and only you) this power, and while the sellers are furious, they can’t do anything about it. They are not participating by choice.
This seems consequentially equivalent to “legal issues aside, is it ethical to steal from businesses in order to give to [EA-approved] charity, and if so, which ones?”.
I suspect answering would generate more heat than light.
Yes, pretty much. Has this been discussed before? I did a search and found nothing similar here.
EDIT: I found this somewhat related post: Really Extreme Altruism.
If it is a well-known controversial issue, how about a poll, to satisfy my curiosity without sparking any flames…
So: legal issues aside, is it ethical to steal from businesses in order to give to [EA-approved] charity?
[pollid:700]
For fun, let's shift the emphasis. So, every time you make a contribution to an EA-approved charity, you can go and pick yourself a free gift of equal value from any seller, and the seller can't do anything about it, including complain. Is that OK? :-)
Great example. It is an isomorphic situation that paints it in a completely different light.
If you are asking me personally, I can see myself doing just that in some cases, though definitely not as a standard way of obtaining goods. The reason for the original question was to see what the rest of you think of the matter.
I don’t recall any past controversy offhand, but given that business in general and many specific categories of business in particular are highly politicized, I suspect the answers you’d get would be more revealing of your respondents’ politics (read: boring) than of the underlying ethics. For the same reason I’d expect it to be more contentious than average once we start getting into details.
There are also PR issues with thought experiments that could be construed as advocating crime, although that’s more an issue with my reframing than with your original question. There’s no actual policy, though; there is policy against advocating violence, but this doesn’t qualify.
It does happen to an extent.
You can buy a movie, or you can pirate it and donate the price of the movie.
That was actually the original topic of a conversation that inspired this question.
Because probably not everyone would divert 100% to a GiveWell-endorsed charity.
I have a very confused question about programming. Is there an interpretation of arithmetic operations on types that goes beyond sum=or=either and product=and=pair? For example this paper proposes an interpretation of subtraction and division on types, but it seems a little strained to me.
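To make the sum/product reading concrete, here's a quick Haskell sketch (my own illustration, not from the linked paper), just tallying inhabitants:

```haskell
-- Sum: |Either a b| = |a| + |b|.  E.g. Either Bool () has 2 + 1 = 3 values:
sumValues :: [Either Bool ()]
sumValues = [Left True, Left False, Right ()]

-- Product: |(a, b)| = |a| * |b|.  E.g. (Bool, Bool) has 2 * 2 = 4 values:
prodValues :: [(Bool, Bool)]
prodValues = [(x, y) | x <- [True, False], y <- [True, False]]
```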
A sub-question: which arithmetical operation corresponds to function types? On one hand, the number of functions from A to B is the cardinality of B raised to the power of the cardinality of A. That plays nicely with the interpretation of sum and product types in terms of cardinality, but doesn’t seem to play nicely with the logical interpretation, where “if A, and A implies B, then B”. For the logical interpretation to work, it seems that implication should correspond to division (B/A), not exponentiation (B^A).
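For what it's worth, in the usual Curry–Howard reading the two interpretations don't actually conflict: implication is the exponential, and "if A, and A implies B, then B" is just function application, with no division needed. A tiny Haskell sketch (again my own illustration):

```haskell
-- Exponential: |a -> b| = |b| ^ |a|.
-- E.g. all 2^2 = 4 functions from Bool to Bool:
allBoolFns :: [Bool -> Bool]
allBoolFns = [id, not, const True, const False]

-- "If A, and A implies B, then B" is just function application:
modusPonens :: a -> (a -> b) -> b
modusPonens x f = f x
```

Cardinality-wise, A × B^A is generally bigger than B; logic only needs the evaluation map A × B^A → B to exist, not to be a bijection, which is why exponentiation works where division might seem required.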
Another sub-question: which arithmetical operation corresponds to negation? On one hand, the assertion that type A is uninhabited should probably correspond to 1-A, because “A is either inhabited or uninhabited” corresponds to A+(1-A)=1, which is trivially true. On the other hand, the assertion “A can’t be both inhabited and uninhabited at the same time” corresponds to A*(1-A), which doesn’t look like 0 to me. Something strange is going on here.
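One datum that may untangle the negation puzzle: in the propositions-as-types tradition, "not A" is usually modeled as a function into the empty type rather than as 1 − A. A sketch using Haskell's Data.Void:

```haskell
import Data.Void (Void, absurd)

-- Not a = "a implies the empty type".  On cardinalities this is 0^|a|,
-- which is 1 when a is empty and 0 otherwise; i.e. it is inhabited
-- exactly when a is uninhabited, with no subtraction required.
type Not a = a -> Void

-- "A and not-A" is then genuinely empty: given both, we can conclude anything.
contradiction :: (a, Not a) -> b
contradiction (x, notX) = absurd (notX x)
```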