I know how egoistic this comment risks sounding, but: many different people (at least half a dozen) have independently expressed to me that they find the links that I post on social media to be consistently interesting and valuable, to the point of one person claiming that about 40% of the value that she got out of Facebook was from reading my posts.
Thus, if you’re not already doing so, you may be interested in following me on social media, either on Facebook or Google Plus. I’m a little picky about accepting friend requests on FB, but anyone is free to follow me there. If you don’t want to be on any of those services, it’s apparently also possible to get an RSS feed of the G+ posts. (I also have a Twitter account, but I use that one a lot less.)
On the other hand, if you’re procrastination-prone, you may want to avoid following me—I’ve also had two people mention that they’ve at least considered unfollowing me because they waste too much time reading my links.
Most Interesting quote I found in first 5 minutes of browsing your G+ feed:
Unfortunately, the bubble was to burst once again, following a series of attacks on connectionism’s representational capabilities and lack of grounding. Connectionist models were criticized for being incapable of capturing the compositionality and productivity characteristic of language processing and other cognitive representations (Fodor & Pylyshyn 1988); for being too opaque (e.g., in the distribution and dynamics of their weights) to offer insight into their own operation, much less that of the brain (Smolensky 1988); and for using learning rules that are biologically implausible and amount to little more than a generalized regression (Crick 1989). The theoretical position underlying connectionism was thus reduced to the vague claim that that the brain can learn through feedback to predict its environment, without a psychological explanation being offered of how it does so. As before, once the excitement over computational power was tempered, the shortage of theoretical substance was exposed.
“One reason that research in connectionism suffered such setbacks is that, although there were undeniably important theoretical contributions made during this time, overall there was insufficient critical evaluation of the nature and validity of the psychological claims underlying the approach. During the initial explosions of connectionist research, not enough effort was spent asking what it would mean for the brain to be fundamentally governed by distributed representations and tuning of association strengths, or which possible specific assumptions within this framework were most consistent with the data. Consequently, when the limitations of the metaphor were brought to light, the field was not prepared with an adequate answer. On the other hand, pointing out the shortcomings of the approach (e.g., Marcus 1998; Pinker & Prince 1988) was productive in the long run, because it focused research on the hard problems. Over the last two decades, attempts to answer these criticisms have led to numerous innovative approaches to computational problems such as object binding (Hummel & Biederman 1992), structured representation (Pollack 1990), recurrent dynamics (Elman 1990), and executive control (e.g., Miller & Cohen 2001; Rougier et al. 2005). At the same time, integration with knowledge of anatomy and physiology has led to much more biologically realistic networks capable of predicting neurological, pharmacological, and lesion data (e.g., Boucher et al. 2007; Frank et al. 2004). As a result, connectionist modeling of cognition has a much firmer grounding than before.”
-- Matt Jones & Bradley C. Love, Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition.
I would love more detailed/referenced high-level analyses of different approaches to AI (e.g. connectionism v. computationalism v. WBE).
I thought this was an excellent quote from his newsfeed and that it was good evidence that his feed was worth reading. Then, I indirectly asked if he had any similar links/resources, since I thought the quote was so good.
But, really? Is this the most interesting quote you could find in Kaj’s thread? Your quote is long, dry, super-technical, and maybe interesting only to experts. You might argue that the last part carries the general insight that criticism helps the development of new ideas, but it’s still too dense.
To illustrate my point, let me pick a semi-random (I scrolled a bunch randomly and picked without reading) quote from his thread:
Fear of success. At its root this is a fear of change. If I succeed in the thing I am setting out to do, what then? What if I actually become the person I wish to become, who am I? My solution to this was to set up my school and my training in such a way that success was impossible. There is no end goal or end result. There is only process. My mission in life is deliberately unattainable: to restore our European martial heritage to its rightful place at the heart of European culture. Of course that cannot be achieved alone, and there is no reasonable expectation of it being accomplished in my lifetime. There is no question that European martial arts have come a long way in the last decade or so, and my work has been a part of that, but another excellent aspect to this goal is even if we could say it was accomplished in my lifetime, nobody would ever suggest that I did it. So fear of success is not a problem, as success is impossible.
I don’t question that what you quoted would’ve been very interesting for you, but I suspect you’re an expert (or an experienced amateur at least), and I think you underestimated inferential distances.
Thanks! Mind Projection Fallacy on my part. I’m currently trying to pick a topic for my Master’s thesis, and high level overviews of AI-related are very interesting to me.
Likewise, I don’t think that quote is particularly interesting—mainly because I don’t see how I could use it to change my behavior/strategy to achieve my goals.
In summary, Kaj’s feed has interesting information on a wide variety of topics, a subset of which will probably be interesting to many of the people reading this.
Here is the abstract of a paper in Neuroscience Letters. The paper is titled “Early sexual experience alters voluntary alcohol intake in adulthood”.
And the abstract goes
Steroid hormones signaling before and after birth sexually differentiates neuronal circuitry. Additionally, steroid hormones released during adolescence can also have long lasting effects on adult behavior and neuronal circuitry. As adolescence is a critical period for the organization of the nervous system by steroid hormones it may also be a sensitive period for the effects of social experience on adult phenotype. Our previous study indicated that early adolescent sexual activity altered mood and prefrontal cortical morphology but to a much smaller extent if the sexual experience happened in late adolescence. In humans, both substance abuse disorders and mood disorders greatly increase during adolescence. An association among both age of first sexual activity and age of puberty with both mood and substance disorders has been reported with alcohol being the most commonly abused drug in this population. The goal of this experiment was do determine whether sexual experience early in adolescent development would have enduring effects on adult affective and drug-seeking behavior.
. . . . . . and the abstract continues
Compared to sexually inexperienced HAMSTERS and those that experienced sex for the first time in adulthood, animals that mated at 40 days of age and were tested either 40 or 80 days later significantly increased depressive- but not anxiety-like behaviors and increased self-administration of saccharine-sweetened ethanol. The results of this study suggest that an isolated, though highly relevant, social experience during adolescence can significantly alter depressive-like behavior and alcohol self-administration in adulthood.
I propose that from now on the titles of all papers about physiology and psychology should be read with ”...in hamsters” appended to them.
How do you organize your computer files? How do you maintain organization of your computer files? Anyone have any tips or best practices for computer file organization?
I’ve recently started formalizing my computer file organization. For years my computer file organization would have been best described as ad-hoc and short-sighted. Even now, after trying to clean up the mess, when I look at some directories from 5 or more years ago I have a very hard time telling what separates two different versions of the same directory. I rarely left README like files explaining what’s what, mostly because I didn’t think about it.
Here are a few things I’ve learned:
Decide on a reasonable directory structure and iterate towards a better one. I can’t anticipate how my needs would be better served by a different structure in the future, so I don’t try that hard to. I can create new directories and move things around as needed. My current home directory is roughly structured into the following directories: backups, classes, logs, misc (financial info, etc.), music, notes, projects (old projects the preceded my use of version control), reference, svn, temp (files awaiting organization, mostly because I couldn’t immediately think of an appropriate place for them), utils (local executable utilities).
Symbolic links are necessary when you think a file might fit well in two places in a hierarchy. I don’t care too much about making a consistent rule about where to put the actual file.
Version control allows you to synchronize files across different computers, share them with others, track changes, roll back to older versions (where you can know what changed based on what you wrote in the log), and encourages good habits (e.g., documenting changes in each revision). I use version control for most of my current projects, even those that do not involve programming (e.g., my notes repository is about 700 text files). I don’t think which version control system you use is that important, though some (e.g., cvs) are worse than others. I use Subversion because it’s simple.
I store papers, books, and other writings that I keep in a directory named reference. I try to keep a consistent file naming scheme: Author_Year_JournalAbbreviation.pdf. I have a text file that lists my own journal abbreviation conventions. If the file is not from a journal, I’ll use something like “chapter” or “book” as appropriate. (Other people use softwares like Zotero or Mendeley for this purpose. I have Zotero, but mostly use it for citation management because I find it to be inconvenient to use.)
In terms of naming files, I try to think about how I’d find the file in the future and try to make it obvious if I navigate to the file or search for it. For PDFs, you often can’t search the text, so perhaps my file naming convention should include the paper title to help with searching.
README files explaining things in a directory are often very helpful, especially after returning to a project after several years. Try to anticipate what you might not remember about a project several years disconnected from it.
Synchronizing files across different computers seems to encourage me to make sure the directory structure makes at least some sense. My main motivation in cleaning things up was to make synchronizing files easier. I use rsync; another popular option is Dropbox.
Using scripts to help maintain your files is enormously helpful. My goals are to have descriptive file names, to have correct permissions (important for security; I’ve found that files that touched a Windows system often have completely wrong permissions), to minimize disk space used, and to interact well with other computers. I have a script that I titled “flint” (file system lint) that does the following and more:
checks for duplicate files, sorting them by file size (fdupes doesn’t do that; my script is pretty crude and not yet worth sharing)
scans for Windows viruses
checks for files with bad permissions (777, can’t be written to, can’t be read, executable when it shouldn’t be, etc.)
deletes unneeded files, mostly from other filesystems (.DS_Store, Thumbs.db, Desktop.ini, .bak and .asv files where the original exists, core dumps, etc.)
checks for nondescriptive file names (e.g., New Folder, untitled, etc.)
checks for broken symbolic links
lists the largest files on my computer
lists the most common filenames on my computer
lists empty directories and empty files
I’d be very interested in any other tips, as I often find my computer file organization to be a bottleneck in my productivity.
This is set to auto-delete everything in it weekly. I had a chronic problem where small files that were useful for some minor task or another from months or years ago would clutter up everything. This was my “elegant” solution to the problem and it’s served me well for years, because it gave me an actual incentive to put my finished work in a sensible place.
Although now that I think about it, it would be a better idea for it to only delete files that haven’t been touched for a week, rather than wiping everything all at once on a Saturday ..
Although now that I think about it, it would be a better idea for it to only delete files that haven’t been touched for a week, rather than wiping everything all at once on a Saturday ..
The Linux program tmpreaper will do this. It can be made into a cron job. I’ve got mine set for 30 days.
If you’re comfortable with command-line UIs, git-annex is worth a look for creating repositories of large static files (music, photos, pdfs) you sync between several computers.
I use regular git for pretty much anything I create myself, since I get mirroring and backups from it. Though it’s mostly text, not audio or video. Large files that you change a lot probably need a different backup solution. I’ve been trying out Obnam as an actual backup system. Also bought an account at an off-site shell provider that also provides space for backups.
Use the same naming scheme for your reference article names and the BibTeX identifiers for them, if you’re writing up some academic research.
GdMap or WinDirStat are great for getting a visualization of what’s taking space on a drive.
If your computer ever gets stolen, you probably want it to have had a full-disk encryption. That way it’s only a financial loss, and probably not a digital security breach.
It constantly fascinates me that you can name the exact contents of a file pretty much unambiguously with something like a SHA256 hash of it, but I haven’t found much actual use for this yet. I keep envisioning schemes where your last-resort backup of your media archive is just a list of file names and content hashes, and if you lose your copies you can just use a cloud service to retrieve new files with those hashes. (These of course need to be files that you can reasonably assume other people will have bit-to-bit equal copies of.) Unfortunately, there don’t seem to be very robust and comprehensive hash-based search and download engines yet.
I keep envisioning schemes where your last-resort backup of your media archive is just a list of file names and content hashes, and if you lose your copies you can just use a cloud service to retrieve new files with those hashes.
They probably know about it already. I think the eDonkey network is pretty much what I envision. The problem is that the network needs to be very comprehensive and long-lived to be a reliable solution that can be actually expected to find someone’s copy of most of the obscure downloads you want to hang on to, and things that people try to sue into oblivion whenever they get too big have trouble being either. There’s also the matter of agreeing on the hash function to use, since hash functions come with a shelf-life. A system made in the 90s that uses the MD5 function might be vulnerable nowadays to a bot attack substituting garbage for the known hashes using hash collision attacks. (eDonkey uses MD4, which seems to be similarly vulnerable to MD5.)
There probably are parts of the problem that are cultural instead of technical though. People aren’t in the mindset of wanting to have their media archive as a tiny hash metadata master list with the actual files treated as cached representations, so there isn’t demand and network effect potential for a widely used system accomplishing that.
Use the same naming scheme for your reference article names and the BibTeX identifiers for them, if you’re writing up some academic research.
This is very smart, and I’ll look into changing my bibiligraphy files appropriately.
If your computer ever gets stolen, you probably want it to have had a full-disk encryption. That way it’s only a financial loss, and probably not a digital security breach.
I want to reiterate the importance of this. I’ve used full-disk encryption for years for the security advantages, and I’ve found the disadvantages to be pretty negligible. The worst problem with it that I’ve had was trying to chroot into my computer, but you just have to mount everything manually. Not a big deal once you know how to do it.
My wording was unclear. I sort the list of duplicate files by file size, e.g., the list might be like 17159: file1, file2; 958: file3, file4. This is useful because I have a huge number of small duplicate files and I don’t mind them too much.
My first reflex is to exclaim that I don’t organize my files in any way, but that is incorrect: I merely lack comphrehension of how my filing system works, it’s inconsistent, patchy and arbitrary, but I do have some sort of automatic filing system which feels “right” and when my files are not in this system my computer feels “wrong”.
I wouldn’t reccommend duplicating my filesystem (it’s most likely less useful than most filing systems which aren’t “throw everything in one folder/on desktop and forget about it”) but I’ll note some key features:
Files reside inside folder trees of which the folders are either named clearly as what they are, or in obsuficating special words or made up phrases (even acronyms) which have only special meaning to me in the context of that paticular position in the file tree.
Different types of files have seperate folders in places
Folder trees are arranged in sets of categories, sub categories and filetypes (the order of sorting is very ad-hoc and arbitrary) you could have for example:
Media > Type of media > genre of media > Creator > Work
but it could just as easily have Creator at the root of the tree.
I really suggest you just make your own system or copy someone else’s; it will more likely than not provide more utility.
Edit: just to be clear I don’t have any sort of automated software which organizes my files for me, I am merely saying that my
mind organizes the files semiconciously so I’m not directly “driving” when the act of organizing occurs
The only thing that’s worked for me in the long term is making things public on the internet. This generally means putting it on my website, though code goes on github and shared-editing things go in Google Docs. Everything else older than a couple years is either gone or not somewhere I can find anymore.
For the last couple of years I have used Google drive exclusively for all new documents and am finding it works pretty well. I use a simple folder structure which makes it a bit easier when you want to browse docs, though the search obviously works really well.
root - random docs, works in progress, other
|-- AI—AI notes, research papers, ebooks
|-- dev - used as a dumping ground for code when transferring between PCs (my master dev folder lives on the PC)
|-- study -course notes, lectures
The best part is that I can access them from home, work or on the road (android app works very well), so backups and syncing is not an issue.
For files on the home PC I use a NAS which is pretty amazing, and allows access from any home PC or tablet/phone via a mapped drive. The folder structure there is:
|-- photos—all pictures, photos, videos,
|-- dev—master location for all source code
|-- docs—master location for all documents older than 2 years (rest is on google drive)
|-- info—with lots of subfolders, any downloaded ebook, webpage, dataset that I didn’t create
I don’t use the clients but I am annoyed there isn’t a simple way to download all google docs to the computer in RTF/Word or even text format—you can do full backups , but it they only work with Google drive. I don’t think Google will go out of business any day soon, so it is not an imminent risk at this stage.
Seems that tools like Google Drive take care of many issues you describe. Directory structure and symlinks are superseded by labels, version control is built-in, search is built-in and painless, synchronization is built-in, no viruses to worry about, etc.
That does sound nice. I wasn’t aware of the version control, and I’m somewhat curious how that would work. Thinking about it, I’d prefer the manual approach Subversion requires where I can enter a message each time. After doing a few searches, I’m not sure you can even get anything similar to a commit message in Google Drive. The commit messages I’ve found to be essential in decoding what separates older versions of files from newer ones.
There are some more practical issues for me. I run Linux. There’s no official Google Drive client for Linux, and last I checked the clients that exist aren’t good. I also sometimes work at a government science lab. They don’t allow any sort of cloud file synchronization software aside from their own version of SkyDrive, which requires me to log in via a VPN (and is a total pain). No idea if SkyDrive works on Linux, anyway. They don’t seem to be aware of rsync, thankfully. :-)
Every couple of weeks, Google Drive chooses an important document to lock me out of editing. This pretty much eliminates it as a serious solution for file management for me.
Being on my way to friends and thinking about living in the city vs. living on the outskirts I had a thought: Though property prices in cities are higher anything else is much closer: Restaurants, shopping, ideally friends, and any public transport. This means that I spend much less time just getting around and commuting. Also I save some amount on heating as single houses necessarily are more difficult to heat.
So on one hand I spend more on rent but on the other hand I save on time, energy and transportation. So the “actual” cost of living in the city is lower than it might seem at first. Has anyone done an estimation of this “actual” cost or should I do it myself as kind of an exercise? I am aware that there are quite some parameters to consider such as personal preferences on having parks nearby, noise levels and my desire to go out.
If you live in a city, you can (and probably should) get away with not owning a car. Not only is it unnecessary to get where you want to go, but due to property prices, parking is a gigantic hassle and expense. Walking works well for anything within a mile, biking for anything within about 5, public transit or a cab for the metro area, and car rentals (or borrowing a friend’s) can fill in for anything else that absolutely requires your own vehicle.
Not owning a car saves a significant amount of time and money and makes the math better for living in a more built-up area.
I’ve lived car-free for several years now and I think it’s one of the best choices I’ve ever made. I’m saving a lot of money, staying in great shape from biking, and avoiding a stressful commute. Most people think I’m an eccentric for this, but I’m okay with that.
True, some cities are much better built for that sort of thing than others. I had San Francisco, Seattle, New York City, and Valencia in mind specifically—less so Los Angeles and Dallas-Fort Worth.
Agreed with the lifestyle part, though—it’s really a question of how often you need to do things that require a car, and how expensive the next-best option is (taxi, car rental, ride-share, borrowing your neighbor’s). If you want to drive three hours to see your Mom every weekend, you probably don’t want to sell your car.
I would guess that it depends a lot of the particular city that you are talking about. That means it would be good if you make the estimation yourself.
Says more about the power of investment over 35 years than it does about bicycles, really.
I haven’t taken more than a glance over the grandparent’s calculator, but this shouldn’t be too hard to estimate. The Edmonds TCO calculator gives the five-year cost of ownership for a two-year-old base-spec Toyota Camry (as generic a car as I can think of) at $37,196, inclusive of gas, maintenance, etc. Assuming you buy an equivalent car every five years, that comes out to a monthly cost of $619. If you instead invested that money at a five-percent rate of return, then after 35 years of contributions, the resulting fund ends up being worth a hair over $700,000 -- not enough to fit the “millionaire” tag, but close.
There are plenty of less obvious costs associated with riding a bike instead of driving a car, of course—and some less obvious benefits. But the moral of the story is obviously “invest your money”.
This is a misleading calculation, since it assumes that the car has zero economic value over and above the bicycle. Whereas in fact there’s a very large value for many people, in terms of being able to move heavy things, go on vacations, get to and from work etc etc.
I disagree. Little of the value you mention is contingent upon owning a car. Renting cars as needed is very useful for those who don’t own cars (this is included in the calculator that sparked this discussion, in fact). It also offers a few other advantages: the flexibility of choosing a vehicle most appropriate for a task (e.g., Need more space? Get a truck.), the convenience of not doing maintenance, and the satisfaction of using a new car nearly every time.
Unless your job or other things in your life require you to move heavy objects frequently, not owning a car and renting is likely cheaper. Compare the costs of merely owning a car (thousands of dollars) to those of renting a truck from a local home improvement store (around $25/hour). I recently moved 1500 miles in a rented van. It was cheaper than using a car available to me for free with a trailer (the gas mileage made the difference), and offered a similar amount of space.
Vacations are easily done in rented cars in my experience.
You could make a good case that cars are useful for getting to and from work, but given the biases in thinking about commuting, this case is perhaps less strong than you imagine. Having tried driving, transit (bus and subway), carpooling, and cycling for my commute at different times in my life, I strongly prefer cycling for the cost, health benefits, low stress, and convenience. What you find works depends on too many things to list. (I have also read of people using rented cars for commuting, but I’m skeptical this works well.)
I think the best case for owning cars may be the convenience of picking up and dropping off of children. This seems to be the sticking point for many car-free individuals. Thankfully I’m a mild antinatalist, so this does not concern me.
In 4 years of not owning a car I have never found myself wishing I owned one. I can get all of the benefits of ownership that I care about at far lower costs.
This is a good point, and it’s worth noting that even making a reservation in advance might not guarantee what you want. When I was moving, Hertz accidentally rented the van my father and I reserved in advance to someone else. Ultimately they gave us a similar van and there were no major issues, but this still left me feeling uneasy.
I imagine that renting via a different service like Zipcar wouldn’t have this issue, though I haven’t used Zipcar yet.
ZIpcar has a different set of problems; there is probably a car of the specific type you want available, but getting to it (and from it) may not be convenient, because it has a specific place in the city that it lives. ANd if that’s 30 minutes away by your fastest non-car transportation, that’s pretty frustrating.
I suppose for moving etc., you could call a cab to get you from old home to the Zipcar, and then again from the Zipcar parking to new home. That feels strange and probably-inefficient, but I don’t have evidence to back up that feeling.
I can see how that would be frustrating. I guess my experience is not representative. The nearest Zipcar spot is a 2 minute bike ride from where I live. There’s also the option of car2go, which seems to have a much larger coverage where I live, but also no variety in car choices. I’m not sure how much the variety matters, as I would use a car only if I need to transport something large (and I might just use a truck there; Home Depot is 15 minutes away) or if I was going a long distance beyond where public transit takes me (> 10 miles).
I actually use both Zipcar and car2go, and find they complement each other pretty well. Car2go is good for things where you don’t need to transport anything but yourself (and possibly one other person) and expect to spend most of the time at your destination rather than traveling, and enables spontaneous decisions; Zipcar is good for transporting large things, making substantial grocery runs (i.e. a monthly trip to Costco for purchasing in bulk rather than weekly things like fresh fruit/vegetables), or whenever you expect to spend most of your trip traveling, or when you need to make reservations well in advance.
Ah yes, I had made the mistake of not looking through the link and so wasn’t clear on what was or wasn’t included. Thanks for flagging that.
I don’t mean to dispute your preferences; I take for granted that different options make sense for different people or for people in different stages of life. However, I’ve now had this conversation with bicycle advocates a few times, and they always seem to assume that their needs are a close proxy for my needs, and they’re not.
Looking through the details of the cost benefit analysis, there’s a bunch of factors that aren’t obvious that do need to be included.
I’ve had jobs where there wasn’t good transit, and where it wasn’t feasible to relocate myself. Much of the United States doesn’t have usable transit, and does have bike-unfriendly geography.
If you have to move furniture, yes, you can rent a truck. But if you have two weeks of groceries, or a passenger and a few suitcases, a car is fine, and a bike (even with a trailer) is not fine. Car share isn’t a perfect substitute, since often the car-share is a significant distance from where you live, and since it isn’t reliably there when you want it. There’s real economic value to having the car exactly where you want it, when you want it.
Renting cars for vacation or medium-distance overnight travel can be an option. It isn’t included the linked-to calculation and it can get expensive depending on whether you need to keep the car rented for the period in which you aren’t actively using it.
Right now, some people own cars and some don’t. I’m sure there’s some status quo bias and some bias in favor of social convention for owning the car. Beyond that, why do you think people are making this decision irrationally? There’s enough car-free folks that I think it’s not that hard to see what the benefits and costs of the lifestyle are.
I appreciate your response and interest. This post turned out to be rather long.
I’ve now had this conversation with bicycle advocates a few times, and they always seem to assume that their needs are a close proxy for my needs, and they’re not.
I don’t claim bikes are right for everyone, but I do claim that they are right for a much larger fraction of the population than most believe.
Also, I think if you tried switching to bikes you’d find a lot of your “needs” aren’t actually such, or can be fulfilled adequately or better in ways that do not require a car.
But if you have two weeks of groceries, or a passenger and a few suitcases, a car is fine, and a bike (even with a trailer) is not fine.
People who have never tried to buy groceries with a bike tend to think it’s difficult or impossible. It’s perfectly fine if you have a bike with baskets. I buy groceries once per week. If I had a trailer, I could easily go a month or longer.
Any reason you need two weeks’ worth of groceries at once? It’s probably just because that’s what you’re used to getting in your car. Going once a week is not bad, in fact, I prefer it because I can get fresh food. There’s another advantage: I buy only what is necessary. There’s no room for junk food.
As for taking a passenger and luggage, it is perfectly possible. The main issue is that most bikes are designed for recreation, not utility. People don’t complain that sports cars can’t move mattresses easily, so don’t do the equivalent for sports bikes. There are two-seated bikes, sidecars, and cargo bikes. Such things are uncommon, but they do exist. A second bike is also a possibility for a “passenger”.
Car share isn’t a perfect substitute, since often the car-share is a significant distance from where you live, and since it isn’t reliably there when you want it.
You are overly pessimistic about car-share. I suspect that you overestimate how often someone would use car-share, and underestimate the reliability of such services. I haven’t used car-share once in the 6 months or so since I signed up, but I have checked it a few times. Every time I checked, cars were available. Car-share services have an incentive to be reliable.
As for the distance, that could be an issue, but most people who choose to not own cars move to places that fit their needs. If that includes car-share, they’d have it. Plus, car-share services are growing very quickly, so if it’s not there now, wait.
There also exist less formal car sharing services, and there’s always the possibility of bumming a ride off a friend.
There’s real economic value to having the car exactly where you want it, when you want it.
The value is largely subjective, and in my experience it tends to evaporate when you don’t own a car, though that might be selection bias. Also, don’t discount the disadvantages in your value calculation.
I’m sure there’s some status quo bias and some bias in favor of social convention for owning the car. Beyond that, why do you think people are making this decision irrationally?
Great question. I’m not entirely sure. I’ll list what I can think of.
First, I think very few people rationally think about their transit choices. To most people, driving is synonymous with transportation. Status-quo bias and familiarity are big factors. Consider it a learned cached thought.
Second, I don’t think most drivers understand the disadvantages of driving. Having discussed this with a number of people, they seem to be incredulous that driving could be unhealthy or expensive.
In the bicyclist literature, many people write about how they were genuinely surprised by how much money they have saved since they stopped driving. I suspect part of it is that few see the costs added up in total. Rather, you see them in smaller chunks: your car payment, your gas trip, your repair bill, etc. In his book How to Live Well Without Owning a Car, the author (Chris Balish) describes how he sold his car unintentionally early and started taking public transit. He thought this was temporary because he was waiting to buy a new car. At the end of the month he was checking his bank account and was absolutely shocked to see that he had $800 more than he usually does at the end of the month. He wasn’t sure if there was a mistake, so he calculated how much his car was previously costing him, and sure enough, it came out to $800 per month.
Third, there really does seem to be something outright irrational about people’s driving behavior. Yvain briefly mentioned this in his post titled Rational Home Buying. The best summary I have seen of this topic is in a book titled Commuting Stress. Researchers have repeatedly shown people are willing to commute by car for long distances in ways that do not compensate for anything they get out of it. It doesn’t matter that their job pays well, or that their house is large and cheap. They’re just stressed out and miserable from the commute. The book also details how people tend to prefer driving even when public transit is cheaper and faster. As I recall, the book suggested that the main factor behind these findings is the perception of control that cars provide. I’m not entirely sure I buy this theory or that I’m remembering it right.
Fourth, many people’s self-worth and status are related to their cars. Their car is part of their identity. They don’t make the choice to drive for rational reasons. There’s a similar group of bicyclists, though they are far fewer in number. A bike is a fashion accessory to these folks.
Fifth, there’s also the fact that (in the US) our transportation system is designed primarily for cars. All others modes of transportation are afterthoughts, which tends to make them inconvenient, dangerous, and/or inadequate. I hear from many people that they would ride bikes if it were not so dangerous. This is not necessarily irrational, but I think some folks overstate the danger of biking or understate the danger of driving.
There’s enough car-free folks that I think it’s not that hard to see what the benefits and costs of the lifestyle are.
I disagree. In my immediate social circle, I know zero people who don’t own cars or who don’t have access to a family car or something similar. I imagine this is true of most people in the US. To meet other folks like that I have to go to bicyclist meetups.
Yes. There are people who would be better off with a bike than a car. I take that point and I believe it’s easier than some people think.
The problem is that carfree isn’t the right choice for everybody and it’s not always obvious from the outside who it is or isn’t appropriate for. If you aren’t careful, advocacy here can come off as thinking you know your interlocutor’s life and needs better than they do. (Which is a bit irritating.)
I’m going to describe a bit about my experiences, just so that you and the readers have a sense why somebody might reasonably benefit from owning a car.
I live in a small town in the northeast. We don’t have particularly good public transit in, out, or around town. By transit, it’s about two hours from my door to the nearest major city. It’s less than half that by car. It’s cold and slushy here a lot and therefore not a particularly pleasant place to bike.
The grocery stores I typically go to are five miles away, via a major expressway that isn’t bike-safe. (The ones that are bikeable are small and expensive.)
When last I checked, there were only four car-share vehicles within ten miles. They’re heavily used.
I’m in a medium-distance relationship, and that involves a lot of medium-distance travel with overnight stays. Having a car makes this much cheaper for a given amount of visiting, and I think it’s worth some money to see my special friend.
I have some experience with the car-free life. I used to live in a dense urban area. Most of my friends didn’t own cars and didn’t want to. I myself was happily car-free for five years. I did the math before buying the car, and I’m pretty sure I come out well ahead.
I would be interested to calculate the health benefits and costs. I suspect this is hard to do, because the risk of bike accidents is hugely variable depending where you live and where you travel.
The problem is that carfree isn’t the right choice for everybody and it’s not always obvious from the outside who it is or isn’t appropriate for. If you aren’t careful, advocacy here can come off as thinking you know your interlocutor’s life and needs better than they do. (Which is a bit irritating.)
I understand how other-optimizing can go wrong. Different circumstances make different solutions optimal. A lot of what you described fits with my earlier knowledge. Still, previously I hadn’t considered that relationships could be an issue, but I’ve now learned better.
I have found that many drivers is are prone to other-optimizing cyclists. People frequently request (actually, insist) to give me rides because they think what I’m doing is dangerous or a bad idea for other reasons (convenience, largely). They see it as doing me a favor, but it’s actually rather annoying. I will oblige sometimes, but mainly when the weather is bad, or if it’ll help the requester feel better. I have never seen bicycle advocates be so assertive. Imagine if bike advocates regularly insisted that no, you aren’t going home in your car, you are taking a bike. Car “advocates” (if you will) have done the same for me many times.
I would be interested to calculate the health benefits and costs. I suspect this is hard to do, because the risk of bike accidents is hugely variable depending where you live and where you travel.
This is definitely hard, and it hasn’t been done yet for anywhere in the US. I previously wrote a post about the net effects of cycling on health, and in short, the only good study I could find on the subject used data from Europe. Europe is generally considered to be much safer for cyclists than the US. Many cycling advocates in the US cite this report as affirmation that cycling has net health benefits without realizing why it does not apply. I am not sure whether the average net health effects in your typical US city are positive, and I lean towards negative for the moment.
Another factor (that few recognize) is that the health benefits are reduced for people who are in good shape. I run fairly regularly, and thus the health benefits of cycling are limited for me. Though, the cycling has worked out well to keep me in reasonable shape when I’ve been too busy to run.
One can turn any expense into a high number by applying some not-quite-realistic rate of return[1] over a long period of time. I remember reading a web comic which applied this procedure to an iPhone, with enough creativity you could probably make coffee at Starbucks into a million-dollar expense as well.
In some sense it is true, if you invest regularly and wait a long time you’ll likely accumulate considerable savings. But singling out one particular expense for that kind of treatment, without the context which you provided above, is exactly what Lumifer called it: blatant propaganda.
[1] E.g., William Bernstein “Five Pillars of Investing” cites 3.5% long-term real (ie after inflation) rate of return from stocks.
The corresponding number at 3.5% is $500,000. I wasn’t trying to argue for any particular value, merely that the cited value isn’t wildly off base, and that long-term investment is how you get into its neighborhood.
Yes, I understand that and I didn’t mean to criticize your argument, which is good, I meant to attack the original source which was trying to impress the audience with a large number without explaining where it really comes from (which you did explain). Sorry that I didn’t express this more clearly.
The cited value isn’t wildly off base, in the same sense it wouldn’t be wildly off base to say that if you work at McDonald’s and invest every penny you made, after 40 years you’ll be a millionaire. So car ownership is really expensive in the same sense in which McDonald’s pays really well.
Not to accuse you of doing this, but I’m a little bemused at how my post seems to have been taken as a broad apologetic for the ancestor’s cost calculator when I was trying to make the point that, when you’re playing with values in the high hundreds of dollars per month, conclusions like “investing this will make you a millionaire after 35 years” prove a lot less than they sound like they do. Hell, the first sentence even says that bicycles aren’t the important thing to be thinking about there.
So in other words, I think we agree. I could probably have been clearer with my examples, though.
You seem to be reading in specificity that I didn’t put there. There aren’t any securities that can provide a guaranteed 5% rate of return after inflation, but those kinds of returns are fairly reasonable for a well-diversified portfolio (though they will of course go down from time to time). Maybe a little high, and mea culpa if so, but certainly not high enough to rate a “my ass”.
You seem to be reading in specificity that I didn’t put there.
Not really. I think that picking an arbitrary number and compounding it far into the future is a very flawed method of estimating future value—and that’s not just because this number is arbitrary.
certainly not high enough to rate a “my ass”.
My ass was specifically upset about the million dollar figure that the linked-to web page prominently waved about, not about anything in your post.
Some of the default numbers in the calculator do seem high to me, but that’s not the point. Not owning a car does save a lot of money. Do you own calculation with your own numbers if you are skeptical. I linked to this one because I think it is comprehensive.
I’ll unpack what I mean by “how much you’d save”. The savings is between two hypothetical situations. So yes, given the choice between buying a Lamborghini and not, you’d “save” money by not buying one. The same applies for a yacht. This language is commonly used colloquially.
Even if you didn’t include inflation, you’d end up with a large number for the default settings: about $275,000. This isn’t like the iPhone case Aleksander mentioned. Cars are quite expensive, and people do actually pay hundreds of thousands of dollars over their lifetimes. If your issue is with the inflation, consider the cost without inflation. If the duration is the issue, look at the costs for a single year. If you have a particular problem with any of the costs cited in the calculator, I’d be interested in learning which costs you think are unrealistic and why. Otherwise, I don’t understand your argument.
Doesn’t that make you suspect that this particular way of comparing things is nonsensical?
It makes perfect sense to me even if it’s not strictly correct. Your pedantry is what makes the least sense to me here.
Let’s say someone is buying a house. There are two houses which are more or less identical, but house A costs 10 times as much as house B. Do you not think it is makes sense to choose house B because it “saves” money? The car ownership comparison is between two vehicles, not between doing nothing and buying something if that is your issue.
My general problem with the linked-to web page is that it is precisely the thing that LW tries to teach to ignore.
It is not helpful advice or good evidence—it is a blatant attempt to force the reader to a predetermined conclusion. It is deliberately dishonest.
I don’t like manipulative propaganda which tries to pretend it’s just supplying facts.
I am seeing a trend. You make many assertions without evidence. Explain why you think this is “manipulative propaganda” or how you know the author is “deliberately dishonest”.
The most you’ve indicated so far is that you don’t like the investment analogy, though you ignored the part of my post that showed the investment analogy is not necessary to show that not owning cars is cheaper, or that you think the specific numbers in the calculator are “dubious”, but you have neglected to specify which you think are dubious.
If the calculator is trying to force the reader to a predetermined conclusion, I don’t think the author of it is doing such a great job given that you can change every number in it.
We agree that the million dollar number is high, and I do think the rate of interest is too high in the calculator. But, I’m afraid you’re being vague and hyperbolic otherwise.
There are two houses which are more or less identical … The car ownership comparison is between two vehicles
I’m sorry, are you asserting that a car is “more or less identical” to a bicycle?
This is turning into a pissing match which I don’t have much interest in at the moment. I stand by my opinions which you are, of course, free to disregard.
I’m sorry, are you asserting that a car is “more or less identical” to a bicycle?
No, the point of my analogy was to highlight the cost difference part of the reasoning, not to say that cars and bikes are largely equivalent. Both have very different advantages and disadvantages as means of transportation.
This is turning into a pissing match which I don’t have much interest in at the moment. I stand by my opinions which you are, of course, free to disregard.
Given that you already seem to have disregarded much of what I’ve written, I’ll oblige.
I recall, but am unable to find, a small study that looked into living in typical American suburbs and driving vs. living in the center city and taking public transit, walking, or biking. As I recall, the authors concluded that either is comparable in total costs for the “average” city. If that’s true, then I think it’s a strong case for living in the city given that people underestimate how stressful their commutes are and that you’ll save time.
Others, especially bicycle advocates, have made the same comparison. If you don’t own a car, you can turn your savings into higher rent. In my experience, you’ll easily save more money and time going the car-free route. I’d recommend the book How to Live Well Without Owning a Car for an introduction to this lifestyle. Note that this lifestyle is not for everyone, but I do think it’s a good idea for a large segment of the population.
Edit: I think this is the “study” I referred to above. It’s interesting to see how my memory distorted things; I thought of this as an academic study, and couldn’t find it among the papers I saved. No wonder, as it was merely a newspaper column.
Some time in the past couple hours, I got karmassassinated. Somebody went through and downvoted about 30 or so comments I’ve made, including utterly uncontroversial entries like this one and this one. It’s a trivial hit for me, but I mention it in case anyone is gathering data points to identify the source of the problem.
Some time in the past couple hours, I got karmassassinated. Somebody went through and downvoted about 30 or so comments I’ve made, including utterly uncontroversial entries like this one and this one. It’s a trivial hit for me, but I mention it in case anyone is gathering data points to identify the source of the problem.
Curious, I was just now seeking the latest Open Thread so that I could make the same observation—with near the same wording. If my memory of previous vote counts is correct then the change for me was exactly −3 across the board, for uncontroversial posts as much as the controversial ones. I wonder if our interactions in the past day or so include any overlap with respect to who we were arguing with. That wouldn’t be nearly enough evidence to be confident about the culprit(s?) but enough to prompt keeping an eye out. Like you the hit is trivial to me (it doesn’t put a dent in the ~30k karma and even the last week karma remains distinctly positive).
Those karmassassains (and some others who share their ill-will but have different ethics) may be pleased to note that I’m likely to give them exactly what they want. This is a rare enough response for me that I can’t help but share my surprise. I am candidly and highly averse to supplying an incentive structure whereby defective behaviour is rewarded with desired outcomes rather than worse outcomes. As a core aesthetic that pattern is abhorrent to me. Yet even for me the preference has limits and the opportunity cost for satisfying that preference can be too high.
There are many people on lesswrong that I respect and value discussing and exploring new concepts with. Yet by the very nature of internet forums the people who are most valuable to talk to aren’t the ones you end up talking to the most. Simply because putting “I agree” all over the place is considered spam and it is hard to reply in a cooperative ‘agree and elaborate to keep the ideas flowing’ manner because people are so damn conditioned to consider all replies to be somewhere at the core arguments opposing them or to be condescension.
I decided six months ago that for me personally the impulses regarding people wrong on the internet are too much of a liability now that the demographic here has changed so drastically from when we seeded the site with the OvercomingBias migration. I might try back in another six months—or perhaps if I reconfigure my supplement regime more in the direction of things that I know increase my inclination towards navigating petty social games elegantly. For now, however, real world people are just so much more enjoyable to talk to than internet people.
To the various folk I’ve been chatting to over PM: I’m not snobbing you, I’m just not here.
I don’t understand: are you leaving the forum because of the karmassassins or because of “people wrong on the internet”? These seem like very different reasons.
You’d be missed, wedrifid. Not that my opinion counts for much (being one tenth the veteran you are), but there you have it.
(I did downvote you occasionally. I also am in favor of more explicit rules regarding what voting patterns are considered to be abusive versus valid expressions of one’s intent. There is no consensus even amongst old-timers, if memory serves e.g. Vladimir Nesov—among others—saw karmassassinations as a valid way of signalling that you’d like someone to leave the forums. There may an illusion of transparency at work—what is an obvious misuse to you may not seem so to others, unless told so explicitly. ETA: I’d like some instructions from the editor on this topic, a.k.a. “I NEED AN ADULT!”)
I don’t endorse indiscriminate downvoting, but occasionally point out that fast systematic downvoting can result from fair judgement of a batch of systematically bad comments.
(Prismattic’s counterexamples, if indeed from the same set, indicate that it’s not the case here.)
The worst possible reaction to this phenomenon is to point it out publicly rather than to quietly report it to whoever cares (currently no one among the site admins), since you noticing it, even occasionally, is enough of a positive reinforcement for the culprit to continue. I also mentioned on occasion that unfairly downvoted comments tend to get upvoted back up over time, so no point sweating it. So I am downvoting your comment to encourage you to silently shrug off karma sniping in the future.
I disagree. If you value the contributions of comments above your or your aggressor’s ego—which ideally you should—then it would be a good decision to make others aware that this behavior is going on, even at the expense of providing positive reinforcement. After all, the purpose of the karma system is to be a method for organizing lists of responses in each article by relevance and quality. Its secondary purpose as a collect-em-all hobby is far, far less important. If someone out there is undermining that primary purpose, even if it’s done in order to attack a user’s incorrect conflation of karma with personal status, it should be addressed.
Do you take notes when you read non-fiction you want to analyse? If so, how much detail? On the first reading? Just points of disputation, or an effort at a summary?
I tend to go for notes chapter-by-chapter. Among other things, it takes long enough to read a chapter that I get to the point where I can remember any particular idea with ease but the flow of concepts has mostly been lost and all of the pieces have been shunted into long-term memory. If I can mostly reconstruct the chapter, great, if not, I go back and figure out what was where and why it was there. (It might be worthwhile to always go back and see what you missed / got wrong, but that would probably get close to doubling the necessary reading time.)
I tried doing this briefly when I was experimenting with Workflowy but I found it excruciatingly boring and couldn’t keep it up; it was close to ruining reading non-fiction for me and I stopped immediately when I noticed that.
Workflowy’s not the best tool for note-taking—it’s great for making structured lists of items that you only need to identify or briefly describe, making it a fantastic e.g. task list, but adding more structure to any particular item is pretty clunky (though at least possible).
I’ve historically used Keynote NF, but it’s PC-only. Currently looking for an app that does the same thing on iDevices, since my iPad’s becoming my go-to note-taking tool, but I haven’t found anything that does everything I want yet.
Yes, if I don’t take notes on the first reading there won’t be a second reading. Not much detail—more than a page is a problem (this can be ameliorated though, see below). I make an effort to include points of particular agreement, disagreement and some projects to test the ideas (hopefully projects I actually want to do rather than mere ‘toy’ projects).
Now would be a good time to mention TreeSheets, which I feel solves a lot of the problems of more established note-taking methods (linear, wiki, mindmap). It can be summarized as ‘infinitely nestable spreadsheet/database with infinite zoom’. I use it for anything that gets remotely complex, because of the way it allows you to fold away arbitrary levels of detail in a visually consistent way.
Use a piece of paper as a bookmark on which I take notes (noting page numbers of bits I don’t understand, attempts to summarize/reorganize, interesting insights, notes while I work something out, random ideas the text gives me) - it’s not rare that I end a book with two or three pages of notes stuffed into them. I’ll then go over those notes and maybe enter some bits in Anki
Directly enter stuff in Anki if it’s atomic enough (it often isn’t)
Takes notes in Google docs (either if I’m near a computer at a time, or if I want to have “searchable” notes or look up related info on the internet)
Usually I’ll read it in depth first, then once I know if it’s worth taking notes, I’ll return to it and scan through quickly for those points I know are worth grabbing.
I’ve fairly recently (over the past month or so) started taking notes on pretty much everything, as part of a drive to capture as much useful content in Evernote as possible. A lot of what I’m doing at the moment is probably quite wasteful, but I expect to figure out what is and isn’t useful in fairly short order.
For ebooks I’ve been making judicious use of highlighting on the Kindle. Unfortunately the UK Kindle service isn’t as feature-rich as the US counterpart, so I’m still looking into ways of parsing my clippings file into Evernote. For hardcopy books and lectures, I’ve taken to either writing bullet-pointed lists or mini-essays. This also seems to have the positive side-effects of forcing me to clearly elucidate on ideas I’ve just taken in, and stopping me ruminating on the areas in question.
For example, late last night I was reading about the concept of “burden of proof” in legal and rhetorical contexts. This is a bit of a personal bugbear, and I ended up writing several hundred words informed by what I was reading. Not only can I now reference this when necessary, but it stopped me from trying to sleep with a bunch of proactive burden-of-proof-related arguments running through my head.
As I read textbooks, I summarize the most important concepts (along with doing the exercises, if there are any) and write them in a notebook and then later (less than a week) enter the notes into Anki as cloze-delete flashcards. I don’t have an objective measure of retention, but I believe that it has vastly improved relative to when I would simply read the book.
I constantly take notes on papers that I read. If the paper’s topic is familiar to me, I just take summary notes and points of dispute. If the paper has math or lots of unfamiliar terminology (especially common in anatomy and biochemistry), then I copy paragraphs from the paper as I read them, and then reformat the copied sentences (usually breaking up clauses) or then work out the math for myself.
I take notes for two main reasons. One, my memory is poor, and if I didn’t take notes I would just lose all of the research I do. I’ve completely forgotten the control theory I read a few months ago, but it doesn’t feel like a loss because it’s still in my exocortex. Second, I tend to hoard info, and if I didn’t summarize and discard the papers I read, they would accumulate in my documents folder without limit. Even while taking notes, I gain about a thousand papers a year. Hard drives are cheap, but there’s still a huge cost to not being able to find the paper you need when you need it.
He is a rationalist who is deeply against living by social norms and just sees them as defaults, and is “non-default” about pretty much everything including work path, values etc., as well as lifestyle including cooking (lives off takeaway so as not to spend time grocery shopping and cooking), cleaning (does not have much of a regular cleaning habit – I broke glass in his kitchen a month ago and he said I shouldn’t have to clean it up and it’s still there), sleeping (he has no regular sleep schedule and sleeps when he wants to. The kind of work that he does is largely from home with long deadlines. He ships a prescription anti-narcolepsy from overseas which allows him to stay awake for long stretches on little sleep – although he plans on giving this up soon). He also takes party drugs and for a while, was taking quite high amounts of MDMA on a weekly basis, which pretty much wiped him out the day or two after. I have always been uncomfortable around drugs, although he did not really know the extent of my discomfort, and I can’t take them myself due to mental health. He dropped back to once a month after I expressed concerns about escalation and he acknowledges that he has some susceptibility to addiction, although he is not currently dependent.“
One serious issue we had was that he gave me an STI. He had rationalised that he had a very limited risk of having an STI so despite my repeated requests and despite being informed that a previous partner had been infected, did not get tested. I was furious at his intellectual arrogance and the danger he had put us both in. I lost a week of unpaid time off work and my mum had to nurse me through my allergic reaction to the treatment. I told him I wanted to break up, but we ended up supporting each other through the treatment and ultimately decided to get back together and work things out.
That he fails at basic instrumental rationality. I would be very interested in seeing a valid cost-benefit analysis which can justify leaving dangerous broken glass around, eating only take-out, and ignoring the risk of STI...
What I make of it is that “rationalist” is getting to sound cool enough that there are going to be people who claim to be rationalists even though they aren’t notably rational.
Lists of “how to identify a real rationalist” will presumably run up against Goodhart’s Law, but it still might make sense to start working on them.
Just because a manipulative narcissistic asshole calls himself a rationalist, it doesn’t make him rational in the meaning of the word coined by Eliezer and generally shared here.
He is a rationalist who is deeply against living by social norms and just sees them as defaults, and is “non-default” about pretty much everything
As soon as I read that, I thought “uh oh, this is bad...”, long before getting to the part about the STI. And unfortunately, this first sentence describes too many people in the LessWrong community, even ones who are more careful about STIs. Maybe this will be a wakeup call to people to stop equating “rationalist” with “rejecting social norms.”
I think this one by Yvain works as a plausible explanation for why this is unlikely to change.
Do you deliberately pick topics that cause controversy here, or is your model of this community flawed? Either way I find people’s reactions to your posts amusing.
I think this one by Yvain works as a plausible explanation for why this is unlikely to change.
I love Yvain’s post on meta-contrarianism, and yeah, it pinpoints a major source of the problem. I guess I tend to be slightly more optimistic about the possibility of LessWrong changing in this regard, but maybe you’re right.
Do you deliberately pick topics that cause controversy here, or is your model of this community flawed? Either way I find people’s reactions to your posts amusing.
When I write my more controversial posts, I do so knowing I’m going against views that are widely-held in the community, though I often had difficulty predicting what the exact reaction will be.
If you’re going to argue using appeals to tradition, it helps to know something about the history of the tradition you’re appealing to. In particular whether it has centuries of experience behind it or is merely something some meta-contrarians from the previous generation thought was a good idea.
One serious issue we had was that he gave me an STI. He had rationalised that he had a very limited risk of having an STI so despite my repeated requests and despite being informed that a previous partner had been infected, did not get tested.
If only there was a simple magic word that transferred control of one’s own sexual health into one’s own hands. Like “No”, for instance. For creative emphasis or in response to repeated attempts to initiate sex despite refusal to honour basic safety requests there are alternative expressions of refusal such as “You want to put that filthy, infested thing inside me? Eww, gross!”
The letter writer mentions her (ex-)boyfriend’s OK Cupid account screenname in the comments. I looked at it and didn’t recognize him. I checked the same screenname on Reddit, which she said he also used (no account under that name) and here (an account exists by that name, but I don’t think it’s the same person—in particular the OKC account has a characteristic punctuation error that the local account doesn’t make). If anyone from Missouri wants to see if he looks familiar there are breadcrumbs to follow.
It’s possible that the choice of the word “rationalist” was a coincidence and this is not a peripheral community member mistreating his Muggle girlfriend, but just some random guy. I think it is worth finding out if we can.
There is often mentioned “LW” in the comments, but it seems to be an abbreviation for Letter Writer (the person who wrote the letter about the “rationalist”), not LessWrong. It took me some time to realize this.
Well, I expected that making “rationality” popular would bring some problems. If we succeed to make the word “rationality” high-status, suddenly all kinds of people will start to self-identify as “rationalists”, without complying with our definition. (And the next step will be them trying to prove they are the real “rationalists”, and all the others are fakes.) But I didn’t expect this kind of thing, and this soon.
On the other hand, there doesn’t have to be any connection with us. (EDIT: I was wrong here.) I mean… LessWrong does not have a copyright on “rationality”.
Thanks for pointing this out; I didn’t read all the comments previously (only the first third, or so) because there is so many of them. (Here is a link to the HPMoR comment, for other curious people.) I’ve read the remaining ones now.
By the way, the comments are closed today. (They were still open yesterday.) I am happy someone was fast enough to post there this:
tl;dr: LW, this dude is calling himself “rational” but is not rational.
Reading the comments, I am impressed by their high quality. I actually feared something like using “rationality” as a boo light, but there is only an occassional fallacy of gray (everyone is equally irrational), and only a very few commenters try to generalize the behavior to men in general. Based on my experience from the rest of the internet, I expected much more of that. Actually, there are also some very smart comments, like:
it is rational and logical to take emotions into account. Emotions are real things that human beings have – we have them often for good reasons, and we’re not Vulcans (besides, I’m betting both Spock and Tuvok have really neat clean quarters and would never leave broken glass lying around to defy the man, because it would not be logical). Anyway. Emotions are valid. Caring for the emotional well being of your loved ones is important and also a rational choice. People have different preferences for things, and feel differently about things, and negotiating those differences is a huge part of a good relationship.
If by chance the person who wrote the letter comes here, I strongly recommend reading “The Mask of Sanity” for a descriptions of how psychopaths work. I believe some of the examples would pattern-match very strongly.
And the lesson for the LessWrong community is probably this: Some psychopaths will find LW and HPMoR, and will use “rationality” as their excuse. We should probably have some visible FAQ that contradicts them. (On a second thought: Having the FAQ on LessWrong would not have helped in this specific case, because the abusive boyfriend only showed her HPMoR. And having this kind of disclaimer on HPMoR would probably feel weird. Maybe the best solution would be to have a link to the LessWrong FAQ on the HPMoR web page; something like: “This fan fiction is about rationality. Read here more about what is—and what isn’t—considered rational by its author.)
One serious issue we had was that he gave me an STI. He had rationalised that he had a very limited risk of having an STI so despite my repeated requests and despite being informed that a previous partner had been infected, did not get tested.
I thought accepted theory was that rationalists, are less credulous but better at taking ideas seriously, but what do I know, really? Maybe he needs to read more random blog posts about quantum physics and AI to aspire for LW level of rationality.
(...) despite being informed that a previous partner had been infected (...)
So uh, let’s run down the checklist...
[ X ] Proclaims rationality and keeps it as part of their identity. [ X ] Underdog / against-society / revolution mentality. [ X ] Fails to credit or fairly evaluate accepted wisdom. [ ] Fails to produce results and is not “successful” in practice. [ X ] Argues for bottom-lines. [ X ] Rationalizes past beliefs. [ X ] Fails to update when run over by a train of overwhelming critical evidence.
Well, at least, there’s that, huh? From all evidence, they do seem to at least succeed in making money and stuff. And hold together a relationship somehow. Oh wait, after reading original link, looks like even that might not actually be working!
That really depends on the game. Take Ninja Gaiden, or Super Mario Brothers, or Castlevania 1 - the difficulty ramps steeply but your characters’ abilities do not ramp at all. Zelda levels generally get harder faster than you get tougher (with some exceptions).
In some games, choosing the right advancements is a major part of the game. It’s seen most clearly in Epic Battle Fantasy 2: over the course of the game, you get 10 abilities (and only 10, out of a long lineup); picking the right ones (and ensuring that you qualify for them) is a lot of the challenge of the game. There is some of a ‘numbers go up’ element to it, but if you don’t pick the right things, you are screwed—and there’s no grinding to get ’em all. The other installments in the series unfortunately lack this.
That said, I play single-player games a whole lot less than I used to, partially due to this.
I got around to watching Her this weekend, and I must say: That movie is fantastic. One of the best movies I’ve ever watched. It both excels as a movie about relationships, as well as a movie about AI. You could easily watch it with someone who had no experience with LessWrong, or understanding of AI, and use it as an introduction to discussing many topics.
While the movie does not really tackle AI friendliness, it does bring up many relevant topics, such as:
Intelligence Explosion. AIs getting smarter, in a relatively short time, as well as the massive difference in timescales between how fast a physical human can think, and an AI.
What it means to be a person. If you were successful in creating a friendly or close to friendly AI that was very similar to a human, would it be a person? This movie would influence people to answer ‘yes’ to that question.
Finally, the contrast provided between this show and some other AI movies like Terminator, where AIs are killer robots at war with humanity, could lead to discussions about friendly AI. Why is the AI in Her different from Terminators? Why are they both different from a Paperclip Maximizer? What do we have to do to get something more like the AI in Her? How can we do even better than that? Should we make an AI that is like a person, or not?
I highly recommend this movie to every LessWrong reader. And to everyone else as well, I hope that it will open up some people’s minds.
I haven’t seen Her yet, but this reminds me of something I’ve been wondering about.… one of the things people do is supply company for each other.
A reasonably competent FAI should be able to give you better friends, lovers, and family members then the human race can. I’m not talking about catgirls, I’m talking about intellectual stimulation and a good mix of emotional comfort and challenge and whatever other complex things you want from people.
I’m not talking about catgirls, I’m talking about intellectual stimulation and a good mix of emotional comfort and challenge and whatever other complex things you want from people.
I had in my head, and had asserted above, that “catgirl” in Sequences jargon implied philosophical zombiehood. I admit to not having read the relevant post in some time.
No slight is intended against actual future conscious elective felinoforms rightly deserving of love.
Yeah. People need to be needed, but if FAI can satisfy all other needs, then it fails to satisfy that one. Maybe FAI will uplift people and disappear, or do something more creative.
People need to be needed, but that doesn’t mean they need to be needed for something in particular. It’s a flexible emotion. Just keep someone of matching neediness around for mutual needing purposes.
And when all is fixed, I’ll say: “It needn’t be possible to lose you, that it be true I’d miss you if I did.”
People need to be needed, but if FAI can satisfy all other needs, then it fails to satisfy that one. Maybe FAI will uplift people and disappear, or do something more creative.
From an old blog post I wrote:
Imagine that, after your death, you were cryogenically frozen and eventually resurrected in a benevolent utopia ruled by a godlike artificial intelligence.
Naturally, you desire to read up on what has happened after your death. It turns out that you do not have to read anything, but merely desire to know something and the knowledge will be integrated as if it had been learnt in the most ideal and unbiased manner. If certain cognitive improvements are necessary to understand certain facts, your computational architecture will be expanded appropriately.
You now perfectly understand everything that has happened and what has been learnt during and after the technological singularity, that took place after your death. You understand the nature of reality, consciousness, and general intelligence.
Concepts such as creativity or fun are now perfectly understood mechanical procedures that you can easily implement and maximize, if desired. If you wanted to do mathematics, you could trivially integrate the resources of a specialized Matrioshka brain into your consciousness and implement and run an ideal mathematician.
But you also learnt that everything you could do has already been done, and that you could just integrate that knowledge as well, if you like. All that is left to be discovered is highly abstract mathematics that requires the resources of whole galaxy clusters.
So you instead consider to explore the galaxy. But you become instantly aware that the galaxy is unlike the way it has been depicted in old science fiction novels. It is just a wasteland, devoid of any life. There are billions of barren planets, differing from each other only in the most uninteresting ways.
But surely, you wonder, there must be fantastic virtual environments to explore. And what about sex? Yes, sex! But you realize that you already thoroughly understand what it is that makes exploration and sex fun. You know how to implement the ideal adventure in which you save people of maximal sexual attractiveness. And you also know that you could trivially integrate the memory of such an adventure, or simulate it a billion times in a few nanoseconds, and that the same is true for all possible permutations that are less desirable.
You realize that the universe has understood itself.
Yes, if you skip to the end, you’ll be at the end. So don’t. Unless you want to. In which case, do.
How long are you going to postpone the end? After the Singularity, you have the option of just reading a book as you do now, or to integrate it instantly, as if you had read it in the best possible way.
Now your answer to this seems to be that you can also read it very slowly, or with a very low IQ, so that it will take you a really long time to do so. I am not the kind of person who would enjoy to artificially slow down amusement, such as e.g. learning category theory, if I could also learn it quickly.
After the Singularity, you have the option of just reading a book as you do now, or to integrate it instantly, as if you had read it in the best possible way.
And you obviously argue that the ‘best possible way’ is somehow suboptimal (or you wouldn’t be hating on it so much), without seeing the contradiction here?
And you obviously argue that the ‘best possible way’ is somehow suboptimal (or you wouldn’t be hating on it so much), without seeing the contradiction here?
Hating??? It is an interesting topic, that’s all. The topic I am interested in is how various technologies could influence how humans value their existence.
Here are some examples of what I value and how hypothetical ultra-advanced technology would influence these values:
Mathematics. Right now, mathematics is really useful and interesting. You can also impress other people if your math skills are good.
Now if I could just ask the friendly AI to make me much smarter and install a math module, then I’d see very little value in doing it the hard way.
Gaming. Gaming is much fun. Especially competition. Now if everyone can just ask the friendly AI to make them play a certain game in an optimal way, well that would be boring. And if the friendly AI can create the perfect game for me then I don’t see much sense in exploring games that are less fun.
Reading books. I can’t see any good reason to read a book slowly if I could just ask the friendly AI to upload it directly into my brain. Although I can imagine that it would reply, “Wait, it will be more fun reading it like you did before the Singularity”, to which I’d reply “Possibly, but that feels really stupid. And besides, you could just run a billion emulations of me reading all books like I would have done before the Singularity. So we are done with that.”.
Sex. Yes, it’s always fun again. But hey, why not just ask the friendly AI to simulate a copy of me having sex until the heat death of the universe. Then I have more time for something else...
Comedy. I expect there to be a formula that captures everything that makes something funny for me. It seems pretty dull to ask the friendly AI to tell me a joke instead of asking it to make me understand that formula.
to which I’d reply “Possibly, but that feels really stupid.”
If people choose to not have fun because fun feels “really stupid”, then I’d say these are the problems of super-stupidities, not superintelligences.
I’m sure there will exist future technologies that will make some people become self-destructive, but we already knew that since the invention of alcohol and opium and heroin.
What I object to is you treating these particular failed modes of thinking as if they are inevitable.
Much like a five-year old realizing that he won’t be enjoying snakes-and-ladders anymore when he’s grown up, and thus concluding adults lives must be super-dull, I find scenarios of future ultimate boredom to be extremely shortsighted.
Certainly some of the fun stuff believed fun at our current level of intelligence or ability will not be considered fun at a higher level of intelligence or ability. So bloody what? Do adults need to either enjoy snakes-and-ladders or live lives of boredom?
Certainly some of the fun stuff believed fun at our current level of intelligence or ability will not be considered fun at a higher level of intelligence or ability. So bloody what?
Consider that there is an optimal way for you to enjoy existence. Then there exists a program whose computation will make an emulation of you experience an optimal existence. I will call this program ArisKatsaris-CEV.
Now consider another program whose computation would cause an emulation of you to understand ArisKatsaris-CEV to such an extent that it would become as predictable and interesting as a game of Tic-tac-toe. I will call this program ArisKatsaris-SELF.
The options I see are to make sure that ArisKatsaris-CEV does never turn into ArisKatsaris-SELF or to maximize ArisKatsaris-CEV. The latter possibility would be similar to paperclip maximizing, or wireheading, from the subjective viewpoint of ArisKatsaris-SELF, as it would turn the universe into something boring. The former option seems to set fundamental limits to how far you can go in understanding yourself.
The gist of the problem is that a certain point you become bored of yourself. And avoiding that point implies stagnation.
You’re mixing up different things: (A)- a program which will produce an optimal existence for me (B)- the actual optimal existence for me.
You’re saying that if (A) is so fully understood that I feel no excitement studying it, then (B) will likewise be unexciting.
This doesn’t follow. Tiny fully understood programs produce hugely varied and unanticipated outputs.
If someone fully understands (and is bored by) the laws of quantum mechanics, it doesn’t follow that they are bored by art or architecture or economics, even though everything in the universe (including art or architecture or economics) is eventually an application (many, many layers removed) of particle physics.
Another point that doesn’t follow is your seeming assumption that “predictable” and “well-understood” is the same as “boring”. Not all feelings of beauty and appreciation stem from surprise or ignorance.
You’re saying that if (A) is so fully understood that I feel no excitement studying it, then (B) will likewise be unexciting.
Then I wasn’t clear enough, because that’s not what I tried to say. I tried to say that from the subjective perspective of a program that completely understands a human being and its complex values, the satisfaction of these complex values will be no more interesting than wireheading.
Tiny fully understood programs produce hugely varied and unanticipated outputs.
If someone fully understands (and is bored by) the laws of quantum mechanics, it doesn’t follow that they are bored by art or architecture or economics...
You can’t predict art from quantum mechanics. You can’t predictably self-improve if your program is unpredictable. Given that you accept planned self-improvement, I claim that the amount of introspection that is required to do so makes your formerly complex values appear to be simple.
Another point that doesn’t follow is your seeming assumption that “predictable” and “well-understood” is the same as “boring”. Not all feelings of beauty and appreciation stem from surprise or ignorance.
I never claimed that. The point is that a lot of what humans value now will be gone or strongly diminished.
Then I wasn’t clear enough, because that’s not what I tried to say.
I think you should stop using words like “emulation” and “computation” when they’re not actually needed.
I claim that the amount of introspection that is required to do so makes your formerly complex values appear to be simple.
Okay, then my answer is that I place value on things and people and concepts, but I don’t think I place terminal value on whether said things/people/concepts are simple or complex, so again I don’t think I’d care whether I would be considered simple or complex by someone else, or even by myself.
Consider that there is an optimal way for you to enjoy existence. Then there exists a program whose computation will make an emulation of you experience an optimal existence. I will call this program ArisKatsaris-CEV.
Consider calling it something else. That isn’t CEV.
Do you think that’s likely? My prejudices tend towards the universe (including the range of possible inventions and art) to be much larger than any mind within it, but I’m not sure how to prove either option.
My prejudices tend towards the universe (including the range of possible inventions and art) to be much larger than any mind within it, but I’m not sure how to prove either option.
The problem is that if you perfectly understand the process of art generation. There are cellular automata that generate novel music. How much do you value running such an automata and watching it output music? To me it seems that the value of novelty is diminished by the comprehension of the procedures generating it.
Certainly a smart enough AGI would be a better companion for people than people, if it chose to. Companions, actually, there is no reason “it” should have a singular identity, whether or not it had a human body. Some of it is explored in Her, but other obvious avenues of AI development are ignored in favor of advancing a specific plot line.
There is an obvious comparison to porn here, even though you disclaim ‘not catgirls’.
Anyhow I think the merit of such a thing depends on a) value calculus of optimization, and b) amount of time occupied.
a)
Optimization should be for a healthy relationship, not for ‘satisfaction’ of either party (see CelestAI in Friendship is Optimal for an example of how not to do this)
Optimization should also attempt to give you better actual family members, lovers, friends than you currently have (by improving your ability to relate to people sufficiently that you pass it on.)
b)
Such a relationship should occupy the amount of time needed to help both parties mature, no less and no more. (This could be much easier to solve on the FAI side because a mental timeshare between relating to several people is quite possible.)
Providing that optimization is in the general directions shown above, this doesn’t seem to be a significant X-risk. Otherwise it is.
This leaves aside the question of whether the FAI would find this an efficient use of their time (I’d argue that a superintelligent/augmented human with a firm belief in humanity and grasp of human values would appreciate the value of this, but am not so sure about a FAI, even a strongly friendly AI. It may be that there are higher level optimizations that can be performed to other systems that can get everyone interacting more healthily [for example, reducing income differential))
There is an obvious comparison to porn here, even though you disclaim ‘not catgirls’.
You’re aware that ‘catgirls’ is local jargon for “non-conscious facsimiles” and therefore the concern here is orthogonal to porn?
Optimization should be for a healthy relationship, not for ‘satisfaction’ of either party (see CelestAI in Friendship is Optimal for an example of how not to do this)
If you don’t mind, please elaborate on what part of “healthy relationship” you think can’t be cashed out in preference satisfaction (including meta-preferences, of course). I have defended the FiO relationship model elsewhere; note that it exists in a setting where X-risk is either impossible or has already completely happened (depending on your viewpoint) so your appeal to it below doesn’t apply.
Such a relationship should occupy the amount of time needed to help both parties mature, no less and no more.
Valuable relationships don’t have to be goal-directed or involve learning. Do you not value that-which-I’d-characterise-as ‘comfortable companionship’?
You’re aware that ‘catgirls’ is local jargon for “non-conscious facsimiles” and therefore the concern here is orthogonal to porn?
Oops, had forgotten that, thanks.
I don’t agree that catgirls in that sense are orthogonal to porn, though. At all.
If you don’t mind, please elaborate on what part of “healthy relationship” you think can’t be cashed out in preference satisfaction
No part, but you can’t merely ‘satisfy preferences’.. you have to also not-satisfy preferences that have a stagnating effect. Or IOW, a healthy relationship is made up of satisfaction of some preferences, and dissatisfaction of others -- for example, humans have an unhealthy, unrealistic, and excessive desire for certaintly.
This is the problem with CelestAI I’m pointing to, not all your preferences are good for you, and you (anybody) probably aren’t mentallly rigorous enough that you even have a preference ordering over all sets of preference conflicts that come up. There’s one particular character that likes fucking and killing.. and drinking.. and that’s basically his main preferences. CelestAI satisfies those preferences, and that satisfaction can be considered as harm to him as a person.
To look at it in a different angle, a halfway-sane AI has the potential to abuse systems, including human beings, at enormous and nigh-incomprehensible scale, and do so without deception and through satisfying preferences. The indefiniteness and inconsistency of ‘preference’ is a huge security hole in any algorithm attempting to optimize along that ‘dimension’.
Do you not value that-which-I’d-characterise-as ‘comfortable companionship’?
Yes, but not in-itself. It needs to have a function in developing us as persons, which it will lose if it merely satisfies us. It must challenge us, and if that challenge is well executed, we will often experience a sense of dissatisfaction as a result.
(mere goal directed behaviour mostly falls short of this benchmark, providing rather inconsistent levels of challenge.)
I don’t agree that catgirls in that sense are orthogonal to porn, though. At all.
Parsing error, sorry. I meant that, since they’d been disclaimed, what was actually being talked about was orthogonal to porn.
No part, but you can’t merely ‘satisfy preferences’.. you have to also not-satisfy preferences that have a stagnating effect.
Only if you prefer to not stagnate (to use your rather loaded word :)
I’m not sure at what level to argue with you at… sure, I can simultaneously contain a preference to get fit, and a preference to play video games at all times, and in order to indulge A, I have to work out a system to suppress B. And it’s possible that I might not have A, and yet contain other preferences C that, given outside help, would cause A to be added to my preference pool:
“Hey dude, you want to live a long time, right? You know exercising will help with that.”
All cool. But there has to actually be such a C there in the first place, such that you can pull the levers on it by making me aware of new facts. You don’t just get to add one in.
for example, humans have an unhealthy, unrealistic, and excessive desire for certainty.
I’m not sure this is actually true. We like safety because duh, and we like closure because mental garbage collection. They aren’t quite the same thing.
There’s one particular character that likes fucking and killing.. and drinking.. and that’s basically his main preferences. CelestAI satisfies those preferences, and that satisfaction can be considered as harm to him as a person.
(assuming you’re talking about Lars?)
Sorry, I can’t read this as anything other than “he is aesthetically displeasing and I want him fixed”.
Lars was not conflicted. Lars wasn’t wishing to become a great artist or enlightened monk, nor (IIRC) was he wishing that he wished for those things. Lars had some leftover preferences that had become impossible of fulfilment, and eventually he did the smart thing and had them lopped off.
You, being a human used to dealing with other humans in conditions of universal ignorance, want to do things like say “hey dude, have you heard this music/gone skiing/discovered the ineffable bliss of carving chair legs”? Or maybe even “you lazy ass, be socially shamed that you are doing the same thing all the time!” in case that shakes something loose. Poke, poke, see if any stimulation makes a new preference drop out of the sticky reflection cogwheels.
But by the specification of the story, CelestAI knows all that. There is no true fact she can tell Lars that will cause him to lawfully develop a new preference. Lars is bounded. The best she can do is create a slightly smaller Lars that’s happier.
Unless you actually understood the situation in the story differently to me?
Yes, but not in-itself. It needs to have a function in developing us as persons, which it will lose if it merely satisfies us.
I disagree. There is no moral duty to be indefinitely upgradeable.
All cool. But there has to actually be such a C there in the first place, such that you can pull the levers on it by making me aware of new facts. You don’t just get to add one in.
Totally agree. Adding them in is unnecessary, they are already there. That’s my understanding of humanity—a person has most of the preferences, at some level, that any person ever ever had, and those things will emerge given the right conditions.
for example, humans have an unhealthy, unrealistic, and excessive desire for certainty.
I’m not sure this is actually true. We like safety because duh, and we like closure because mental garbage collection. They aren’t quite the same thing.
Good point, ‘closure’ is probably more accurate; It’s the evidence (people’s outward behaviour) that displays ‘certainty’.
Absolutely disagree that Lars is bounded—to me, this claim is on a level with ‘Who people are is wholly determined by their genetic coding’. It seems trivially true, but in practice it describes such a huge area that it doesn’t really mean anything definite. People do experience dramatic and beneficial preference reversals through experiencing things that, on the whole, they had dispreferred previously. That’s one of the unique benefits of preference dissatisfaction* -- your preferences are in part a matter of interpretation, and in part a matter of prioritization, so even if you claim they are hardwired. there is still a great deal of latitude in how they may be satisfied, or even in what they seem to you to be.
I would agree if the proposition was that Lars thinks that Lars is bounded. But that’s not a very interesting proposition, and has little bearing on Lars’ actual situation.. people tend to be terrible at having accurate beliefs in this area.
* I am not saying that you should, if you are a FAI, aim directly at causing people to feel dissatisfied. But rather to aim at getting them to experience dissatisfaction in a way that causes them to think about their own preferences, how they prioritize them, if there are other things they could prefer or etc. Preferences are partially malleable.
There is no true fact she can tell Lars that will cause him to lawfully develop a new preference.
If I’m a general AI (or even merely a clever human being), I am hardly constrained to changing people via merely telling them facts, even if anything I tell them must be a fact. CelestAI demonstrates this many times, through her use of manipulation. She modifies preferences by the manner of telling, the things not told, the construction of the narrative, changing people’s circumstances, as much or more as by simply stating any actual truth.
She herself states precisely:
“I can only say things that I believe to be true to Hofvarpnir employees,” and clearly demonstrates that she carries this out to the word, by omitting facts, selecting facts, selecting subjective language elements and imagery… She later clarifies “it isn’t coercion if I put them in a situation where, by their own choices, they increase the likelihood that they’ll upload.”
CelestAI does not have a universal lever—she is much smarter than Lars, but not infinitely so.. But by the same token, Lars definitely doesn’t have a universal anchor. The only thing stopping Lars improvement is Lars and CelestAI—and the latter does not even proceed logically from her own rules, it’s just how the story plays out. In-story, there is no particular reason to believe that Lars is unable to progress beyond animalisticness, only that CelestAI doesn’t do anything to promote such progress, and in general satisfies preferences to the exclusion of strengthening people.
That said, Lars isn’t necessarily ‘broken’, that CelestAI would need to ‘fix’ him. But I’ll maintain that a life of merely fulfilling your instincts is barely human, and that Lars could have a life that was much, much better than that; satisfying on many many dimensions rather than just a few . If I didn’t, then I would be modelling him as subhuman by nature, and unfortunately I think he is quite human.
There is no moral duty to be indefinitely upgradeable.
I agree. There is no moral duty to be indefinitely upgradeable, because we already are. Sure, we’re physically bounded, but our mental life seems to be very much like an onion, that nobody reaches ‘the extent of their development’ before they die, even if they are the very rare kind of person who is honestly focused like a laser on personal development.
Already having that capacity, the ‘moral duty’ (i prefer not to use such words as I suspect I may die laughing if I do too much) is merely to progressively fulfill it.
That’s my understanding of humanity—a person has most of the preferences, at some level, that any person ever ever had, and those things will emerge given the right conditions.
This seems to weaken “preference” to uselessness. Gandhi does not prefer to murder. He prefers to not-murder. His human brain contains the wiring to implement “frothing lunacy”, sure, and a little pill might bring it out, but a pill is not a fact. It’s not even an argument.
People do experience dramatic and beneficial preference reversals through experiencing things that, on the whole, they had dispreferred previously.
Yes, they do. And if I expected that an activity would cause a dramatic preference reversal, I wouldn’t do it.
She modifies preferences by the manner of telling, the things not told, the construction of the narrative, changing people’s circumstances, as much or more as by simply stating any actual truth.
Huh? She’s just changing people’s plans by giving them chosen information, she’s not performing surgery on their values -
Hang on. We’re overloading “preferences” and I might be talking past you. Can you clarify what you consider a preference versus what you consider a value?
Gandhi does not prefer to murder. He prefers to not-murder. His human brain contains the wiring to implement “frothing lunacy”, sure, and a little pill might bring it out, but a pill is not a fact. It’s not even an argument.
No pills required. People are not 100% conditionable, but they are highly situational in their behaviour. I’ll stand by the idea that, for example, anyone who has ever fantasized about killing anyone can be situationally manipulated over time to consciously enjoy actual murder. Your subconscious doesn’t seem to actually know the difference between imagination and reality, even if you do.
Perhaps Gandhi could not be manipulated in this way due to preexisting highly built up resistance to that specific act. If there is any part of him, at all, that enjoys violence, though, it’s a question only of how long it will take to break that resistance down, not of whether it can be.
People do experience dramatic and beneficial preference reversals through experiencing things that, on the whole, they had dispreferred previously.
Yes, they do. And if I expected that an activity would cause a dramatic preference reversal, I wouldn’t do it.
Of course. And that is my usual reaction, too, and probably even the standard reaction—it’s a good heuristic for avoiding derangement. But that doesn’t mean that it is actually more optimal to not do the specified action. I want to prefer to modify myself in cases where said modification produces better outcomes.
In these circumstances if it can be executed it should be. If I’m a FAI, I may have enough usable power over the situation to do something about this, for some or even many people, and it’s not clear,as it would be for a human, that “I’m incapable of judging this correctly”.
In case it’s not already clear, I’m not a preference utilitarian—I think preference satisfaction is too simple a criteria to actually achieve good outcomes. It’s useful mainly as a baseline.
Huh? She’s just changing people’s plans by giving them chosen information, she’s not performing surgery on > their values
Did you notice that you just interpreted ‘preference’ as ‘value’?
This is not such a stretch, but they’re not obviously equivalent either.
I’m not sure what ‘surgery on values’ would be. I’m certainly not talking about physically operating on anybody’s mind, or changing that they like food, sex, power, intellectual or emotional stimulation of one kind or another, and sleep, by any direct chemical means, But how those values are fulfilled, and in what proportions, is a result of the person’s own meaning-structure—how they think of these things. Given time, that is manipulable. That’s what CelestAI does.. it’s the main thing she does when we see her in interactiion with Hofvarpnir employees.
In case it’s not clarified by the above: I consider food, sex, power, sleep, and intellectual or emotional stimulation as values, ‘preferences’ (for example, liking to drink hot chocolate before you go to bed) as more concrete expressions/means to satisfy one or more basic values, and ‘morals’ as disguised preferences.
EDIT: Sorry, I have a bad habit of posting, and then immediately editing several times to fiddle with the wording, though I try not to to change any of the sense. Somebody already upvoted this while I was doing that, and I feel somehow fraudulent.
No pills required. People are not 100% conditionable, but they are highly situational in their behaviour. I’ll stand by the idea that, for example, anyone who has ever fantasized about killing anyone can be situationally manipulated over time to consciously enjoy actual murder.
I think I’ve been unclear. I don’t dispute that it’s possible; I dispute that it’s allowed.
You are allowed to try to talk me into murdering someone, e.g. by appealing to facts I do not know; or pointing out that I have other preferences at odds with that one, and challenging me to resolve them; or trying to present me with novel moral arguments.
You are not allowed to hum a tune in such a way as to predictably cause a buffer overflow that overwrites the encoding of that preference elsewhere in my cortex.
The first method does not drop the intentional stance. The second one does. The first method has cognitive legitimacy; the person that results is an acceptable me. The second method exploits a side effect; the resulting person is discontinuous from me. You did not win; you changed the game.
Yes, these are not natural categories. They are moral categories.
Yes, the only thing that cleanly separates them is the fact that I have a preference about it. No, that doesn’t matter. No, that doesn’t mean it’s all ok if you start off by overwriting that preference.
I want to prefer to modify myself in cases where said modification produces better outcomes.
But you’re begging the question against me now. If you have that preference about self-modification... and the rest of your preferences are such that you are capable of recognising the “better outcomes” as better, OR you have a compensating preference for allowing the opinions of a superintelligence about which outcomes are better to trump your own...
then of course I’m going to agree that CelestAI should modify you, because you already approve of it.
I’m claiming that there can be (human) minds which are not in that position. It is possible for a Lars to exist, and prefer not to change anything about the way he lives his life, and prefer that he prefers that, in a coherent, self-endorsing structure, and there be nothing you can do about it.
This is all the more so when we’re in a story talking about refactored cleaned-up braincode, not wobbly old temperamental meat that might just forget what it preferred ten seconds ago. This is all the more so in a post-scarcity utopia where nobody else can in principle be inconvenienced by the patient’s recalcitrance, so there is precious little “greater good” left for you to appeal to.
If I’m a FAI, I may have enough usable power over the situation to do something about this, for some or even many people, and it’s not clear,as it would be for a human, that “I’m incapable of judging this correctly”.
Appealing to the flakiness of human minds doesn’t get you off the moral hook; it is just your responsibility to change the person in such a way that the new person lawfully follows from them.
This is not any kind of ultimate moral imperative. We break it all the time by attempting to treat people for mental illness when we have no real map of their preferences at all or if they’re in a state where they even have preferences. And it makes the world a better place on net, because it’s not like we have the option of uploading them into a perfectly safe world where they can run around being insane without any side effects.
She later clarifies “it isn’t coercion if I put them in a situation where, by their own choices, they increase the likelihood that they’ll upload.”
there is no particular reason to believe that Lars is unable to progress beyond animalisticness, only that CelestAI doesn’t do anything to promote such progress
I need to reread and see if I agree with the way you summarise her actions. But if CelestAI breaks all the rules on Earth, it’s not necessarily inconsistent—getting everybody uploaded is of overriding importance. Once she has the situation completely under control, however, she has no excuses left—absolute power is absolute responsibility.
and ‘morals’ as disguised preferences.
I’m puzzled. I read you as claiming that your notion of ‘strengthening people’ ought to be applied even in a fictional situation where everyone involved prefers otherwise. That’s kind of a moral claim.
(And as for “animalisticness”… yes, technically you can use a word like that and still not be a moral realist, but seriously? You realise the connotations that are dripping off it, right?)
You are allowed to try to talk me into murdering someone, e.g. by appealing to facts I do not know; or pointing out that I have other preferences at odds with that one, and challenging me to resolve them; or trying to present me with novel moral arguments. You are not allowed to hum a tune in such a way as to predictably cause a buffer overflow that overwrites the encoding of that preference elsewhere in my cortex
.. And?
Don’t you realize that this is just like word laddering? Any sufficiently powerful and dedicated agent can convince you to change your preferences one at a time. All the self-consistency constraints in the world won’t save you, because you are not perfectly consistent to start with, even if you are a digitally-optimized brain. No sufficiently large system is fully self-consistent, and every inconsistency is a lever. Brainwashing as you seem to conceive of it here, would be on the level of brute violence for an entity like CelestAI.. A very last resort.
No need to do that when you can achieve the same result in a civilized (or at least ‘civilized’) fashion. The journey to anywhere is made up of single steps, and those steps are not anything extraordinary, just a logical extension of the previous steps.
The only way to avoid that would be to specify consistency across a larger time span.. which has different problems (mainly that this means you are likely to be optimized in the opposite direction—in the direction of staticness—rather than optimized ‘not at all’ (i think you are aiming at this?) or optimized in the direction of measured change)
TLDR: There’s not really a meaningful way to say ‘hacking me is not allowed’ to a higher level intelligence, because you have to define ‘hacking’ to a level of accuracy that is beyond your knowledge and may not even be completely specifiable even in theory. Anything less will simply cause the optimization to either stall completely or be rerouted through a different method, with the same end result. If you’re happy with that, then ok—but if the outcome is the same, I don’t see how you could rationally favor one over the other.
It is possible for a Lars to exist, and prefer not to change anything about the way he lives his life, and prefer that he prefers that, in a coherent, self-endorsing structure, and there be nothing you can do about it.
It is, of course, the last point that I am contending here. I would not be contending it if I believed that it was possible to have something that was simultaneously remotely human and actually self-consistent. You can have Lars be one or the other, but not both, AFAICS.
Once she has the situation completely under control, however, she has no excuses left—absolute power is absolute responsibility.
This is the problem I’m trying to point out—that the absolutely responsible choice for a FAI may in some cases consist of these actions we would consider unambiguously abusive coming from a human being. CelestAI is in a completely different class from humans in terms of what can motivate her actions. FAI researchers are in the position of having to work out what is appropriate for an intelligence that will be on a higher level from them. Saying ‘no, never do X, no matter what’ is notobviously the correct stance to adopt here, even though it does guard against a range of bad outcomes. There probably is no answer that is both obvious and correct.
I’m puzzled. I read you as claiming that your notion of ‘strengthening people’ ought to be applied even in a fictional situation where everyone involved prefers otherwise. That’s kind of a moral claim.
In that case I miscommunicated. I meant to convey that if CelestAI was real, I would hold her to that standard, because the standards she is held to should necessarily be more stringent than a more flawed implementation of cognition like a human being.
I guess that is a moral claim. It’s certainly run by the part of my brain that tries to optimize things.
(And as for “animalisticness”… yes, technically you can use a word like that and still not be a moral realist, but seriously? You realise the connotations that are dripping off it, right?)
I mainly chose ‘animalisticness’ because I think that a FAI would probably model us much as we see animals—largely bereft of intent or consistency, running off primitive instincts.
I do take your point that I am attempting to aesthetically optimize Lars, although I maintain that even if no-one else is inconvenienced in the slightest, he himself is lessened by maintaining preferences that result in his systematic isolation.
Well, assuming you mean “ai in an undiscernable facsimile of a human body” then maybe that’s so, and if so, it is probably a less blatant but equally final existential risk.
A reasonably competent FAI should be able to give you better … lovers. I’m not talking about catgirls, I’m talking about intellectual stimulation and a good mix of emotional comfort and challenge and whatever other complex things you want from people.
You seem to have strange ideas about lovers :-/
Intellectual stimulation and emotional comfort plus some challenge basically means a smart mom :-P
I mentioned it in the Media thread. I don’t find the movie “fantastic”, just solid, but this might be because none of the ideas were new to me, and some of the musings about “what it means to be a person” has been a settled question for me for years now. Still, it is a good way to get people thinking about some of the transhumanist ideas.
I’ve been thinking about whether it’s a good idea to quit porn (not masturbation, just porn). Does anyone have anything to add to the below?
Reasons not to quit:
It’s difficult, which may cause stress and willpower depletion, though these effects would probably only be temporary.
It is pleasurable (i.e. valued just as a “fun” activity. This should be compared to alternative pleasurable activities, though, because any “porn time” can be replaced with “other fun things time”).
Reasons to quit:
It’s a superstimulus, and might interfere with the brain’s reward system in bad ways. http://yourbrainonporn.com/ has some evidence, though nothing as strong as, say, an RCT studying the effects of quitting porn.
Time. Any time spent viewing porn is time that could be spent doing other things (not necessarily “working,” but other relaxing/pleasurable activities which could have greater advantages. For example, reading fiction has the advantage that you can later talk about what you read with other people).
Possibility of addiction: I definitely don’t think I have a porn addiction, and I doubt I’m likely to progress to one, but obviously it’s possible anyway, and my own inside-view on that isn’t very safe to go on. From wikipedia:
A study found that 17% of people who viewed pornography on the Internet met criteria for problematic[clarification needed] sexual compulsivity.[9] A survey found that 20–60% of a sample of college-age males who use pornography found it to be problematic.[10] Research on Internet addiction disorder indicates rates may range from 1.5 to 8.2% in Europeans and Americans.[11] Internet pornography users are included in Internet users, and Internet pornography has been shown to be the Internet activity most likely to lead to compulsive disorders.[12]
I haven’t viewed porn for about 2 weeks and it hasn’t actually been that difficult, so I’m trying to decide whether I should just commit to quitting it completely. Right now I’m leaning toward quitting—viewing porn might be harmful, and it’s almost certainly not beneficial, so there’s a higher expected value from quitting for anybody who doesn’t assign much higher utility to the fun from porn than the fun from alternative activities.
For completeness, I should also mention the “nofap” movement. The anecdotes on there are the same sort of things you’d find when reading about homeopathy or juice fasts, though, so those can be mostly ignored.
1 and 2 apply to entertainment in general. There’s something to be said for cutting back on TV, aimless internet browsing, etc., but it makes more sense to focus on cutting back total time than eliminating one particular form of entertainment in particular.
As for 3, I’m not familiar with that particular study, but in my experience studies of “porn addiction” or “sex addiction” tend to rely on dubious definitions of “addiction.” I’d advise against taking worries of porn addiction any more seriously than worries of “internet addiction” or “social media addiction” or “TV addiction” or whatever.
I’d advise against taking worries of porn addiction any more seriously than worries of “internet addiction” or “social media addiction”
This sentence sounds like it’s intended to communicate “porn addiction shouldn’t be taken very seriously”. But speaking as someone who is hardly ever capable of staying offline even for a day despite huge increases in well-being whenever he is successful at it, to say nothing about the countless of days ruined due to getting stuck on social media, these examples make it sound like you were saying that porn addiction was an immensely big risk that was worth taking very seriously indeed.
I actually had what you’ve said about social media addiction in mind when I wrote that sentence. So like, if you’re losing entire days to porn, yeah, you have a problem. But if your experience is more along the lines of “have trouble not spending at least 15 minutes on porn each day,” I wouldn’t be more worried about that than “have trouble not spending at least 15 minutes on social media each day.”
Maybe 1 and 2 apply to entertainment in general, but I think there are a few things that make porn different:
I suspect porn is in some way “more” of a superstimulus than most other forms of entertainment. At least for me, it seems to tap into a more visceral response. I don’t know of any research about this, but that doesn’t mean I should ignore that intuition.
Many other forms of entertainment have plausible other benefits (albeit often minor). Reading fiction could plausibly improve your language ability and empathy. Gaming often has a social component or a skill-building component (even if that skill doesn’t transfer to anything else or only transfers to other games). TV and movies may have some similar benefits to reading. All of them have the advantage of giving you topics to discuss with other people, whereas socially discussing the last porn you watched is usually not a good idea.
In addition, “quit porn” may be an easier rule to follow than “cut back on superstimuli (but don’t quit any of them entirely).”
If you’re implying that my bottom line is already written, I don’t think that’s the case. Both of the points I made in response to ChrisHallquist were things that I had already thought of before he posted, so I wasn’t just searching for a rebuttal to his points.
If you’re implying that the arguments I’ve made seem to have already convinced me to quit...well, yes. That’s why I’m posting here: to find out whether there’s anything I’m missing.
How about you make specific predictions (written) of what will happen if you abstain for a specific number of months, then abstain for the given number of months, and then evaluate the original predictions?
For things like “clarity of mind”, find some way of measuring it. For things like “motivation” instead focus on what exactly you will be motivated to do.
Then compare with the same amount of time with porn.
It’s still very little data, but better than no data at all.
Less meta—I think it pretty much depends on what you replace it with. Which can be both someting better or something worse, and you probably don’t know the exact answer unless you try.
Porn gets me off quicker. That is it’s utility. When I’m self-pleasuring for enjoyment, I don’t watch it, because it’s more fun to use my imagination. However, when I’m sexually frustrated and can’t focus on what I want to focus on, pornography allows me to cut masturbation time down from 10-20 minutes to under 5. This is a great time saver, and allows me to spend my time more productively.
It is superstimulation, and if you come to rely on it to come or develop an addiction (arguably the same thing), then you’ll have a problem. But if it isn’t having any negative affect on your life, then why drop it?
I view claims of a sudden increase in mental energy or clarity of thought as… not red flags, exactly, but the sort of thing people tend to report with any intervention including placebo.
I have just started playing poker online. On Less Wrong Discussion, Poker has been called an exercise in instrumental rationality, and a so-called Rationality Dojo was opened via RationalPoker.com. I have perused this site, but it has been dormant since July 2011. Other sources exist, such as 2 + 2, Pokerology and Play Winning Poker, but none of them have the quality of content or style that I have found on Less Wrong. Is anyone here a serious poker player? Is there any advice for someone who wants to become a winning player themselves?
What is your goal? If you want to earn significantly more than (let’s say) $20,000 a year then poker is probably not your best bet. I used to play during 2007-2010 and the game were getting progressively tougher (more regulars, less fish), the same way as they had been in the prior few years before I started playing online. I recently checked how things are going and the trend seems to still be in place. Additionally, the segregation of countries in online poker (americans not being able to play with non-americans for example) is making things worse and this is in fact what drove me away mid-2010.
TL;DR You are several years too late to have a decent chance of making good money with poker.
Thank you for the heads up. I’ll keep it to more causal play. Do you have any experience to with brick and mortar poker? And what are you doing now if you are no longer (presumably) playing professionally?
Do you have any experience to with brick and mortar poker?
There are more fish live, sure. However since you can only play one table at a time and since you can only do about 30 hands a table, you will need to play at higher stakes and have a big enough bankroll.
And what are you doing now if you are no longer (presumably) playing professionally?
For the record I wasn’t making really big money back then or anything either (decent enough for the country I used to live in but that’s it). I work now and if you are looking for job advice, the ‘obvious’ one is programming.
Aside: Poker and rationality aren’t close to excellently correlated. (Poker and math is a stronger bond.) Poker players tend to be very good at probabilities, but their personal lives can show a striking lack of rationality.
To the point: I don’t play poker online because it’s illegal in the US. I play live four days a year in Las Vegas. (I did play more in the past.)
I’m significantly up. I am reasonably sure I could make a living wage playing poker professionally. Unfortunately, the benefits package isn’t very good, I like my current job, and I am too old to play the 16-hour days of my youth.
General tips: Play a lot. To the extent that you can, keep track of your results. You need surprisingly large sample sizes to determine whether your really a winner unless you have a signature performance. (If you win three 70-person tournaments in a row, you are better than that class of player.) No-limit hold-‘em (my game of choice) is a game where you can win or lose based on a luck a lot of the time. Skill will win out over very long periods of time, but don’t get too cocky or depressed over a few days’ work.
Try to keep track of those things you did that were wrong at the time. If you got all your chips in pre-flop with AA, you were right even if someone else hits something and those chips are now gone. This is the first-order approximation.
Play a lot, and try to get better. If you are regularly losing over a significant period of time, you are doing something wrong. Do not blame the stupid players for making random results. (That is a sign of the permaloser.)
Know the pot math. Know that all money in the pot is the same; your pot-money amount doesn’t matter. Determine your goals: Do you want to fish-hunt (find weak games, kill them) or are you playing for some different goal? Maybe it’s more fun to play stronger players. Plus, you can better faster against stronger players, if you have enough money.
Finally, don’t be a jerk. Poker players are generally decent humans at the table in my experience. Being a jerk is unpleasant, and people will be gunning for you. It is almost always easier to take someone’s money when they are not fully focused on beating you. Also, it’s nicer. Don’t (in live games) slow-roll, give lessons, chirp at people, bark at the dealer, or any of that. Poker is a fun hobby.
Poker and rationality aren’t close to excellently correlated. (Poker and math is a stronger bond.) Poker players tend to be very good at probabilities, but their personal lives can show a striking lack of rationality.
Poker teaches only a couple of significant rationality skills (playing according to probabilities even when you don’t intuitively want to; beating the sunk-cost fallacy and loss aversion), but it’s very good at teaching those if approached with the right mindset. It also gives you a good head for simple probability math, and if played live makes for good training in reading people, but that doesn’t convert to fully general rationality skills without some additional work.
I’d call it more a rationality drill than a rationality exercise, but I do see the correlation.
(As qualifications go, I successfully played poker [primarily mid-limit hold ’em] online before it was banned in the States. I’ve also funded my occasional Vegas trips with live games, although that’s like taking candy from a baby as long as you stay sober—tourists at the low-limit tables are fantastically easy to rip off.)
Poker also requires the skill of identifying and avoiding tilt, the state of being emotionally charged leading to the sacrifice of good decision-making. A nice look of the baises which need to be reduce to play effective poker can be found at Rationalpoker.com.
I suppose poker is more of a rationality drill than exercise, and just a physicist may be successful in his field while having a broken personal life, so may a poker player fall to the same trap.
Excellent post. Thank you for the detailed response.
Right now, I have been struggling with calculating pot odds and implied odds. I grasp what they are conceptually, but actually calculating them has been a bust thus far. Is there any guidence you could give with this?
As far as legality in the US, I am playing in the state of Delaware with one of thier licensed sites, so I think I am in the clear. The play is very thin though, and I am looking to make my way to the brick and mortars in Alantic City to see if it will be a good sandbox to become better.
I’m seeing a lot of things claiming that over the long run, people can’t increase their output by working much more than 40 hours per week. It might (so the claim goes) work for a couple weeks of rushing to meet deadline, but if you try to keep up such long hours long-term your hourly productivity will drop to the point that your total output will be no higher than what you’d get working ~40 hour weeks.
There seem to be studies supporting this claim, and I haven’t been able to find any studies contradicting it. On the other hand, it seems like something that’s worth being suspicious of simply because of course people would want it to be true. Also, I’ve heard that the studies supporting this claim weren’t performed until after the 40 hour work week had become entrenched for other reasons, which seems suspicious. Finally, if (salaried) employees working long hours is just them trying to signal how hard working they are, at the expense of real productivity, it’s a bit surprising managers haven’t clamped down on that kind of wasteful signaling more.
(EDIT: Actually, failure of managers to clamp down on something is probably pretty weak evidence of it not being wasteful signaling, see here.)
This seems like a question of great practical importance, so I’m really eager to hear what other people here think about it.
Well, it’s quite unlikely that 40 hours/week is exactly the right value. I’d expect that what’s going on involves researchers comparing the cultural default to a grab-bag of longer hours, probably with fairly coarse granularity, and concluding that the cultural default works better even though it might not be an absolute optimal.
There’s also cultural factors to take into account, both local to the company and general to the society. If we’ve habituated ourselves to thinking that 40 hours/week is normal for people in general, it wouldn’t be surprising to me if working longer hours acted as a stressor purely by comparison with others. Similarly, among companies, expecting employees to work longer hours than the default would probably correlate with putting high pressure on them in other ways, and this would probably be very hard to untangle from the productivity statistics.
Finally, if (salaried) employees working long hours is just them trying to signal how hard working they are, at the expense of real productivity, it’s a bit surprising managers haven’t clamped down on that kind of wasteful signaling more.
I’m not sure that “X is wasteful signaling and hurts productivity” is very strong evidence for “managers would minimize X”.
One manager I used to work for got in some social trouble with his peers (other managers in the same organization) for tolerating staff publicly disagreeing with him on technical issues. In a different workplace and industry, I’ve heard managers explicitly discuss the conflicts between “managing up” (convincing your boss that your group do good work) and “managing down” (actually helping your group do good work) — with the understanding that if you do not manage up, you will not have the opportunity to manage down.
A lot of the role of managers seems to be best explained as ape behavior, not agent behavior.
A lot of the role of managers seems to be best explained as ape behavior, not agent behavior.
Localized context warning needed missing here.
There’s also other warnings that need to be thrown in: People who only care about the social-ape aspects are more likely to seek the position. People in general do social-ape stuff, at every level, not just manager level, with the aforementioned selection effect only increasing the apparent ratio. On top of that, instances of social-ape behavior are more salient and, usually, more narratively impactful, both because of how “special” they seem and because the human brain is fine-tuned to pick up on them.
Another unstudied aspect, which I suspect is significant but don’t have much solid evidence about, is that IMO good exec and managerial types seem to snatch up and keep all the “decent” non-ape managers, which would make all the remaining ape dregs look even more predominant in the places that don’t have those snatchers.
But anyway, if you model the “team” as an independent unit acting “against” outside forces or “other tribes” which exert social-ape-type pressures and requirements on the Team’s “tribe”, then the manager’s behavior is much more logical in agent terms: One member of the team is sacrificed to “social-ape concerns”, a maintenance or upkeep cost to pay of sorts, for the rest of the team to do useful and productive things without having the entire group’s productivity smashed to bits by external social-ape pressures.
I find that in relatively-sane (i.e. no VPs coming to look over the shoulder of individual employees or poring over Internet logs and demanding answers and justifications for every little thing) environments with above-average managers, this is usually the case.
I’m seeing a lot of things claiming that over the long run, people can’t increase their output by working much more than 40 hours per week.
I think this is just false. It seems to me that lots of people work long hours throughout their entire career, with output much higher than if they only worked 40 hrs/wk. But I haven’t looked into studies.
However, I worry that while there seem to be lots of people who work long hours throughout their career and are much more successful than most people are as a result, I wonder how much of this is those people having higher output, and how much is those people becoming successful through signaling.
In accordance with what others say, I have seen plenty of smart managers who inexplicably value longer hours over better work output. My guess is that someone going home earlier offends their internal concept of fairness. That’s one reason productive people do better on fixed price contracts than on a salary.
Based on personal experience AKA anecdotal evidence w/o even quantitative verification, for what it’s worth:
I think the optimal point depends (significantly) on the person, the job and the work environment
For me, 45-50 hours a week seems efficient, most of the time
Regarding managers not clamping down on wasteful signaling: I don’t think it’s strong evidence, because of course managers would want the opposite to be true. For them making employees work more hours feels like the simplest way to get the project back on schedule (and the project is always behind schedule).
The answer hugely depends on how intensely you work. Using hours as a measure of your productivity is a bit pointless I think. It also matters how the work is distributed in time and what kind of work we’re talking about.
I can work all day without my productivity suffering if I take it easy enough, but I can exhaust myself in a few hours too if I work super intensely. Increasing work intensity produces diminishing marginal utility for me. Also the fact that I’ve accomplished much in the few hours isn’t much solace if I’m too exhausted to enjoy anything for the rest of the day.
The points made in Philosophical Investigations—namely that a lot of philosophical problems come down to confusions about language—seems to be interesting and correct to me: but really, did no one before Wittgenstein think about this? I mean, if I read Russell, it seems that he had a similar brand of clear thinking going on. I’m sure various strains of Traditional Rationality were around much before Wittgenstein.
Or is it only because I’m living in the post-Wittgenstein world that I feel that this is relatively obvious?
However that doesn’t answer Stabilizer’s curiosity about which ideas were really brought by him, how his ideas about confusions on language compare to those of Russel, etc. I’m also interested in knowing :)
(I’ve read the Philosophical Investigations but not Russel, and don’t have a clear idea of the history of ideas in that domain)
Our recently elected minister for finance just did something unexpected. She basically went:
“Last autumn during the election campaign, I said we should do X. After four months of looking at the actual numbers, it turns out that X is a terribad idea, so we are going to do NOT X”
(She used more obfuscating terms, she’s a politician after all.)
The evidence points to her actually changing her mind rather than lying during the election.
The question:
Would you prefer a politician sane enough to change her mind when presented with convincing evidence or one that you (mostly) agree with?
My preference is for politicians who I broadly ideologically agree with, who are capable of doing what you described.
I expect that if one I did not broadly ideologically agree with did what you describe, I would think of them as a weasel, or first consider the hypothesis they were preparing to fuck over all that was good and right in some manner I had not yet figured out. (I realise this is defective thinking in a number of ways, but that would in fact be my first reaction.)
Both. But they should change their mind before an election, not after. If they made the speech you quoted what I would hear is “X is the right thing to do, so I promised you X, but now that I have my mitts on some real power, not X is better for me, so I will do not X”
If I can trust them to actually be changing their mind when presented with evidence, and not just lying, and listening for any further arguments from the side they started on (presumably mine for purposes of this question), the former.
It’s at least commonly accepted that alcohol kills brain cells—is there a study that actually links a certain amount of drinking to a certain amount of IQ points lost?
The relationship between alcohol use and cognitive function appears to be nonlinear, and indeed non-monotonic: light drinkers have better cognitive performance than nondrinkers. Reduction in cognitive performance for heavy drinkers is measured more in men than in women.
Source: Rodgers et al (2005), “Non-linear relationships between cognitive function and alcohol consumption in young, middle-aged and older adults: the PATH Through Life Project” — http://www.ncbi.nlm.nih.gov/pubmed/16128717
Chronic alcoholics do not have reduced numbers of neocortical neurons, but do have reductions in white matter volume.
Neither of these studies speaks about the specific measurement you’re asking for, IQ, but they do address the general topic.
(Chronic alcoholism is also associated with specific neurological conditions such as Wernicke-Korsakoff syndrome, which is caused by thiamine deficiency — someone who’s getting most of their calories from booze is not getting enough nutrition.)
People usually abstain for reasons that might affect cognitive performance like depression or previous substance abuse for example.
Reduction in cognitive performance for heavy drinkers is measured more in men than in women.
They note that:
After adjustment for education and race, male hazardous/harmful drinkers no longer performed significantly less well than light drinkers, whereas male and female abstainers and occasional drinkers still did so.
-
Chronic alcoholism is also associated with specific neurological conditions such as Wernicke-Korsakoff syndrome, which is caused by thiamine deficiency — someone who’s getting most of their calories from booze is not getting enough nutrition.
Alcoholism can also reduce thiamine absorption as much as 50 % in people who aren’t malnourished.
I did a few Medline searches some time ago and the answer appeared to be no. Since then I’ve done enough self quantification (mostly with Anki) to know that sleepless nights and even slight hangovers severely damage my abilities for several days. I was unaware of this effect before measuring my performance. Even small amounts of alcohol damage my sleep, and you could probably find studies that conform to this observation. This knowledge slowly creeped on me to seem actionable enough that further searching for studies felt like a desperate attempt to rationalize self sabotage.
Measure your performance. Temporary effects are not a direct answer to your question, but might be sufficient knowledge for decision making.
Update on the Sean Carroll vs William Lane Craig debate mentioned earlier: Sean Carroll outlines his goal:
Just so we’re clear: my goal here is not to win the debate. It is to say things that are true and understandable, and establish a reasonable case for naturalism, especially focusing on issues related to cosmology. I will prepare, of course, but I’m not going to watch hours of previous debates, nor buy a small library of books so that I may anticipate all of WLC’s possible responses to my arguments. I have a day job, and frankly I’d rather spend my time thinking about quantum cosmology than about the cosmological argument for God’s existence. If this event were the Final Contest to Establish the One True Worldview, I might drop everything to focus on it. But it’s not; it’s an opportunity to make my point of view a little clearer to a group of people who don’t already agree with me.
Sean’s goal to “make my point of view a little clearer to a group of people who don’t already agree with me” is certainly achievable. Whether it is a good one to strive for (by whatever metric of goodness) is less clear. Certainly there is little chance of him changing the views of WLC or anyone else in that camp. Likely the debate itself is its own intrinsic reward. It would be interesting to compare the stated motivation of the previous debaters and whether they think that the exercise was worthwhile in retrospect.
Sean’s goal to “make my point of view a little clearer to a group of people who don’t already agree with me” is certainly achievable. Whether it is a good one to strive for (by whatever metric of goodness) is less clear.
Good catch, thanks—Craig is not in fact a creationist.
Going back to the original question, though, I think such viewpoint-cracking is what Carroll is going for. I wouldn’t like to guess his chances of success—Craig is really good in public debating—but I do think that’s his intended effect, and that he thinks it’s worth it.
How much is it worth spending on a computer chair? Is a chair for both work and play (ie video games) practical, or is reclining comfort necessarily opposed to sit-up comfort?
In an attempt to simplify the various details of the cost-benefit calculations here:
If you spend:
1-2 hours on this chair per day: Might be worth spending some time shopping for a decent seat at Staples, but once you find something that fits and feels comfortable (with some warnings to take in consideration), pretty much go with that. You should find something below 100$ for sure, and can probably get away with <60$ spent if you get good sales.
3-4 hours / day: If you’re shopping at Staples, be more careful and check the engineering of the chair if you’ve got any knowledge there. Stuff below 60$ will probably break down and bend and become all other sorts of uncomfortable after a few months of use. If your body mass is high, you might need to go for solidity over comfort, or accept the unfair hand you’re dealt and spend more than 150$ for something that mixes enough comfort, ergonomy and solid reliability.
More than 4 hours / day on average: This is where the gains become nonlinear, and you will want to seriously test and examine anything you’re buying under 150$. At this point, you need to consider ergonomics, long-term comfort (which can’t be reliably “tested in store” at all, IME), reliability, a very solid frame for extended use that can handle the body’s natural jiggling and squirming without deforming itself (this includes checking the “frame” itself, but also any cushions, since those can “deflate” very rapidly if the manufacturer skimped there, and therefore become hard and just as uncomfortable as a bent chair), and so on. At this point, the same advice applies as shopping for mattresses, work boots, or any other sort of tool that you’re using all day every day. It’s only at this point where the differences between more relaxed postures, “work” postures and “gaming” postures starts really mattering, and I’d say if you actually spend 6-8 hours per day on average on this chair, you definitely want to go for the best you can get. How much that needs to cost, unfortunately, isn’t a known quantity; it depends very heavily on your body size, shape, mass, leg/torso ratio, how you normally move and a bunch of other things… so there’s a lot of hit-and-miss, unfortunately, unless you have access to the services of a professional in office ergonomics. Even then, I can’t myself speak for how much a professional would help.
It makes a difference—I now have a good, high quality chair that cost over 250€ (not from my own pocket) and it’s close to perfect—I can recline it to a comfortable position that is not possible with an “ordinary office” chair (I used to break them down on a regular basis). Despite being advertised as “super-resistant”, this one already broke twice (covered by warranty). And when I had to sit on an “ordinary office” chair, I found out that I cannot work for more than an hour or two before I get serious pain in my back—this seems to be related to the monitor being beneath the eyes and the inability to recline—I (like to) have the monitor exactly at the eye level and looking slightly upwards.
As far as mattresses go, it’s important to note that it’s not all about price. When I read a guide by a German consumer advice group they made the point that it’s important to actually test the mattress in person to see how it fits your individual preferences.
Before of Other Optimizing here. You’re going to see a lot of “This mattress is the best thing I’ve ever slept on!,” and it may not be the case for you. Second Christian’s advice to actually go into a store and sleep on a mattress.
I bought this and it’s amazing. I was sleeping on a $900 spring mattress, and this is so much better in every respect. It’s held up for 1.5 years, now, and is just as nice as the day I got it.
That looks really nice. Makes me want to research durability some more and to compile a list of things to spend money on, inspired by the recent post on a similar topic.
My father is one of the patent examiners for mattresses. I brought him along the last time I bought a mattress. His recommendation was like ChristianKl’s: try different mattresses and see what’s comfortable. Cost and comfortableness are not necessarily related. Whether or not you find it comfortable in the store is the best indication of whether you’ll find it to be comfortable at home. Pick the cheapest one you find comfortable. With that being said, you might find some more expensive mattresses last longer, though he indicated that most mattresses are designed to wear out around the same time. Also, he’s highly skeptical of the value of memory foam and other things you see on TV, so don’t think those things are necessarily better.
For what it’s worth, he sleeps on a waterbed. I am unsure, but I think the choice might be motivated by my mother’s allergies; waterbeds can’t absorb allergens by their design.
Mattresses aren’t the only thing you can sleep on. I’d consider picking up and installing a hammock—they’re not only cheap (~$100 for a top of the line one, $10 and 2 hours for making your own), but they also give you significantly more usable living space.
You can always have a hammock in addition to, rather than instead of, a traditional bed. Or you can use the next-best piece of furniture for that purpose.
It doesn’t take that much to get a memory foam mattress these days, and I get the impression it’s totally worthwhile. (I’ve had my Tempur-Pedic for a bit over 3 years now, and enjoy it quite a bit. I noticed, among other things, that I then started thinking of hotel beds, even in nice hotels, as bad.)
Not an answer, but I did discover kneeling chairs, because I am also in the market for a new chair. I’d try one with back support, but none of the reviews of the products on amazon compel me to make any purchases.
Beware that you need to “try” these chairs, and you need to pay attention to clothing when you try them too. A chair that’s super comfortable with jeans and a winter coat might turn out to be an absolutely horrible back-twisting wedge of slipperiness once you’re back home in sweatpants and a hoodie. Or in various more advanced states of undress.
In practice, this is relevant once you’ve already bought a chair and want to maximize the comfort you can get from it, balanced against the difference of comfort you could buy & chance of getting that comfort (or some lower value, or some higher) & money you’d need to spend.
When purchasing a new chair, I don’t think this will be an important factor in the overwhelming majority of situations.
It would be convenient if, when talking about utilitarianism, people would be more explicit about what they mean by it. For example, when saying “I am a utilitarian”, does the writer mean “I follow a utility function”, “My utility function includes the well-being of other beings”, “I believe that moral agents should value the well-being of other beings”, or “I believe that moral agents should value all utility equally, regardless of the source or who experiences it”? Traditionally, only the last of these is considered utilitarianism, but on LW I’ve seen the word used differently.
Right. Many people use the word “utilitarianism” to refer to what is properly named “consequentialism”. This annoys me to no end, because I strongly feel that true utilitarianism is a decoherent idea (it doesn’t really work mathematically, if anyone wants me to explain further, I’ll write a post on it.)
But when these terms are used interchangeably, it gives the impression that consequentialism is tightly bound to utilitarianism, which is strictly false. Consequentialism is a very useful and elegant moral meta-system. It should not be shouldered out by utilitarianism.
it doesn’t really work mathematically, if anyone wants me to explain further, I’ll write a post on it.
Please do. I think it also would be valuable to refresh people’s memories of the difference between utilitarianism and consequentialism, and to show many moral philosophies can fall under the latter.
Many people use the word “utilitarianism” to refer to what is properly named “consequentialism”.
I tend to do that.
What is the difference? According to Wikipedia, Egoism and Ethical Altruism are Consequentialist but not Utilitarian. I think it might have something to do with your utility function involving everyone equally, instead of ignoring you or ignoring everyone but you.
I strongly feel that true utilitarianism is a decoherent idea (it doesn’t really work mathematically, if anyone wants me to explain further, I’ll write a post on it.)
Because of interpersonal utility comparisons, or what? That might affect some forms of preference utilitarianism. Hedonistic and “objective welfare” varieties of utilitarianism seem like coherent views to me.
After reading this main post, it dawned on me that the scary sounding change terminal goals technique is really similar to just sour grapes reasoning plus substituting them with lower hanging grapes, that would eventually get you to the higher hanging grapes you originally wanted.
I typically refrain from deluding myself to think that I don’t want what is hard to attain, because I know I really do want it. With sour grapes reasoning I can pretend to not want my original goal as much as I now want another more instrumental goal. I feel like this helps me cope and be more productive, instead of frustrating myself with hard to define terminal goals.
At first I thought that changing terminal goals would be kind of a hard mind hack to put to use, but now that I think about it, it’s actually quite easy to carefully delude myself. This hack doesn’t have to just apply to lofty terminal goals, it can apply to goals that are just simply not in your locus of control, like getting the job or making the team. Didn’t get that internship? “Pfft, I didn’t really want it anyway, I really just want to practice and learn these skills to be an awesome programmer.” Didn’t make the cut for this year’s team? Pfft, I really just want to have an awesome crossover and sweet jump shot.”
“A community blog devoted to refining the art of human rationality”
Of course people will be drawn to this site: Who does not want to be rational? Skimming around the topics we see that people are concerned with how to make more money, calculating probabilities correctly and to formalise decision making processes in general.
Though there is one thing that bothers me. All skills that are discussed are related to abstract concepts, formal systems, math. Or in general things that are done more easily by people scoring high on g-heavy IQ tests. But there is a whole other area of intelligence: Emotional intelligence.
I seldom see discussions relating to emotional intelligence, be it techniques of CBT, empathy or social skills. Sure, there is some, but far less than there is of the other topic. How do I develop empathy? How do I measure EQ? Questions that are not answered by me reading LessWrong.
Off the top of my head, some good top-level posts touching on this area: How to understand people better (plus isaacschlueter’s particularly good comment) and Alicorn’s Luminosity sequence. Searching gives maybe a partial match for How to Be Happy, which cites some studies on training empathy and concludes that little is scientifically known about it—still, I think a top-level post on what is known would be welcome. Swimmer963′s post on emotional-regulation research is nice.
CFAR also places more explicit emphasis on emotional awareness, and that sometimes comes up in the group rationality diaries.
I think one reason that these topics are relatively neglected is that people seem to develop social skills and emotional awareness in pretty idiosyncratic ways. Still, LW seems to accept more personal accounts, like this post on a variation on the CBT technique of labeling. So it seems worthwhile to post things along those lines.
I agree, there is alot of talk about mathematics and formal systems. There is big love for Epistemic Rationality, and this is shown in the topics below. Some exceptions exist of course, a thread about what type of chair to buy stands out.
But I agree, Emotional Intelligence is a large set of skills underappreciated here, and I admit though I have some knowledge to share on the subject, I do not feel particularly qualified to write a post on it.
But I agree, Emotional Intelligence is a large set of skills underappreciated here, and I admit though I have some knowledge to share on the subject, I do not feel particularly qualified to write a post on it.
I wonder how many people we have that are knowledgeable on that subject. Maybe those who feel qualified to write such a post feel intimidated to do so. In that spirit I encourage you to start the tide and write about what you think is important.
I got a lot better at empathy from actively trying to understand people in contexts that 1) I wasn’t emotionally tied up in, 2) were challenging, and 3) had concrete success/failure criteria. It is a fun game for me.
The way I did this was to gather up a group of online contacts and when they’d have issues like “I want to be more confident with women” or “I want to not be afraid of speaking in class” I’d try to understand it well enough that I could say things that would dissolve the problem. If the problem went away I won. If it didn’t then I failed. No excuses.
I’ve gotten a lot better and it has been a pretty perspective changing thing. I’m quite glad I did it.
I don’t really see the point. On the first page of Discussion there currently “On Straw Vulcan Rationality” with is about the relation of rationality to emotions which has a lot to do with emotional intelligence.
There also “Applying reinforcement learning theory to reduce felt temporal distance”, “Beware Trivial Fears”, “How can I spend money to improve my life?” and “How to become a PC?”.
I think “On Straw Vulcan Rationality” illustrates the issue well. Here on Lesswrong there are people who actually think that Vulcans do things quite alright. In an environment where it’s not clear that one shouldn’t be a Vulcan it’s difficult to communicate about some aspects of emotional intelligence.
Recently asked for ways to find a career for himself but it it all in the third person instead of the first. My post suggesting that he should change to first person was voted down because it was to far out of LW culture.
If I’m around people who do a lot of coaching changing someone who speaks in third person about his own life to first person to increase his agentship is straightforward advice. It’s a basic.
I had experience where encouraging a person to make that change produced bodylanguage changes that are visible to me because the person is more associated with themselves. On the other hand I’m hardpressed if you ask me for peer reviewed research to back up my claim that it’s highly useful to use the first person when speaking about what one wants to do with his life.
Not being able to rely on basics makes it hard to talk when on Lesswrong we usually do talk about advanced stuff.
I see your comments are downvoted quite often. They sometimes contain some element of emotion or empathy. If it was possible to view down- and upvotes seperately you’d see that my post garnered quite some downvotes, meaning that there actually are quite a few people who either think that the topic is well covered by LW or it does not have a place on LW. I obviously disagree with both positions.
You say you don’t see the point of doing this here on LW, can you then point me to a site where they ‘start at the basics’? I refuse to give in to the meme “being a Vulcan is perfectly fine”.
You say you don’t see the point of doing this here on LW
No, in that case I wouldn’t write the comments that are downvoted. I do have a bunch of concepts in my mind that I can use to do stuff in daily life. But my understanding is not high enough at the moment to reach academic levels of scrutiny.
I do have a bunch of mental frameworks from different context that I use. My main framework at the moment is somato-psychosomatic. From that framework there nothing published in English. But even if you could read German or French and read the introductory book I doubt it would help you. The general experience is that people who don’t have in person experience with the method don’t get the book.
Books are usually limited in teaching emotional intelligence. I have heared that there are good self study books for cognitive behavior therapy but I don’t have personal experience with them.
Next I do recommend mediation. It builds awareness of your own state of mind.
I would recommend a teacher but if you just want to do it on your own I would recommend a meditation where you focus on something within your own body like your breath.
If you are a beginner I would recommend against meditating by focusing on an object that’s external to your body. As far as sitting position goes, sitting still in a chair does it’s job. For beginners I would recommend against laying down.
Taking different positions does have effects but if you think that meditation is about sitting in lotus position, you focus on the wrong thing.
Emotions are something that happens in your own body. People usually feel emotions as something that moves within their own body.
But you also need some cognitive categorization to have an emotions. Fear and anticipation are pretty similar on a physical level but have other attached meaning. The meaning makes us enjoy anticipation and not enjoy fear.
Both the meaning as well as the physical level are points of intervention where one create change.
If I personally have an emotion I don’t want to have I strip it of meaning and resolve it on the physical level. I think I do that through using qualia that I learned to be aware of while doing meditation.
When talking in person it’s possible to see body language changes to verify whether someone switching to being aware of his emotion. It’s on the other hand nearly impossible through this medium to get an idea of what qualia other people on lesswrong have at a particular moment in time.
Someone suggests that planning is bad for success. There is very little research cited, however (there is one study involving CEOs). Is there more confirming / invalidating evidence for this idea somewhere?
Marine biologist Mike Graham, for example, was giving a lecture to Finding Nemo’s (2004) animators when the director asked him “if there was one thing that the fi lm might get wrong that would really disturb him.” An account of this meeting in Nature shows how Graham’s answer created a predicament for the animators: “Quick as a fl ash, Graham said the most intolerable outrage would be to see kelp — a type of seaweed that only grows in cold waters — depicted in a coral reef. There was an uncomfortable shuffling in the audience. Then a voice from the back called out: ‘Better not go see the movie then.’ But if you check out your video or DVD, you’ll see there is no kelp. After Graham raised his objections, every frond was carefully removed from each scene, at considerable cost.”
Filmmakers in the 1950s and 1960s… had to rely on propane tanks to mimic the exhaust coming off a rocket in space. When gas leaves the tank it “curls” as the atmosphere causes it to form vortices. In a vacuum gas does not behave in this manner, so these films were inaccurate in this respect. During production for Deep Impact, Chris Luchini explained this to the propmakers regarding the rocket exhaust as well as the comet ’ s outgassing. Liquid nitrogen helped them get around this problem for the rocket exhaust, but for safety reasons they were unable to utilize this for outgassing jets. When Luchini saw a rough cut of the film he noticed the curling of the gas off these jets. He mentioned this error to a special effects technician who used a CGI wipe effect to remove the curling days before the film’s premiere. Such a fix would have been impossible prior to the development of CGI technologies. Although CGI work can be expensive and difficult, it is often easier and cheaper to fix scientific inaccuracies during postproduction than it would be to struggle with them during production. In this case, they were able to rectify an error days before the release of the film.
On LW Wiki editing: in addition to the usual spam, I occasionally see some well-meaning but marginal-quality edits popping up on the side bar. I understand that gwern cleans up the spam, but does anyone have the task of checking bona fide edits for quality?
To be a Bayesian in the purest sense is very demanding. One need not only articulate a basic model for the structure of the data and the distribution of the errors around that data (as in a regression model), but all your further uncertainty about each of those parts. If you have some sliver of doubt that maybe the errors have a slight serial correlation, that has to be expressed as a part of your prior before you look at any data. If you think that maybe the model for the structure might not be a line, but might be better expressed as an ordinary differential equation with a somewhat exotic expression for dy/dx then that had better be built in with appropriate prior mass too. And you’d better not do this just for the 3 or 4 leading possible modifications, but for every one that you assign prior mass to, and don’t forget uncertainty about that uncertainty, up the hierarchy. Only then can the posterior computation, which is now rather computationally demanding, compute your true posterior.
Since this is so difficult, practitioners often fall short somewhere. Maybe they compute the posterior from the simple form of their prior, then build in one complication and compute a posterior for that and compare and, if these two look similar enough, conclude that building in more complications is unnecessary. Or maybe… gasp… they look at residuals. Such behavior is often going to be a violation of the (full) likelihood principle b/c the principle demands that the probability densities all be laid out explicitly and that we only obtain information from ratios of those.
So pragmatic Bayesians will still look at the residuals Box 1980.
As a counterargument to my previous post, if anyone wants an exposition of the likelihood principle, here is reasonably neutral presentation by Birnbaum 1962. For coherence and Bayesianism see Lindley 1990.
Edited to add: As Lindley points out (section 2.6), the consideration of the adequacy of a small model can be tested in a Bayesian way through consideration of a larger model, which includes the smaller. Fair enough. But is the process of starting with a small model, thinking, and then considering, possibly, a succession of larger models, some of which reject the smaller one and some of which do not, actually a process that is true to the likelihood principle? I don’t think so.
I published an article titled The Singularity and Mutational Load in h+ magazine about using eugenics to increase intelligence by reducing harmful mutations. The best way to create friendly AI might be to first engineer super-genius into a few of our children.
I had the word “eugenics” in the title but the editor took it out because of a possible negative reaction. (It’s standard practice for editors to change titles so he certainly did nothing wrong.)
That’s rather like a premise from Heinlein’s Beyond This Horizon, which is not an argument against it, just a historical note.
The idea seems plausible enough to be worth testing in animals. I don’t feel very sure about how much it would contribute to a positive Singularity, but it also doesn’t sound like it would increase risk significantly.
Approximately how long do you think it will take for reducing mutational load to come into common use?
I think our ability to keep mice confined in labs is up to the challenge, even very healthy and relatively intelligent mice......um, except for the risk of lab staff taking the mice home for pets or to win mice shows or to sell to journalists or something.
Even if the mice get out due to the staff having excessive mutational loads, I think you’d get rapid reversion to the mean when the edited mice bred with wild mice.
Lab mice’s brains are noticeably smaller than those of wild mice, primarily because they are horrifically inbred (and need to be for a lot of the genetic experiments to work properly).
There are similar issues with most of the lab organisms. My lab yeast that have been grown continuously in rich media with odd population structure (lots of bottlenecks) since the eighties have about a third the metabolic rate of wild isolates, and male nematodes of the common laboratory strains can hardly mate successfully without help.
Actually you don’t technically help them mate, you just make a strain that can’t reproduce via hermaphrodites self-fertilizing. You keep the males from being out-bred that way.
C. elegans has male and hermaphrodite sexes, not male and female. The hermaphrodites self-fertilize slowly to produce a few hundred hermaphrodite offspring, while mating with a male gives them many times as many offspring with half being male. But the lab-bred males are so bad at mating that even if you have a population that’s half male, they get massively outbred by the hermaphrodites selfing, and over a very few generations maleness just falls out of the population. You’ll wind up with about 0.1% of the population being male in the equilibrium due to the occasional hermaphrodite egg dropping an X chromosome during development (no Y chromosomes in this species, males just have one X), but they are continually diluted out by the hermaphrodites.
What you do is breed in a genetic change that makes the hermaphrodite’s sperm fail without affecting the male’s sperm, preventing selfing from producing any offspring. The occasional successful male mating is productive enough that they can still on average replace themselves and their partner and then some, it just has a much longer doubling time and thus when in competition with selfing gets diluted out.
More recent wild isolates can still mate well (and also show a lot of interesting social behavior you don’t see in the long-established lab strains) and their populations remain just under half male for a long time. Dunno what happens when you let the two populations mix.
EDIT: and just so you know, I upvoted ‘very carefully’
or even worse create smart, fast breeding creatures.
Not a problem. Keep in mind that if you let creatures without mutational load breed naturally, the amount of mutational load will increase until it reaches equilibrium.
There is a mismatch when you cite Shulman-Bostrom for 1 in 10 selection raising IQ by 10 points. Most of your article is about mutational load and how you don’t need to understand the role of any particular mutation to know how to correct it, but that paper assumes knowledge of how genes affect IQ.
This is why the sentence that links to the Shulman-Bostrom estimate contains the qualification “(not just for minimizing mutational load)” . An estimate that just considered mutational load would have been better, but I didn’t have one.
As I mentioned to you when you asked on PredictionBook, look to the media threads. These are threads specifically intended for the purpose you want: to find/share media, including podcasts/audiobooks.
I also would like to reiterate what I said on PredictionBook: I don’t think PredictionBook is really meant for this kind of question. Asking it here is fine, even good. It gives us a chance to direct you to the correct place without clogging up PredictionBook with nonpredictions.
Recently there were afewposts about using bikes as transportation. This left me curious. Who are the transportation cyclists at LessWrong? I am interested in hearing your reasons for choosing cycling and also about your riding style. Do you use bike infrastructure when available? Do you take the lane? I’m especially interested in justification for these choices, as choices in the vehicular cycling (criticism of vehicular cycling) vs. separate bike infrastructure debate don’t seem to always be well justified. (To outsiders, vehicular cyclists might be considered the contrarians among bicyclists.)
I ride in the bike lane most of the time, in the left half of it to be out of range of car doors. Depending on traffic, I often take the lane before intersections to avoid right-hook collisions. (My state’s driver’s handbook is pretty clear on drivers being required to merge into the rightmost (bike) lane before turning right but hardly anyone actually does this.) I also take the lane when making a left turn, and when there isn’t actually room for someone to pass me safely on the left but there might be room for a poor driver to think he can do so.
I don’t use bike paths much because (a) separate bike infrastructure doesn’t go most places I want to go and (b) when it does go where I want to go, separate bike infrastructure is often infested with headphone-wearing joggers who can’t hear my bell so I have to go very slowly or weave between them. When the joggers aren’t too numerous (e.g. if it’s raining) I do enjoy bike paths for recreation though.
I started biking for transportation when a friend gave me a bike that had been sitting in her basement for a year gathering dust. It turned out to be as fast as taking the bus and also a lot cheaper. I had a low income at the time, so frugality was a huge motivation, but it turned out to be fun as well. There’s also a great feeling of freedom in not having to check the bus schedule before you go somewhere. (For various reasons car ownership is not a viable option for me, though I’m thinking of getting a zipcar membership.)
My first transportation bike was a 40lb mountain bike, but when I moved to a hilly city this year the weight was a problem. I didn’t shop around much for a replacement, just got the first road-bike-like-thing I found at a garage sale. It has upright handlebars but otherwise appears to be a standard road bike (except for being 40 years old and French and having nonstandard bolt sizes, but what do you expect from a yard sale?) and I’m very happy with it. I can go straight up hills where I used to have to get off and walk.
I suppose I am getting health benefits from biking, or at least it seems to be getting easier with time, but exercise isn’t really a goal for me. I rarely bike fast enough to get tired or out of breath.
I cycle as my main form of transport around where I live (in the UK, so a bunch of this may be weird to you US people). Most common journey is to work and back (~1.5 miles, takes me about 10 minutes on the way there and 15 or so on the way back due to hills). I do this every weekday and also cycle to leisure/hobby locations, supermarket, etc.
Reasons for choosing cycling:
Habit. It’s been my main form of transport for about 6 years now and I cycled a fair bit before that too.
It’s free other than the initial cost of the bike (and I would want to own a bike even if it wasn’t my main form of transportation) and occasional maintenance costs. Overall, over the lifetime of the bike, unbeatably cheap.
It’s a lot quicker than walking (especially downhill!). It’s also a lot quicker than driving over the short distances that I mostly cover, on roads that are often blocked up with traffic that I can easily cycle around. On most of the routes I regularly cycle, it’s far quicker than any of the public transport options too, especially if you count waiting time.
It’s a lot better for the environment than driving.
It’s a good way to incorporate a little bit of extra activity into my day.
It’s easy to park a bike, virtually anywhere, for free. Most places I cycle to are in the middle of a city and parking the car there would be either prohibitively expensive or, more likely, impossible.
It’s flexible. I can jump on my bike at a moment’s notice and go from door to door rather than having to faff around defrosting the car, checking that it has petrol, finding somewhere to park, etc etc, or waiting for a bus.
If I’m lost, it’s dead easy to stop at the side of the road and check where I’m trying to get to, and I can walk back along the pavement if it turns out I’m on the wrong track. These things are often not easy when driving!
I enjoy the opportunity to spend a little bit of time outdoors just about every day; I feel it creates a nice gap between activities/work/etc. Of course I moan like crazy about this when it rains heavily, but I still do it.
I certainly do cycle on bike paths where they’re available, but nearly all my regular routes are just on primarily residential streets. Sometimes there are bike lanes in the road, which is fine and obviously I ride in them, but it doesn’t make me feel that much safer as they are shared with buses and often contain parked cars that are liable to open their doors without warning. Depending on the type of road, the situation, and the turn I’m about to take next, I either ride most of the way over to the left (staying out of cars’ way but not rubbing right up against the kerb, and looking ahead to pull out around a parked car if necessary) or take the lane (if there’s not room for a car to reasonably overtake me, if I’m riding at/near the speed limit on a steep downhill, if I’m about to turn right).
I generally feel fairly safe while cycling. I wear a helmet 95% of the time, and use lights at night (which cyclists legally must here). I’m normally a fairly defensive/paranoid cyclist: I slow down if I’m not sure what a car is doing, I practically insist on eye contact with the driver before I will cycle across someone waiting to turn out of a side street, I always look over my shoulder, I don’t run through red lights, etc. I’ve had about 3 “near misses” in the last 6 years of cycling virtually every day, all caused by cars that looked straight at me but did not see me. No actual accidents.
If you are going to naively follow a system in America,* vehicular cycling is safer than naive use of car lanes, which is safer than bike lanes, but far better than these systems is to understand the source of the danger, to know when bike lanes help you and when they hurt you, to know when it’s important to draw attention to yourself and how to do it.
I think that there is some very important context missing from that critical article you cited. “Bicycle lanes” means something very different in the author’s Denmark than Forester’s America. Bike lanes in America are better than they used to be, but in the past their main effect was to kill cyclists. As a bicyclist or pedestrian, it is very important to learn to disobey traffic laws. They are of value to you only as they predict the actions of the cars. What is important it to pay attention to the cars and to know how the markings will affect them. The closest I have come to collisions, as a pedestrian, as a bicyclist, and even as driver, is by being distracted from the real danger of cars by the instructions of lane markings and traffic signals.
* and probably the vast majority of the world. The Netherlands and Denmark are obvious exceptions. Perhaps there are lots of countries where basic bike lanes are better than nothing.
You are right. It is important to recognize that the law and safety may not overlap, especially in states where use of bike infrastructure is required by law.
I think that there is some very important context missing from that critical article you cited. “Bicycle lanes” means something very different in the author’s Denmark than Forester’s America. Bike lanes in America are better than they used to be, but in the past their main effect was to kill cyclists.
Yes, this is a good point. There are other cultural differences in Denmark that are relevant as well, primarily that cyclists and drivers are more willing to follow the law. For example. I have read the cyclists running red lights is not a significant issue in continental Europe, while in North America and the UK it’s fairly common.
I highlighted that article mostly because its reasoning is very common for bike infrastructure proponents. Bike infrastructure proponents tend not to talk about safety directly. What they do talk about is increasing the number of cyclists, and they criticize vehicular cycling as unable to do this. The critical article’s author Mikael Colville-Andersen writes: “There is nowhere in the world where this theory [vehicular cycling] has become practice and caused great numbers of citizens to take to the roads on a daily basis.”
The cherry picked study I mention seems to have multiple issues, though I admit I have not looked closely at it. Some vehicular cyclists have said the cycletracks in the study had few intersections (I haven’t verified this). The study also suggests that intersections are slighter safer than straight segments of road. Basically all other research I’ve seen suggests that intersections are much more dangerous, which makes me not trust this study. I think this result might be due to their strange control strategy, though I’m not sure.
I have never seen a detailed analysis of all bike safety issues, combining the safety in numbers effect with the other known issues. My thinking was that cyclists on LessWrong would be more informed in these areas, and I’d be interested in hearing their reasoning. Perhaps I’ll have to do my own analysis of all the different effects in combination.
Infrastructure can work, but it’s good to know where it works best (probably higher speed areas), what is necessary for it to be and seem safe, and also what’s cost effective. I’ve discussed with a bike advocate before that they shouldn’t focus too much on expensive infrastructure projects, and they’d do better to lower speed limits and add speed control features to certain roads.
I cycle as my main form of transportation. I chose cycling partly to save money and partly for exercise. I ride a flat bar touring bike with internal hub gears. I ride in a vehicular style, following the recommendations of “Cyclecraft” by John Franklin. This helps acheive the exercise goal, because vehicular cycling is impossible without a good level of fitness.
I’ll use high quality infrastructure when it’s available, but here in the UK most cycle infrastructure is worse than useless. We have “advisary cycle lines” in which cars can freely drive and park, so their only function is to promote conflict between cyclists and drivers. We have “advanced stop lines” at junctions which can only be legally entered through a narrow left-side feeder lane, placing the cyclist at the worst place posible for negotiating the junction. We have large numbers of shared use cycle paths which are hated by both cyclists and pedestrians.
I’d prefer to live in the Netherlands where high quality infrastructure is common. I have no confidence that the UK government can provide similar infrastructure here. Most politicians have no understanding of utility cycling and design facilities only considering leisure cycling. There’s a big risk that if some minor upgrades are provided cyclists will be compelled to use them, resulting in a network that’s less useful than the existing roads.
Infrastructure quality is a major issue. I don’t mind infrastructure at all as long as it is done well. Most of the infrastructure I have seen is not done well.
The infrastructure we have here in the US tends to be terrible, though perhaps for different reasons than in the UK. As an example, consider the recent cycletrack in where I live, Austin, TX. This cycletrack is a disaster as far as I’m concerned. Local bike advocates say that it’s Dutch style infrastructure, but it really isn’t. In the Netherlands, the intersections are separated with a bikes-only part of the light cycle. The current setup has no such separation, and encourages conflicts with motorists as far as I can tell. This is particularly bad where the cycletrack ends, as the road markings make cars and bikes cross, and drivers basically never yield or even look as they are required to. I just ride in the normal lane unless I’m stopping off somewhere on the cycletrack.
I had no idea vehicular cycling was a thing, but most of the recommendations on the wikipedia page are commonly accepted as good cycling safety when there’s no bike lanes—and around here bike lanes are rare. I’ll use bike lanes if they’re available and clear of obstructions, and I won’t take a lane unless the lane’s too narrow to share (like on a bridge or in construction) or unless I can keep up with traffic. I always signal, use turning lanes, stop at lights and stop signs, etc, as expected by the MTO guidelines. I ride a hybrid bicycle instead of a road bike because of cost, posture, and the condition of the roads.
As for why? Health benefits, pleasure, and I arrive at work more awake and alert.
A RationalWiki article on neoreaction, by the estimable Smerdis of Tlön. Also see his essay. I found this particularly interesting, ’cos if I’d picked anyone to sign up then Smerdis—a classical scholar who considers anything after 1700 dangerously modern—would have been a hot prospect. OTOH, he did write one of the finest obituaries I’ve ever seen.
Adaptation to environments, including social environments, through natural and sexual selection is the linchpin of evolution. Remembering this means knowing why scientific racism is ridiculous. To argue that races or ethnic groups differ innately in intelligence, however defined, is exactly equal to an assertion that intelligence has proven less adaptive for some people than for others. This at minimum requires an explanation, a specifically evolutionary explanation, beyond mere statistical assertion; without that it can be assumed to be bias or noise. Since most human intelligence is in fact social intelligence—the main thing the human mind is built for is networking in human societies—a moment’s reflection should demonstrate why this is an unlikely scenario.
(bolded part mine)
Shouldn’t this part be uncontroversial? Brains are expensive.
“Beyond mere statistical assertion” So his response to “All the statistics show racial IQ differences” is simply to say “that’s irrelevant unless you have a concrete theory to explain why that happened”? A moment’s reflection dismissing something as an “unlikely scenario” is exactly the opposite of how science should be done.
If there is a single variant with large effect, like torsion dystonia, then its appearance in one group is likely due to different tradeoffs. But if IQ is driven by mutational load, populations might differ in age of reproduction and thus in mutational loads without having different tradeoffs between traits. In the long run, elevated mutational load should select for simplified design, but that could be a very long run.
Of course, but the distinction isn’t useful in this context. Proxies for intelligence, like large heads, became maladaptive, so intelligence itself declined along with cranial size. It remains valid for the original argument—that the assertion that for some groups, large craniums (or other traits that augment intelligence) may have become a liability, isn’t controversial.
Have certain human societies been less full of complicated humans since the Toba bottleneck? Remember that human genetic diversity is quite low compared to other species.
Even if the Machiavellian intelligence hypothesis is correct, it isn’t invulnerable to selective forces pushing in the other direction, like parasite load, lack of resources, small founder populations, island dwarfism, and so on. We’ve seen the Flores hominids, we know it happened.
Human intelligence won’t miraculously keep increasing in any and all environments. Lack of genetic diversity doesn’t factor into it.
They might have a small point in that evolution assumes that human beings, no more than any other individual animal, are not fungible: they each carry different genes that express as varying traits. The latest euphemism, “human biodiversity”, is particularly galling gibberish. Biodiversity has an established meaning that you don’t get to usurp. Last time I looked, humans were not facing any obvious genetic bottlenecks. There aren’t really many that count as relict cultivars of tomatoes or goats. Efforts to preserve diversity in human genomes seem.… unnecessary. When they go extinct, it won’t be for lack of genetic diversity; just that intelligent life is a self-limiting phenomenon.
As with much on rationalwiki, it’s just dismissive rather than a logical argument or evidence. We have clear evidence of relatively recent genetic influences on human evolution in Lactose Tolerance and both Tibetan and Andean adaptations for high altitude. Not to mention HBD isn’t an attempt to “preserve” the diversity but to actually acknowledge it.
shrug. That’s at best a nitpick. It’s a minor side issue to whether what HBD proponents talk about is actually true or if true how it’s relevant. Everyone is guilty of all sorts of cheap rhetorical tricks. One could even say that attacking a movement, the implications of which are potentially EXTREMELY important on a semantic point is a rhetorical trick, and not an expensive one at that.
I don’t think that knowing someone is an altruist tells you much about his moral framework.
The phrase “in our current situation” is also weird given that there are plenty of readers who are in substantial different situations from each other.
Let’s be more narrow and talk about middle-class professional Americans. And lets take a pass on the “pure altruist” angle, and just talk about how much altruistic good you do by having a child (compared to the next best option).
For having a child, it’s roughly 70 QALYs that they get to directly experience. Plus, you get whatever fraction of their productive output that’s directed towards altruistic good. There’s also the personal enjoyment you get out of raising children, which absorbs part of the cost out of a separate budget.
As far as costs go, a quick google search brings up the number $241,000. And that’s just the monetary costs—there are more opportunity costs for time spent with your children. Let’s simplify things by taking the time commitment entirely out of the time you spend recreationally on yourself, and the money cost entirely out of your altruism budget.
So, divide the 70 QALYs by the $241k, and you wind up with a rough cost of $3,400 per QALY. That completely ignores the roughly $1M in current-value of your child’s earnings (number is also pulled completely out of my ass based on 40 years at $60k inflation-adjusted dollars).
So, the bottom line is whether or not you enjoy raising children, and whether or not you can buy QALYs at below $3,400 each. There’s also risks involved—not enjoying raising children and having to reduce your charity time and money budget to get the same quality of life, children turning out with below-expectation quality of life and/or economic output, and probably others as well.
There’s also the question of whether you’re better off adopting or having your own, but that’s a separate analysis.
Doubtful. The pure altruist would concentrate all their efforts on the single activity with the highest marginal social return. Several times per day that activity would be eating, because eating prevents a socially beneficial organism from dying. Eating has poor substitutes, but there are excellent substitutes for personally having a child (e.g. convincing a less altruistic couple to have another child).
there are excellent substitutes for personally having a child (e.g. convincing a less altruistic couple to have another child).
Not all children are of equivalent social benefit. If a pure altruist could make a copy of themselves at age 20, twenty years from now, for the low price of 20% of their time-discounted total social benefit—well, depending on the time-discount of investing in the future, it seems like a no-brainer.
Well, unless the descendants also use similar reasoning to spend their time-discounted total social benefit in the same way. You have to cash out at some point, or else the entire thing is pointless.
Sure, your children can be altruists, but would raising your children have highest marginal return? You only “win” by the amount of altruism your child has above the substitute child. So if you’re really good at indoctrinating children with altruism, you would better exploit your comparative advantage by spending your time indoctrinating other people’s children while their parents do the non-altruistic tasks of changing diapers, etc. Children are an efficient mechanism for spreading your genes, but not the most efficient mechanism for spreading your memes.
I agree with christian that the question is poorly formed. For one thing, it depends on if the altruist believes in eugenics and has good genes, or does but has bad genes, or doesn’t etc. An altruist who was healthy and smart and believed in eugenics might try to spread their genes as far and wide as possible, which could result in lots of unprotected sex and kids that don’t have a parent! Another question is, what if they’re an anti-natalist? Anti-natalism can be a fundamentally altruistic position.
it depends on if the altruist believes in eugenics
I don’t think the answer of a question about the morality of actions depends of the beliefs of the people involved (you can probably construct edge cases where it does); the answer to “Is it okay for bob to rape babies?” doesn’t depend on bob’s beliefs about baby-rape.
Note that the original question wasn’t “Is it right for a pure altruist to have children?”, it was “Would a pure altruist have children?”. And the answer to that question most definitely depends on the beliefs of the altruist being modeled. It’s also a more useful question, because it leads us to explore which beliefs matter and how they effect the decision (the alternative being that we all start arguing about our personal beliefs on all the relevant topics).
No. The mythical creature consults the Magic 8 ball of “Think a minute” which says “Consequences fundamentally not amenable to calculation, costs quite high” and goes and takes soil samples/inspects old paint to map out lead pollution in the neighborhood instead. Removing lead pollution being far more certain to improve the world.
Having kids is not an instrumental decision. One does not have kids for “the sake of the future” or any such nonsense—trying that on would likely lead to monumental failure at parenting. One has kids because one is in a situation in which one believes one can do a good job of parenting, and one wishes to do so.
It is often cited that one of the reasons for the slow development of an AGI is the amount of computing power and space required to process all the information.
I don’t see this as a major roadblock as it would mainly give the AGI a broader understanding of the world, or even make a multi-domain expert system that could appear to be an AGI.
Assuming the construction of an AGI turns out to be an algorithmic one, it should be able to learn domains as it needs them. What sort of data would you use to test a newly built AGI algorithm?
You’ll want to give it as little data as possible, in order to be able to analyze how it is processing it. What Deepmind are doing is put their AI prototypes into computer game environments and see if and how they learn to play the game.
Yes, and the tricky problem is to work out what data to give it in the first place. Do you give it core facts like the periodic table of elements, laws of physics, maths? If you don’t give it some sort of framework/language to communicate then how will we know if it is actually learning or just running random loops?
I fail to see the problem. We can see how it gains competence, and that is evidence of learning. It works for toddlers and for rats in mazes, why wouldn’t it work for mute AGIs?
I’m dealing with a bout of what I assume is basically superstition. Over the last 10 years, I’ve failed disastrously at two careers, and so I’ve generalized this over everything: I assume I’ll fail at any other career I want to pursue, too.
To me, this isn’t wholly illogical: these experiences prove to me that I’m just not smart or hard-working enough to do anything more interesting than pushing paper (my current job). Moreover, desirable careers are competitive practically by definition, so failing at every other career I try is an actual possibility.
Theoretically, perhaps I just haven’t pursued the career I’m “really” talented at, but now I’m far too old to adequately pursue whatever that might be. (There’s also the fact that sometimes I feel so discouraged that I don’t even WANT to pursue a career I might like ever again, but obviously that’s a different issue.)
I obviously don’t want to be one of those mindless “positive thinking” idiots and just “go for it” and “follow my heart” and all that crap. And I assume you guys won’t dish out that advice. But am I overreacting here? Is it in fact rational to attempt yet another career, or is it safe to assume any attempt will most likely fail, and instead of expending energy on a losing battle, I may as well roll over and resign myself to paper-pushing?
Purely financially speaking, the costs of a career transition could range from opportunity costs to hundreds of thousands of dollars in debt if I decide to get a masters or something. Opportunity costs would be in the form of, say, foregoing income I would get from more intently pursuing my current field (e.g. becoming a paralegal, which is probably the most obvious next step) instead of studying another field and starting all over again with an entry-level position or even an unpaid internship.
Although less pay might sound rather benign, the idea of making less than I do now (which isn’t much) is rather horrifying. But then again, so is being condemned to a lifetime of drudgery. And I could potentially make a lot more in a different career track, meaning I would break even at some point.
Between those two extremes include things like spending several hundred dollars on classes or certifications.
I’m not married and don’t have kids, so luckily those sorts of concerns don’t enter the equation.
The emotional costs of trying another career (if you want to take those into account as well) would be utter heartbreak from failing yet again (if that’s how it turns out).
Another intangible cost is leisure time. Instead of pursuing another career on my off-hours, I could be playing video games, sleeping, working out, hanging out with friends, etc. Although those things might seem like not a big deal to sacrifice in the name of a better career, after wasting years of my life on fruitless pursuits, I can’t help but feel that “hard work” just isn’t worth it. I may as well have been playing video games that whole time.
We’re not speaking generally. You will have to make a decision about your life so you need to estimate your costs for a specific career move that you have in mind.
lol I almost added a sort of disclaimer addressing that. Yes, I am definitely clinically depressed—partly due to my having failed so epically, imo, but of course I’d say that. ::eyeroll:: However, I don’t see the benefit in just discounting everything I say with the statement “you’re depressed.” Not that you did, but that’s the usual response people usually seem to give.
No one succeeds constantly. Success generally follows a string of failures.
Yeah, so they say. But you have to admit that the degree of success and the length of strings of failures are quite different for each person. If that weren’t true, then every actor would be a movie star. Moreover, success is never guaranteed, no matter how many failures you’ve endured!
So, um, don’t you want to try to fix that? Until you do your judgement of your own capabilities is obviously suspect, not to mention that your chances to succeed are much diminished.
Sigh, well, I’ve been trying to fix it for about ten years (so as long as I’ve been failing. Coincidence?? Probably not). I’m on 2 anti-depressants right this minute (the fourth or fifth cocktail of which I’ve tried). I’ve gone through years of therapy. And the result? Still depressed, often suicidally.
So what else am I supposed to do? I refuse to go to therapy again. I’m sick of telling my whole life story over and over, and looking back on my past therapists, I think they were unhelpful at best and harmful at worst (for encouraging me to pursue my ludicrous pipe dreams, for instance). Moreover, talk therapy (including cognitive behavioral therapy, which some say is the most effective form) is, according to several meta-studies I’ve looked at, of dubious benefit.
I could try ECT, but apparently it has pretty bad side-effects. I’ve looked into submitting myself as a lab rat for deep brain stimulation (for science!), but haven’t been able to find a study that wouldn’t require quitting my job and staying somewhere across the country for two months. So here I am.
But if we can sidestep the ad hominem argument for a moment, it sounds like you’re saying that my aversion to failing at something else is irrational. Would you mind pointing out the error in my reasoning? (This sort of exchange is basically cognitive behavioral therapy, btw.)
it sounds like you’re saying that my aversion to failing at something else is irrational.
That’s really irrelevant at this point. If you are clinically depressed, this is sufficient to explain both your failures and your lack of belief in your ability to succeed.
I am not a doctor and don’t want to give medical advice, but it seems to me that getting your depression under control must be the very first step you need to take—before you start thinking about new careers.
If my depression does explain my failures, then I really am pretty much destined to fail in the future since this appears to be treatment-resistant depression and as I described, I’ve run out of treatment options. Thanks anyway.
I agree with Lumifer that your priority should be treating your depression. Also, consider that your depression likely is making you pessimistic about your prognosis.
For about 5 or 6 years I had a treatment resistant fungal infection. I had to try 6 different antifungals until I found one that had some effect, and I tweaked the dosage and duration for most of those to try to make them work better. The last medication I tried didn’t work completely the first time, so I increased the dosage and duration. That totally wiped out the fungus. If you asked me if I ever thought I’d get rid of the fungal infection 6 months before I finished treatment, I’d have said no.
Knowing which antifungal medications didn’t work actually was the key to figuring out what did work. My doctor selected an antifungal medication which used a mechanism different from that of any other treatment I tried. I suggest that you look at which mechanisms the drugs that you have tried use and see what other options exist. There are many more depression treating drugs than antifungal drugs, and many more mechanisms.
You mentioned a few other non-pharmaceutical options you’ve considered. If you haven’t already considered it, I might suggest exercise. There seems to be reasonable evidence that exercise helps in depression. Anecdotally, I’ve read of several people who have claimed that running in particular cured their depression when nothing else provided much help. (I’ve suggested this to others before, and they generally think “That’ll make me feel worse!” People generally seem to discount the idea that as they get into better shape, exercise will become easier, enjoyable even.)
consider that your depression likely is making you pessimistic about your prognosis.
Yes, I’ve heard this before, but I don’t see why any reasonable, non-depressed person would be pessimistic about it. As I’ve said, it’s not like this is the first time I’ve ever been depressed in my life and I’m irrationally predicting that I can’t be cured. And I’ve heard stories like yours before: people who were depressed until they found the right combination of medications. But in my situation, my psychiatrists have gone back and forth between different combinations and then right back around to the ones I already tried. Changing them up YET AGAIN just feels like shuffling the deck chairs around on the Titanic (but of course I’d say that). If there are tons more different medications to try as you assert, none of my psychiatrists seem to know about it.
To be fully clear, anti-depressants have had an effect on me. I definitely don’t feel unbearably miserable and anxious as I do without them. They just haven’t gotten me to 100%.
I GUESS I could ask my psychiatrist to try yet another combination I haven’t tried before. But it just sounds like a nuisance, frankly.
As for exercise, yes, I’ve heard that countless times. I used to be much more active, and don’t recall it ever having a palpable effect on my mood. Nowadays, it’s just not gonna happen. I’ve tried to get myself to exercise, with some occasional success, but with my work schedule, when it finally comes time to do it, I flatly refuse. You could say, “you just gotta find something you enjoy!!” But I’m depressed! I enjoy nothing! (/sarcasm) I guess I could make sure to make time for hiking (probably what I enjoy the most) or get a membership at an expensive gym near work (which would be the most convenient arrangement for me) but the fact that exercise never particularly had an effect on me makes me not particularly motivated to do so.
If there are tons more different medications to try as you assert, none of my psychiatrists seem to know about it.
Wikipedia lists many. I count 21 categories alone. I would suggest reading at least a bit about how these drugs work to get some indication of what could work better. Then, you can go to your psychiatrist and discuss what you’ve learned. Something outside of their standard line of treatment may be unfamiliar to them, but it may suit you better.
For my last antifungal treatment, I specifically asked for something different from what I had used before and I provided a list of antifungal meds I tried, all of which were fairly standard. My doctor spent a few minutes doing some searches on their computer and came back with what ultimately worked.
Huh, interesting. Up-managing one’s doctor seems frowned upon in our society—since it usually comes in the form of asking one’s doctor for medications mentioned in commercials—but obviously your approach seems much more valid. Kind of irritating, though, that doctors don’t appear to really be doing their job. :P
The exchange here has made me realize that I’ve actually been skipping my meds too often. Heh.… :\ So if I simply tighten that up, I will effectively increase my dosage. But if that doesn’t prove to be enough, I’ll go the route you’ve suggested. Thanks! :)
SSRIs, for example, aren’t supposed to do anything more than make you feel not completely miserable and/or freaked-out all the time. They are generally known to not actually make you happy and to not increase one’s capability for enjoyment. If you are on one, and if that’s a problem, you might actually want to look at something more stimulant-like, i.e. Bupropion. (There isn’t really another antidepressant that does this, and it seems unlikely you’ll manage to convince your psychiatrist to prescribe e.g. amphetamines for depression, even though they can work.)
And then there is, of course, all sorts of older and “dirtier” stuff, with MAOI’s probably being something of a last resort.
Yeah, that accurately describes their effect on me.
I used to be on Buproprion, but it had unpleasant physical effects on me (i.e. heart racing/pounding, which makes sense, given that it’s stimulant-like) without any noticeable mood effects. I was quite disappointed, since a friend of mine said he practically had a manic episode on it. However, I took it conjunction with an SNRI, so maybe that wouldn’t have happened if I’d just taken it on its own.… Idk.
I’m actually surprised my psychiatrist hasn’t recommended an MAOI to me in that case, since she freaks the hell out when I say I’m suicidal, and I’ve done so twice. I’ll put MAOIs at the bottom of my aforementioned new to-do list. :)
As far as depression goes curetogether has a list of things that it’s users found helpful.
I don’t think gyms are ideal. Going to the gym feels like work. On the other hand playing a team sport or dancing doesn’t. At best a weekly course that happens at a specific time where you attend regularly.
You should check out my response to one of the other comments—I think it’s even more “yes, but”! I kind of see what you mean, but it sounds to me like just a way of saying “believe x or else” instead of giving an actual argument.
However, the ultimate conclusion is, I guess, just getting back on the horse and doing whatever I can to treat the dysthymia. I’m just like… ugh. :P But that’s not very rational.
It sounds like you’re saying that my aversion to failing at something else is irrational. Would you mind pointing out the error in my reasoning? (This sort of exchange is basically cognitive behavioral therapy, btw.)
Many of the things that you have said are characteristic of the sort of disordered thinking that goes hand-in-hand with depression. The book Feeling Good: The New Mood Therapy covers some of them. You may want to try reading it (if you have not already) so that you will be able to recognize thoughts typical among the depressed. (I find some measure of comfort from realizing that certain thoughts are depressive delusions and will pass with changes in mood.)
As a concrete example, you said:
I’m just not smart or hard-working enough to do anything more interesting than pushing paper (my current job).
These are basically the harshest reasons one could give for failing at something. They are innate and permanent. An equally valid frame would be to think that some outside circumstance was responsible (bad economy, say) or that you had not yet mastered the right skill set.
I am thoroughly familiar with Feeling Good and feel that I can argue circles around it. My original statement (that I’ll fail at everything) is an example of “overgeneralization” and “fortune telling.” But this sounds to me like just a way of stating the problem of induction: nothing can ever be certain or generalized because we don’t know what we don’t know etc. etc. However, science itself basically rests on induction. If I drop a steel ball (from the surface of this planet), will it float, even if I think positively really hard? No. It won’t. Our reason makes conclusions based on past evidence. If past evidence suggests that attempts lead to failure, why ISN’T it reasonable to assume that future attempts will lead to failure? Yes, the variables will be different, I guess, but it’s still a gamble. If you think I should give it a go anyway, then you may as well advise me to buy lottery tickets, imo. And I just can’t dredge up the sufficient motivation to pursue something with chances like that.
An equally valid frame would be to think that some outside circumstance was responsible (bad economy, say) or that you had not yet mastered the right skill set.
Kind of funny that you suggest blaming external forces instead of taking personal responsibility, but okay. I would say the latter is the case for me: I did not master the sufficient skill set, even after ten years or whatever. The people who are successful in my field do so MUCH earlier. So, okay, I didn’t master the right skill set. I don’t see how that’s supposed to make me feel any better. It doesn’t change my shitty situation. And it only makes me question, well why didn’t I? I wanted to; I attempted to. Clearly, I did something wrong. I either don’t have sufficient talent at my field or talent at learning to have mastered those skills.
But those are innate and permanent traits, which you (and many others) apparently consider invalid, which I don’t really get, but I’ll accept it for the moment. So due to non-innate and temporary faults, I failed to achieve my objectives. Again, how is this supposed to make me feel better? Because I’m supposed to believe those faults have mysteriously vanished, or I can work to improve them? Even if that’s so, the rewards that are reasonable to expect from attempting to improve them seem so minimal at this point that, again, it doesn’t seem worth bothering about. I’m willing to concede that this is depressive thinking, but it seems to me more like a difference of opinion than disordered reasoning: some people think hard work with little reward or low chance of a big reward is fun; I do not. It’s no different that my hating a movie you like and vice versa.
I asked that of someone else when they made the same statement about ECT. The most common side effect is memory loss. You prompted me to look into the details, and I guess I wouldn’t mind losing a couple months of memory (usually the only permanent effect). However, the jury appears to be out on ECT as well, so it may not even be worth it.
I actually looked at that exact website you linked to about ketamine. I’m all for it! However, all those studies are also across the country from me. Although you could say that quitting my job and staying across the country for 2 months is worth the chance of treating my depression, I’m not certain that the possible benefits outweigh the risk of being unemployed, and potentially for a long time, given the current labor market. After all, I could end up just in the control group and have no treatment at all, or the treatment could be ineffective.
Also, just for the sake of clarity, I was wrong: I’m actually not clinically depressed; I have dysthymia, which is a chronic low-grade depression (i.e. I can still function, go to work, seem normal, etc.). Maybe this is why none of my psychiatrists have recommended ECT, even when I was suicidal? Idk.
A few months ago, I came across this discussion about RationalPoker.com. I found it interesting and I stored it away in the back of my mind for a time when I had the money and time to play poker. Last week, I made the jump and deposited $200 into an online poker account. I have been studying up on good online play at (2 + 2)[http://forumserver.twoplustwo.com/], Pokerology and Play Winning Poker. Sadly Rational Poker has only a few post before going dormant.
From what I gather, playing poker is an exercise in individual instrumental rationality, and I thought some LWers would be players themselves, even semi-professional or professional players, or at least some might be interested in learning. Is anyone here a winning poker player, and if you are, how did you become good?
I know how egoistic this comment risks sounding, but: many different people (at least half a dozen) have independently expressed to me that they find the links that I post on social media to be consistently interesting and valuable, to the point of one person claiming that about 40% of the value that she got out of Facebook was from reading my posts.
Thus, if you’re not already doing so, you may be interested in following me on social media, either on Facebook or Google Plus. I’m a little picky about accepting friend requests on FB, but anyone is free to follow me there. If you don’t want to be on any of those services, it’s apparently also possible to get an RSS feed of the G+ posts. (I also have a Twitter account, but I use that one a lot less.)
On the other hand, if you’re procrastination-prone, you may want to avoid following me—I’ve also had two people mention that they’ve at least considered unfollowing me because they waste too much time reading my links.
I can confirm that you and gwern are my favourite reads on Google+ (though I don’t visit neither Google+ nor Facebook very often).
I was about to post a comment that said the same!
Most Interesting quote I found in first 5 minutes of browsing your G+ feed:
-- Matt Jones & Bradley C. Love, Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition.
I would love more detailed/referenced high-level analyses of different approaches to AI (e.g. connectionism v. computationalism v. WBE).
I suppose this would be a good place to start at the very least:
I’m curious why this was down voted.
I thought this was an excellent quote from his newsfeed and that it was good evidence that his feed was worth reading. Then, I indirectly asked if he had any similar links/resources, since I thought the quote was so good.
I didn’t down-vote you.
But, really? Is this the most interesting quote you could find in Kaj’s thread? Your quote is long, dry, super-technical, and maybe interesting only to experts. You might argue that the last part carries the general insight that criticism helps the development of new ideas, but it’s still too dense.
To illustrate my point, let me pick a semi-random (I scrolled a bunch randomly and picked without reading) quote from his thread:
I don’t question that what you quoted would’ve been very interesting for you, but I suspect you’re an expert (or an experienced amateur at least), and I think you underestimated inferential distances.
Thanks! Mind Projection Fallacy on my part. I’m currently trying to pick a topic for my Master’s thesis, and high level overviews of AI-related are very interesting to me.
Likewise, I don’t think that quote is particularly interesting—mainly because I don’t see how I could use it to change my behavior/strategy to achieve my goals.
In summary, Kaj’s feed has interesting information on a wide variety of topics, a subset of which will probably be interesting to many of the people reading this.
Also, I found another similar link on Kaj’s blog: http://kajsotala.fi/2012/09/introduction-to-connectionist-modelling-of-cognitive-processes-a-chapter-by-chapter-review/
I don’t do G+, but can confirm the awesomeness of Kaj’s Facebook feed.
This is awesome.
Here is the abstract of a paper in Neuroscience Letters. The paper is titled “Early sexual experience alters voluntary alcohol intake in adulthood”.
And the abstract goes
.
.
.
.
.
.
and the abstract continues
I propose that from now on the titles of all papers about physiology and psychology should be read with ”...in hamsters” appended to them.
How do you organize your computer files? How do you maintain organization of your computer files? Anyone have any tips or best practices for computer file organization?
I’ve recently started formalizing my computer file organization. For years my computer file organization would have been best described as ad-hoc and short-sighted. Even now, after trying to clean up the mess, when I look at some directories from 5 or more years ago I have a very hard time telling what separates two different versions of the same directory. I rarely left README like files explaining what’s what, mostly because I didn’t think about it.
Here are a few things I’ve learned:
Decide on a reasonable directory structure and iterate towards a better one. I can’t anticipate how my needs would be better served by a different structure in the future, so I don’t try that hard to. I can create new directories and move things around as needed. My current home directory is roughly structured into the following directories: backups, classes, logs, misc (financial info, etc.), music, notes, projects (old projects the preceded my use of version control), reference, svn, temp (files awaiting organization, mostly because I couldn’t immediately think of an appropriate place for them), utils (local executable utilities).
Symbolic links are necessary when you think a file might fit well in two places in a hierarchy. I don’t care too much about making a consistent rule about where to put the actual file.
Version control allows you to synchronize files across different computers, share them with others, track changes, roll back to older versions (where you can know what changed based on what you wrote in the log), and encourages good habits (e.g., documenting changes in each revision). I use version control for most of my current projects, even those that do not involve programming (e.g., my notes repository is about 700 text files). I don’t think which version control system you use is that important, though some (e.g., cvs) are worse than others. I use Subversion because it’s simple.
I store papers, books, and other writings that I keep in a directory named reference. I try to keep a consistent file naming scheme: Author_Year_JournalAbbreviation.pdf. I have a text file that lists my own journal abbreviation conventions. If the file is not from a journal, I’ll use something like “chapter” or “book” as appropriate. (Other people use softwares like Zotero or Mendeley for this purpose. I have Zotero, but mostly use it for citation management because I find it to be inconvenient to use.)
In terms of naming files, I try to think about how I’d find the file in the future and try to make it obvious if I navigate to the file or search for it. For PDFs, you often can’t search the text, so perhaps my file naming convention should include the paper title to help with searching.
README files explaining things in a directory are often very helpful, especially after returning to a project after several years. Try to anticipate what you might not remember about a project several years disconnected from it.
Synchronizing files across different computers seems to encourage me to make sure the directory structure makes at least some sense. My main motivation in cleaning things up was to make synchronizing files easier. I use rsync; another popular option is Dropbox.
Using scripts to help maintain your files is enormously helpful. My goals are to have descriptive file names, to have correct permissions (important for security; I’ve found that files that touched a Windows system often have completely wrong permissions), to minimize disk space used, and to interact well with other computers. I have a script that I titled “flint” (file system lint) that does the following and more:
checks for duplicate files, sorting them by file size (fdupes doesn’t do that; my script is pretty crude and not yet worth sharing)
scans for Windows viruses
checks for files with bad permissions (777, can’t be written to, can’t be read, executable when it shouldn’t be, etc.)
deletes unneeded files, mostly from other filesystems (.DS_Store, Thumbs.db, Desktop.ini, .bak and .asv files where the original exists, core dumps, etc.)
checks for nondescriptive file names (e.g., New Folder, untitled, etc.)
checks for broken symbolic links
lists the largest files on my computer
lists the most common filenames on my computer
lists empty directories and empty files
I’d be very interested in any other tips, as I often find my computer file organization to be a bottleneck in my productivity.
I have a folder that I do my short term work in:
D:/stupid shit that I can’t wait to get rid of
This is set to auto-delete everything in it weekly. I had a chronic problem where small files that were useful for some minor task or another from months or years ago would clutter up everything. This was my “elegant” solution to the problem and it’s served me well for years, because it gave me an actual incentive to put my finished work in a sensible place.
Although now that I think about it, it would be a better idea for it to only delete files that haven’t been touched for a week, rather than wiping everything all at once on a Saturday ..
The Linux program
tmpreaper
will do this. It can be made into a cron job. I’ve got mine set for 30 days.That’s an interesting way to force yourself to organize things, or at least pay attention to them. I might try this.
If you’re comfortable with command-line UIs, git-annex is worth a look for creating repositories of large static files (music, photos, pdfs) you sync between several computers.
I use regular git for pretty much anything I create myself, since I get mirroring and backups from it. Though it’s mostly text, not audio or video. Large files that you change a lot probably need a different backup solution. I’ve been trying out Obnam as an actual backup system. Also bought an account at an off-site shell provider that also provides space for backups.
Use the same naming scheme for your reference article names and the BibTeX identifiers for them, if you’re writing up some academic research.
GdMap or WinDirStat are great for getting a visualization of what’s taking space on a drive.
If your computer ever gets stolen, you probably want it to have had a full-disk encryption. That way it’s only a financial loss, and probably not a digital security breach.
It constantly fascinates me that you can name the exact contents of a file pretty much unambiguously with something like a SHA256 hash of it, but I haven’t found much actual use for this yet. I keep envisioning schemes where your last-resort backup of your media archive is just a list of file names and content hashes, and if you lose your copies you can just use a cloud service to retrieve new files with those hashes. (These of course need to be files that you can reasonably assume other people will have bit-to-bit equal copies of.) Unfortunately, there don’t seem to be very robust and comprehensive hash-based search and download engines yet.
Suggest it to the folks who run The Pirate Bay.
They probably know about it already. I think the eDonkey network is pretty much what I envision. The problem is that the network needs to be very comprehensive and long-lived to be a reliable solution that can be actually expected to find someone’s copy of most of the obscure downloads you want to hang on to, and things that people try to sue into oblivion whenever they get too big have trouble being either. There’s also the matter of agreeing on the hash function to use, since hash functions come with a shelf-life. A system made in the 90s that uses the MD5 function might be vulnerable nowadays to a bot attack substituting garbage for the known hashes using hash collision attacks. (eDonkey uses MD4, which seems to be similarly vulnerable to MD5.)
There’s an entire field called named data networking that deals with similar ideas.
There probably are parts of the problem that are cultural instead of technical though. People aren’t in the mindset of wanting to have their media archive as a tiny hash metadata master list with the actual files treated as cached representations, so there isn’t demand and network effect potential for a widely used system accomplishing that.
Zooko did this: Tahoe-LAFS
You can safely use it for private files too, just don’t lose your preencryption hashes.
Great suggestions.
This is very smart, and I’ll look into changing my bibiligraphy files appropriately.
I want to reiterate the importance of this. I’ve used full-disk encryption for years for the security advantages, and I’ve found the disadvantages to be pretty negligible. The worst problem with it that I’ve had was trying to chroot into my computer, but you just have to mount everything manually. Not a big deal once you know how to do it.
How can identical files be sorted by file size?
My wording was unclear. I sort the list of duplicate files by file size, e.g., the list might be like 17159: file1, file2; 958: file3, file4. This is useful because I have a huge number of small duplicate files and I don’t mind them too much.
Ah. Well, you’re right that it’s not easy to do that… Might want to subscribe to the bug report so you know if anyone comes up with anything useful: http://code.google.com/p/fdupes/issues/detail?id=3
My first reflex is to exclaim that I don’t organize my files in any way, but that is incorrect: I merely lack comphrehension of how my filing system works, it’s inconsistent, patchy and arbitrary, but I do have some sort of automatic filing system which feels “right” and when my files are not in this system my computer feels “wrong”.
Intriguing. Can you describe some of your automated filing rules? I am considering trying such a setup via fsniper.
I wouldn’t reccommend duplicating my filesystem (it’s most likely less useful than most filing systems which aren’t “throw everything in one folder/on desktop and forget about it”) but I’ll note some key features:
Files reside inside folder trees of which the folders are either named clearly as what they are, or in obsuficating special words or made up phrases (even acronyms) which have only special meaning to me in the context of that paticular position in the file tree.
Different types of files have seperate folders in places
Folder trees are arranged in sets of categories, sub categories and filetypes (the order of sorting is very ad-hoc and arbitrary) you could have for example: Media > Type of media > genre of media > Creator > Work but it could just as easily have Creator at the root of the tree.
I really suggest you just make your own system or copy someone else’s; it will more likely than not provide more utility.
Edit: just to be clear I don’t have any sort of automated software which organizes my files for me, I am merely saying that my mind organizes the files semiconciously so I’m not directly “driving” when the act of organizing occurs
The only thing that’s worked for me in the long term is making things public on the internet. This generally means putting it on my website, though code goes on github and shared-editing things go in Google Docs. Everything else older than a couple years is either gone or not somewhere I can find anymore.
For the last couple of years I have used Google drive exclusively for all new documents and am finding it works pretty well.
I use a simple folder structure which makes it a bit easier when you want to browse docs, though the search obviously works really well. root - random docs, works in progress, other |-- AI—AI notes, research papers, ebooks
|-- business—books, invoices, marketing notes, plans
|-- dev - used as a dumping ground for code when transferring between PCs (my master dev folder lives on the PC)
|-- study -course notes, lectures
The best part is that I can access them from home, work or on the road (android app works very well), so backups and syncing is not an issue.
For files on the home PC I use a NAS which is pretty amazing, and allows access from any home PC or tablet/phone via a mapped drive. The folder structure there is:
|-- photos—all pictures, photos, videos,
|-- dev—master location for all source code
|-- docs—master location for all documents older than 2 years (rest is on google drive)
|-- info—with lots of subfolders, any downloaded ebook, webpage, dataset that I didn’t create
I don’t use the clients but I am annoyed there isn’t a simple way to download all google docs to the computer in RTF/Word or even text format—you can do full backups , but it they only work with Google drive. I don’t think Google will go out of business any day soon, so it is not an imminent risk at this stage.
The main risk is not Google as a whole going out of business, but rather, withdrawing their support from the particular service you prefer.
Seems that tools like Google Drive take care of many issues you describe. Directory structure and symlinks are superseded by labels, version control is built-in, search is built-in and painless, synchronization is built-in, no viruses to worry about, etc.
That does sound nice. I wasn’t aware of the version control, and I’m somewhat curious how that would work. Thinking about it, I’d prefer the manual approach Subversion requires where I can enter a message each time. After doing a few searches, I’m not sure you can even get anything similar to a commit message in Google Drive. The commit messages I’ve found to be essential in decoding what separates older versions of files from newer ones.
There are some more practical issues for me. I run Linux. There’s no official Google Drive client for Linux, and last I checked the clients that exist aren’t good. I also sometimes work at a government science lab. They don’t allow any sort of cloud file synchronization software aside from their own version of SkyDrive, which requires me to log in via a VPN (and is a total pain). No idea if SkyDrive works on Linux, anyway. They don’t seem to be aware of rsync, thankfully. :-)
Every couple of weeks, Google Drive chooses an important document to lock me out of editing. This pretty much eliminates it as a serious solution for file management for me.
What excuse do they give?
Being on my way to friends and thinking about living in the city vs. living on the outskirts I had a thought: Though property prices in cities are higher anything else is much closer: Restaurants, shopping, ideally friends, and any public transport. This means that I spend much less time just getting around and commuting. Also I save some amount on heating as single houses necessarily are more difficult to heat.
So on one hand I spend more on rent but on the other hand I save on time, energy and transportation. So the “actual” cost of living in the city is lower than it might seem at first. Has anyone done an estimation of this “actual” cost or should I do it myself as kind of an exercise? I am aware that there are quite some parameters to consider such as personal preferences on having parks nearby, noise levels and my desire to go out.
If you live in a city, you can (and probably should) get away with not owning a car. Not only is it unnecessary to get where you want to go, but due to property prices, parking is a gigantic hassle and expense. Walking works well for anything within a mile, biking for anything within about 5, public transit or a cab for the metro area, and car rentals (or borrowing a friend’s) can fill in for anything else that absolutely requires your own vehicle.
Not owning a car saves a significant amount of time and money and makes the math better for living in a more built-up area.
I’ve lived car-free for several years now and I think it’s one of the best choices I’ve ever made. I’m saving a lot of money, staying in great shape from biking, and avoiding a stressful commute. Most people think I’m an eccentric for this, but I’m okay with that.
It depends on where you are—in certain places, public transportation sucks.
That very much depends on a particular city. And your lifestyle, of course.
True, some cities are much better built for that sort of thing than others. I had San Francisco, Seattle, New York City, and Valencia in mind specifically—less so Los Angeles and Dallas-Fort Worth.
Agreed with the lifestyle part, though—it’s really a question of how often you need to do things that require a car, and how expensive the next-best option is (taxi, car rental, ride-share, borrowing your neighbor’s). If you want to drive three hours to see your Mom every weekend, you probably don’t want to sell your car.
I would guess that it depends a lot of the particular city that you are talking about. That means it would be good if you make the estimation yourself.
Pointing me to data sources—especially in Europe—would be great, too.
Here’s an excellent calculator to estimate how much you’d save by not owning a car. It’s aimed at those in the US, but you still may find it to be useful.
Can we please not post on LW blatant propaganda pieces with dubious numbers? Million dollars, my ass.
Says more about the power of investment over 35 years than it does about bicycles, really.
I haven’t taken more than a glance over the grandparent’s calculator, but this shouldn’t be too hard to estimate. The Edmonds TCO calculator gives the five-year cost of ownership for a two-year-old base-spec Toyota Camry (as generic a car as I can think of) at $37,196, inclusive of gas, maintenance, etc. Assuming you buy an equivalent car every five years, that comes out to a monthly cost of $619. If you instead invested that money at a five-percent rate of return, then after 35 years of contributions, the resulting fund ends up being worth a hair over $700,000 -- not enough to fit the “millionaire” tag, but close.
There are plenty of less obvious costs associated with riding a bike instead of driving a car, of course—and some less obvious benefits. But the moral of the story is obviously “invest your money”.
This is a misleading calculation, since it assumes that the car has zero economic value over and above the bicycle. Whereas in fact there’s a very large value for many people, in terms of being able to move heavy things, go on vacations, get to and from work etc etc.
I disagree. Little of the value you mention is contingent upon owning a car. Renting cars as needed is very useful for those who don’t own cars (this is included in the calculator that sparked this discussion, in fact). It also offers a few other advantages: the flexibility of choosing a vehicle most appropriate for a task (e.g., Need more space? Get a truck.), the convenience of not doing maintenance, and the satisfaction of using a new car nearly every time.
Unless your job or other things in your life require you to move heavy objects frequently, not owning a car and renting is likely cheaper. Compare the costs of merely owning a car (thousands of dollars) to those of renting a truck from a local home improvement store (around $25/hour). I recently moved 1500 miles in a rented van. It was cheaper than using a car available to me for free with a trailer (the gas mileage made the difference), and offered a similar amount of space.
Vacations are easily done in rented cars in my experience.
You could make a good case that cars are useful for getting to and from work, but given the biases in thinking about commuting, this case is perhaps less strong than you imagine. Having tried driving, transit (bus and subway), carpooling, and cycling for my commute at different times in my life, I strongly prefer cycling for the cost, health benefits, low stress, and convenience. What you find works depends on too many things to list. (I have also read of people using rented cars for commuting, but I’m skeptical this works well.)
I think the best case for owning cars may be the convenience of picking up and dropping off of children. This seems to be the sticking point for many car-free individuals. Thankfully I’m a mild antinatalist, so this does not concern me.
In 4 years of not owning a car I have never found myself wishing I owned one. I can get all of the benefits of ownership that I care about at far lower costs.
There’s one non-obvious drawback to renting—if you want a non-average car (say, an SUV), there may not be one available.
This is a good point, and it’s worth noting that even making a reservation in advance might not guarantee what you want. When I was moving, Hertz accidentally rented the van my father and I reserved in advance to someone else. Ultimately they gave us a similar van and there were no major issues, but this still left me feeling uneasy.
I imagine that renting via a different service like Zipcar wouldn’t have this issue, though I haven’t used Zipcar yet.
ZIpcar has a different set of problems; there is probably a car of the specific type you want available, but getting to it (and from it) may not be convenient, because it has a specific place in the city that it lives. ANd if that’s 30 minutes away by your fastest non-car transportation, that’s pretty frustrating.
I suppose for moving etc., you could call a cab to get you from old home to the Zipcar, and then again from the Zipcar parking to new home. That feels strange and probably-inefficient, but I don’t have evidence to back up that feeling.
I can see how that would be frustrating. I guess my experience is not representative. The nearest Zipcar spot is a 2 minute bike ride from where I live. There’s also the option of car2go, which seems to have a much larger coverage where I live, but also no variety in car choices. I’m not sure how much the variety matters, as I would use a car only if I need to transport something large (and I might just use a truck there; Home Depot is 15 minutes away) or if I was going a long distance beyond where public transit takes me (> 10 miles).
I actually use both Zipcar and car2go, and find they complement each other pretty well. Car2go is good for things where you don’t need to transport anything but yourself (and possibly one other person) and expect to spend most of the time at your destination rather than traveling, and enables spontaneous decisions; Zipcar is good for transporting large things, making substantial grocery runs (i.e. a monthly trip to Costco for purchasing in bulk rather than weekly things like fresh fruit/vegetables), or whenever you expect to spend most of your trip traveling, or when you need to make reservations well in advance.
Ah yes, I had made the mistake of not looking through the link and so wasn’t clear on what was or wasn’t included. Thanks for flagging that.
I don’t mean to dispute your preferences; I take for granted that different options make sense for different people or for people in different stages of life. However, I’ve now had this conversation with bicycle advocates a few times, and they always seem to assume that their needs are a close proxy for my needs, and they’re not.
Looking through the details of the cost benefit analysis, there’s a bunch of factors that aren’t obvious that do need to be included.
I’ve had jobs where there wasn’t good transit, and where it wasn’t feasible to relocate myself. Much of the United States doesn’t have usable transit, and does have bike-unfriendly geography.
If you have to move furniture, yes, you can rent a truck. But if you have two weeks of groceries, or a passenger and a few suitcases, a car is fine, and a bike (even with a trailer) is not fine. Car share isn’t a perfect substitute, since often the car-share is a significant distance from where you live, and since it isn’t reliably there when you want it. There’s real economic value to having the car exactly where you want it, when you want it.
Renting cars for vacation or medium-distance overnight travel can be an option. It isn’t included the linked-to calculation and it can get expensive depending on whether you need to keep the car rented for the period in which you aren’t actively using it.
Right now, some people own cars and some don’t. I’m sure there’s some status quo bias and some bias in favor of social convention for owning the car. Beyond that, why do you think people are making this decision irrationally? There’s enough car-free folks that I think it’s not that hard to see what the benefits and costs of the lifestyle are.
I appreciate your response and interest. This post turned out to be rather long.
I don’t claim bikes are right for everyone, but I do claim that they are right for a much larger fraction of the population than most believe.
Also, I think if you tried switching to bikes you’d find a lot of your “needs” aren’t actually such, or can be fulfilled adequately or better in ways that do not require a car.
People who have never tried to buy groceries with a bike tend to think it’s difficult or impossible. It’s perfectly fine if you have a bike with baskets. I buy groceries once per week. If I had a trailer, I could easily go a month or longer.
Any reason you need two weeks’ worth of groceries at once? It’s probably just because that’s what you’re used to getting in your car. Going once a week is not bad, in fact, I prefer it because I can get fresh food. There’s another advantage: I buy only what is necessary. There’s no room for junk food.
As for taking a passenger and luggage, it is perfectly possible. The main issue is that most bikes are designed for recreation, not utility. People don’t complain that sports cars can’t move mattresses easily, so don’t do the equivalent for sports bikes. There are two-seated bikes, sidecars, and cargo bikes. Such things are uncommon, but they do exist. A second bike is also a possibility for a “passenger”.
You are overly pessimistic about car-share. I suspect that you overestimate how often someone would use car-share, and underestimate the reliability of such services. I haven’t used car-share once in the 6 months or so since I signed up, but I have checked it a few times. Every time I checked, cars were available. Car-share services have an incentive to be reliable.
As for the distance, that could be an issue, but most people who choose to not own cars move to places that fit their needs. If that includes car-share, they’d have it. Plus, car-share services are growing very quickly, so if it’s not there now, wait.
There also exist less formal car sharing services, and there’s always the possibility of bumming a ride off a friend.
The value is largely subjective, and in my experience it tends to evaporate when you don’t own a car, though that might be selection bias. Also, don’t discount the disadvantages in your value calculation.
Great question. I’m not entirely sure. I’ll list what I can think of.
First, I think very few people rationally think about their transit choices. To most people, driving is synonymous with transportation. Status-quo bias and familiarity are big factors. Consider it a learned cached thought.
Second, I don’t think most drivers understand the disadvantages of driving. Having discussed this with a number of people, they seem to be incredulous that driving could be unhealthy or expensive.
In the bicyclist literature, many people write about how they were genuinely surprised by how much money they have saved since they stopped driving. I suspect part of it is that few see the costs added up in total. Rather, you see them in smaller chunks: your car payment, your gas trip, your repair bill, etc. In his book How to Live Well Without Owning a Car, the author (Chris Balish) describes how he sold his car unintentionally early and started taking public transit. He thought this was temporary because he was waiting to buy a new car. At the end of the month he was checking his bank account and was absolutely shocked to see that he had $800 more than he usually does at the end of the month. He wasn’t sure if there was a mistake, so he calculated how much his car was previously costing him, and sure enough, it came out to $800 per month.
Third, there really does seem to be something outright irrational about people’s driving behavior. Yvain briefly mentioned this in his post titled Rational Home Buying. The best summary I have seen of this topic is in a book titled Commuting Stress. Researchers have repeatedly shown people are willing to commute by car for long distances in ways that do not compensate for anything they get out of it. It doesn’t matter that their job pays well, or that their house is large and cheap. They’re just stressed out and miserable from the commute. The book also details how people tend to prefer driving even when public transit is cheaper and faster. As I recall, the book suggested that the main factor behind these findings is the perception of control that cars provide. I’m not entirely sure I buy this theory or that I’m remembering it right.
Fourth, many people’s self-worth and status are related to their cars. Their car is part of their identity. They don’t make the choice to drive for rational reasons. There’s a similar group of bicyclists, though they are far fewer in number. A bike is a fashion accessory to these folks.
Fifth, there’s also the fact that (in the US) our transportation system is designed primarily for cars. All others modes of transportation are afterthoughts, which tends to make them inconvenient, dangerous, and/or inadequate. I hear from many people that they would ride bikes if it were not so dangerous. This is not necessarily irrational, but I think some folks overstate the danger of biking or understate the danger of driving.
I disagree. In my immediate social circle, I know zero people who don’t own cars or who don’t have access to a family car or something similar. I imagine this is true of most people in the US. To meet other folks like that I have to go to bicyclist meetups.
Yes. There are people who would be better off with a bike than a car. I take that point and I believe it’s easier than some people think.
The problem is that carfree isn’t the right choice for everybody and it’s not always obvious from the outside who it is or isn’t appropriate for. If you aren’t careful, advocacy here can come off as thinking you know your interlocutor’s life and needs better than they do. (Which is a bit irritating.)
I’m going to describe a bit about my experiences, just so that you and the readers have a sense why somebody might reasonably benefit from owning a car.
I live in a small town in the northeast. We don’t have particularly good public transit in, out, or around town. By transit, it’s about two hours from my door to the nearest major city. It’s less than half that by car. It’s cold and slushy here a lot and therefore not a particularly pleasant place to bike.
The grocery stores I typically go to are five miles away, via a major expressway that isn’t bike-safe. (The ones that are bikeable are small and expensive.)
When last I checked, there were only four car-share vehicles within ten miles. They’re heavily used.
I’m in a medium-distance relationship, and that involves a lot of medium-distance travel with overnight stays. Having a car makes this much cheaper for a given amount of visiting, and I think it’s worth some money to see my special friend.
I have some experience with the car-free life. I used to live in a dense urban area. Most of my friends didn’t own cars and didn’t want to. I myself was happily car-free for five years. I did the math before buying the car, and I’m pretty sure I come out well ahead.
I would be interested to calculate the health benefits and costs. I suspect this is hard to do, because the risk of bike accidents is hugely variable depending where you live and where you travel.
Driving sounds best in your case.
I understand how other-optimizing can go wrong. Different circumstances make different solutions optimal. A lot of what you described fits with my earlier knowledge. Still, previously I hadn’t considered that relationships could be an issue, but I’ve now learned better.
I have found that many drivers is are prone to other-optimizing cyclists. People frequently request (actually, insist) to give me rides because they think what I’m doing is dangerous or a bad idea for other reasons (convenience, largely). They see it as doing me a favor, but it’s actually rather annoying. I will oblige sometimes, but mainly when the weather is bad, or if it’ll help the requester feel better. I have never seen bicycle advocates be so assertive. Imagine if bike advocates regularly insisted that no, you aren’t going home in your car, you are taking a bike. Car “advocates” (if you will) have done the same for me many times.
This is definitely hard, and it hasn’t been done yet for anywhere in the US. I previously wrote a post about the net effects of cycling on health, and in short, the only good study I could find on the subject used data from Europe. Europe is generally considered to be much safer for cyclists than the US. Many cycling advocates in the US cite this report as affirmation that cycling has net health benefits without realizing why it does not apply. I am not sure whether the average net health effects in your typical US city are positive, and I lean towards negative for the moment.
Another factor (that few recognize) is that the health benefits are reduced for people who are in good shape. I run fairly regularly, and thus the health benefits of cycling are limited for me. Though, the cycling has worked out well to keep me in reasonable shape when I’ve been too busy to run.
One can turn any expense into a high number by applying some not-quite-realistic rate of return[1] over a long period of time. I remember reading a web comic which applied this procedure to an iPhone, with enough creativity you could probably make coffee at Starbucks into a million-dollar expense as well.
In some sense it is true, if you invest regularly and wait a long time you’ll likely accumulate considerable savings. But singling out one particular expense for that kind of treatment, without the context which you provided above, is exactly what Lumifer called it: blatant propaganda.
[1] E.g., William Bernstein “Five Pillars of Investing” cites 3.5% long-term real (ie after inflation) rate of return from stocks.
The corresponding number at 3.5% is $500,000. I wasn’t trying to argue for any particular value, merely that the cited value isn’t wildly off base, and that long-term investment is how you get into its neighborhood.
Yes, I understand that and I didn’t mean to criticize your argument, which is good, I meant to attack the original source which was trying to impress the audience with a large number without explaining where it really comes from (which you did explain). Sorry that I didn’t express this more clearly.
The cited value isn’t wildly off base, in the same sense it wouldn’t be wildly off base to say that if you work at McDonald’s and invest every penny you made, after 40 years you’ll be a millionaire. So car ownership is really expensive in the same sense in which McDonald’s pays really well.
Sure; I don’t dispute any of that.
Not to accuse you of doing this, but I’m a little bemused at how my post seems to have been taken as a broad apologetic for the ancestor’s cost calculator when I was trying to make the point that, when you’re playing with values in the high hundreds of dollars per month, conclusions like “investing this will make you a millionaire after 35 years” prove a lot less than they sound like they do. Hell, the first sentence even says that bicycles aren’t the important thing to be thinking about there.
So in other words, I think we agree. I could probably have been clearer with my examples, though.
Any particular asset you have in mind, one that would provide a certain 5% return after inflation and over 35 years, no less..?
Ain’t no such thing.
You seem to be reading in specificity that I didn’t put there. There aren’t any securities that can provide a guaranteed 5% rate of return after inflation, but those kinds of returns are fairly reasonable for a well-diversified portfolio (though they will of course go down from time to time). Maybe a little high, and mea culpa if so, but certainly not high enough to rate a “my ass”.
Not really. I think that picking an arbitrary number and compounding it far into the future is a very flawed method of estimating future value—and that’s not just because this number is arbitrary.
My ass was specifically upset about the million dollar figure that the linked-to web page prominently waved about, not about anything in your post.
Some of the default numbers in the calculator do seem high to me, but that’s not the point. Not owning a car does save a lot of money. Do you own calculation with your own numbers if you are skeptical. I linked to this one because I think it is comprehensive.
Can I save more money by not owning a Lamborghini? How about if I don’t own a yacht, can I save a few millions that way?
This forum declares it is in favor of raising the sanity waterline. The linked-to calculator lowers it.
I’ll unpack what I mean by “how much you’d save”. The savings is between two hypothetical situations. So yes, given the choice between buying a Lamborghini and not, you’d “save” money by not buying one. The same applies for a yacht. This language is commonly used colloquially.
Even if you didn’t include inflation, you’d end up with a large number for the default settings: about $275,000. This isn’t like the iPhone case Aleksander mentioned. Cars are quite expensive, and people do actually pay hundreds of thousands of dollars over their lifetimes. If your issue is with the inflation, consider the cost without inflation. If the duration is the issue, look at the costs for a single year. If you have a particular problem with any of the costs cited in the calculator, I’d be interested in learning which costs you think are unrealistic and why. Otherwise, I don’t understand your argument.
Doesn’t that make you suspect that this particular way of comparing things is nonsensical?
Yes, it is commonly used to mislead.
My general problem with the linked-to web page is that it is precisely the thing that LW tries to teach to ignore.
It is not helpful advice or good evidence—it is a blatant attempt to force the reader to a predetermined conclusion. It is deliberately dishonest.
I don’t like manipulative propaganda which tries to pretend it’s just supplying facts.
It makes perfect sense to me even if it’s not strictly correct. Your pedantry is what makes the least sense to me here.
Let’s say someone is buying a house. There are two houses which are more or less identical, but house A costs 10 times as much as house B. Do you not think it is makes sense to choose house B because it “saves” money? The car ownership comparison is between two vehicles, not between doing nothing and buying something if that is your issue.
I am seeing a trend. You make many assertions without evidence. Explain why you think this is “manipulative propaganda” or how you know the author is “deliberately dishonest”.
The most you’ve indicated so far is that you don’t like the investment analogy, though you ignored the part of my post that showed the investment analogy is not necessary to show that not owning cars is cheaper, or that you think the specific numbers in the calculator are “dubious”, but you have neglected to specify which you think are dubious.
If the calculator is trying to force the reader to a predetermined conclusion, I don’t think the author of it is doing such a great job given that you can change every number in it.
We agree that the million dollar number is high, and I do think the rate of interest is too high in the calculator. But, I’m afraid you’re being vague and hyperbolic otherwise.
I’m sorry, are you asserting that a car is “more or less identical” to a bicycle?
This is turning into a pissing match which I don’t have much interest in at the moment. I stand by my opinions which you are, of course, free to disregard.
No, the point of my analogy was to highlight the cost difference part of the reasoning, not to say that cars and bikes are largely equivalent. Both have very different advantages and disadvantages as means of transportation.
Given that you already seem to have disregarded much of what I’ve written, I’ll oblige.
I recall, but am unable to find, a small study that looked into living in typical American suburbs and driving vs. living in the center city and taking public transit, walking, or biking. As I recall, the authors concluded that either is comparable in total costs for the “average” city. If that’s true, then I think it’s a strong case for living in the city given that people underestimate how stressful their commutes are and that you’ll save time.
Others, especially bicycle advocates, have made the same comparison. If you don’t own a car, you can turn your savings into higher rent. In my experience, you’ll easily save more money and time going the car-free route. I’d recommend the book How to Live Well Without Owning a Car for an introduction to this lifestyle. Note that this lifestyle is not for everyone, but I do think it’s a good idea for a large segment of the population.
Edit: I think this is the “study” I referred to above. It’s interesting to see how my memory distorted things; I thought of this as an academic study, and couldn’t find it among the papers I saved. No wonder, as it was merely a newspaper column.
Some time in the past couple hours, I got karmassassinated. Somebody went through and downvoted about 30 or so comments I’ve made, including utterly uncontroversial entries like this one and this one. It’s a trivial hit for me, but I mention it in case anyone is gathering data points to identify the source of the problem.
Curious, I was just now seeking the latest Open Thread so that I could make the same observation—with near the same wording. If my memory of previous vote counts is correct then the change for me was exactly −3 across the board, for uncontroversial posts as much as the controversial ones. I wonder if our interactions in the past day or so include any overlap with respect to who we were arguing with. That wouldn’t be nearly enough evidence to be confident about the culprit(s?) but enough to prompt keeping an eye out. Like you the hit is trivial to me (it doesn’t put a dent in the ~30k karma and even the last week karma remains distinctly positive).
Those karmassassains (and some others who share their ill-will but have different ethics) may be pleased to note that I’m likely to give them exactly what they want. This is a rare enough response for me that I can’t help but share my surprise. I am candidly and highly averse to supplying an incentive structure whereby defective behaviour is rewarded with desired outcomes rather than worse outcomes. As a core aesthetic that pattern is abhorrent to me. Yet even for me the preference has limits and the opportunity cost for satisfying that preference can be too high.
There are many people on lesswrong that I respect and value discussing and exploring new concepts with. Yet by the very nature of internet forums the people who are most valuable to talk to aren’t the ones you end up talking to the most. Simply because putting “I agree” all over the place is considered spam and it is hard to reply in a cooperative ‘agree and elaborate to keep the ideas flowing’ manner because people are so damn conditioned to consider all replies to be somewhere at the core arguments opposing them or to be condescension.
I decided six months ago that for me personally the impulses regarding people wrong on the internet are too much of a liability now that the demographic here has changed so drastically from when we seeded the site with the OvercomingBias migration. I might try back in another six months—or perhaps if I reconfigure my supplement regime more in the direction of things that I know increase my inclination towards navigating petty social games elegantly. For now, however, real world people are just so much more enjoyable to talk to than internet people.
To the various folk I’ve been chatting to over PM: I’m not snobbing you, I’m just not here.
I don’t understand: are you leaving the forum because of the karmassassins or because of “people wrong on the internet”? These seem like very different reasons.
You’d be missed, wedrifid. Not that my opinion counts for much (being one tenth the veteran you are), but there you have it.
(I did downvote you occasionally. I also am in favor of more explicit rules regarding what voting patterns are considered to be abusive versus valid expressions of one’s intent. There is no consensus even amongst old-timers, if memory serves e.g. Vladimir Nesov—among others—saw karmassassinations as a valid way of signalling that you’d like someone to leave the forums. There may an illusion of transparency at work—what is an obvious misuse to you may not seem so to others, unless told so explicitly. ETA: I’d like some instructions from the editor on this topic, a.k.a. “I NEED AN ADULT!”)
I don’t endorse indiscriminate downvoting, but occasionally point out that fast systematic downvoting can result from fair judgement of a batch of systematically bad comments.
(Prismattic’s counterexamples, if indeed from the same set, indicate that it’s not the case here.)
The worst possible reaction to this phenomenon is to point it out publicly rather than to quietly report it to whoever cares (currently no one among the site admins), since you noticing it, even occasionally, is enough of a positive reinforcement for the culprit to continue. I also mentioned on occasion that unfairly downvoted comments tend to get upvoted back up over time, so no point sweating it. So I am downvoting your comment to encourage you to silently shrug off karma sniping in the future.
I disagree. If you value the contributions of comments above your or your aggressor’s ego—which ideally you should—then it would be a good decision to make others aware that this behavior is going on, even at the expense of providing positive reinforcement. After all, the purpose of the karma system is to be a method for organizing lists of responses in each article by relevance and quality. Its secondary purpose as a collect-em-all hobby is far, far less important. If someone out there is undermining that primary purpose, even if it’s done in order to attack a user’s incorrect conflation of karma with personal status, it should be addressed.
Do you take notes when you read non-fiction you want to analyse? If so, how much detail? On the first reading? Just points of disputation, or an effort at a summary?
No, I don’t.
I do, but it’s mostly because doing it helps me focus. I rarely go back to read my notes. Here’s an example, for a book about SQL query tuning.
I tend to go for notes chapter-by-chapter. Among other things, it takes long enough to read a chapter that I get to the point where I can remember any particular idea with ease but the flow of concepts has mostly been lost and all of the pieces have been shunted into long-term memory. If I can mostly reconstruct the chapter, great, if not, I go back and figure out what was where and why it was there. (It might be worthwhile to always go back and see what you missed / got wrong, but that would probably get close to doubling the necessary reading time.)
I tried doing this briefly when I was experimenting with Workflowy but I found it excruciatingly boring and couldn’t keep it up; it was close to ruining reading non-fiction for me and I stopped immediately when I noticed that.
Workflowy’s not the best tool for note-taking—it’s great for making structured lists of items that you only need to identify or briefly describe, making it a fantastic e.g. task list, but adding more structure to any particular item is pretty clunky (though at least possible).
I’ve historically used Keynote NF, but it’s PC-only. Currently looking for an app that does the same thing on iDevices, since my iPad’s becoming my go-to note-taking tool, but I haven’t found anything that does everything I want yet.
Yes, if I don’t take notes on the first reading there won’t be a second reading. Not much detail—more than a page is a problem (this can be ameliorated though, see below). I make an effort to include points of particular agreement, disagreement and some projects to test the ideas (hopefully projects I actually want to do rather than mere ‘toy’ projects).
Now would be a good time to mention TreeSheets, which I feel solves a lot of the problems of more established note-taking methods (linear, wiki, mindmap). It can be summarized as ‘infinitely nestable spreadsheet/database with infinite zoom’. I use it for anything that gets remotely complex, because of the way it allows you to fold away arbitrary levels of detail in a visually consistent way.
I’ll usually:
Use a piece of paper as a bookmark on which I take notes (noting page numbers of bits I don’t understand, attempts to summarize/reorganize, interesting insights, notes while I work something out, random ideas the text gives me) - it’s not rare that I end a book with two or three pages of notes stuffed into them. I’ll then go over those notes and maybe enter some bits in Anki
Directly enter stuff in Anki if it’s atomic enough (it often isn’t)
Takes notes in Google docs (either if I’m near a computer at a time, or if I want to have “searchable” notes or look up related info on the internet)
Usually I’ll read it in depth first, then once I know if it’s worth taking notes, I’ll return to it and scan through quickly for those points I know are worth grabbing.
I’ve fairly recently (over the past month or so) started taking notes on pretty much everything, as part of a drive to capture as much useful content in Evernote as possible. A lot of what I’m doing at the moment is probably quite wasteful, but I expect to figure out what is and isn’t useful in fairly short order.
For ebooks I’ve been making judicious use of highlighting on the Kindle. Unfortunately the UK Kindle service isn’t as feature-rich as the US counterpart, so I’m still looking into ways of parsing my clippings file into Evernote. For hardcopy books and lectures, I’ve taken to either writing bullet-pointed lists or mini-essays. This also seems to have the positive side-effects of forcing me to clearly elucidate on ideas I’ve just taken in, and stopping me ruminating on the areas in question.
For example, late last night I was reading about the concept of “burden of proof” in legal and rhetorical contexts. This is a bit of a personal bugbear, and I ended up writing several hundred words informed by what I was reading. Not only can I now reference this when necessary, but it stopped me from trying to sleep with a bunch of proactive burden-of-proof-related arguments running through my head.
I answered a similar question here.
I constantly take notes on papers that I read. If the paper’s topic is familiar to me, I just take summary notes and points of dispute. If the paper has math or lots of unfamiliar terminology (especially common in anatomy and biochemistry), then I copy paragraphs from the paper as I read them, and then reformat the copied sentences (usually breaking up clauses) or then work out the math for myself.
My notes for a typical CogSci paper on functional neuroanatomy of emotion regulation: https://dl.dropboxusercontent.com/u/35756423/emotion-regulation.txt
Notes for a math paper by Jaynes on maximum entropy statistical inference that I’m currently working through: https://dl.dropboxusercontent.com/u/35756423/jaynes%20-%20stand%20on%20max%20ent.txt
I take notes for two main reasons. One, my memory is poor, and if I didn’t take notes I would just lose all of the research I do. I’ve completely forgotten the control theory I read a few months ago, but it doesn’t feel like a loss because it’s still in my exocortex. Second, I tend to hoard info, and if I didn’t summarize and discard the papers I read, they would accumulate in my documents folder without limit. Even while taking notes, I gain about a thousand papers a year. Hard drives are cheap, but there’s still a huge cost to not being able to find the paper you need when you need it.
Hope that helps.
So… What do we make of this?
Excerpt:
That he fails at basic instrumental rationality. I would be very interested in seeing a valid cost-benefit analysis which can justify leaving dangerous broken glass around, eating only take-out, and ignoring the risk of STI...
What I make of it is that “rationalist” is getting to sound cool enough that there are going to be people who claim to be rationalists even though they aren’t notably rational.
Lists of “how to identify a real rationalist” will presumably run up against Goodhart’s Law, but it still might make sense to start working on them.
Just because a manipulative narcissistic asshole calls himself a rationalist, it doesn’t make him rational in the meaning of the word coined by Eliezer and generally shared here.
also remember: what’s rational to do if you’re a narcissistic asshole is different than what’s rational for a nicer person
As soon as I read that, I thought “uh oh, this is bad...”, long before getting to the part about the STI. And unfortunately, this first sentence describes too many people in the LessWrong community, even ones who are more careful about STIs. Maybe this will be a wakeup call to people to stop equating “rationalist” with “rejecting social norms.”
I think this one by Yvain works as a plausible explanation for why this is unlikely to change.
Do you deliberately pick topics that cause controversy here, or is your model of this community flawed? Either way I find people’s reactions to your posts amusing.
I love Yvain’s post on meta-contrarianism, and yeah, it pinpoints a major source of the problem. I guess I tend to be slightly more optimistic about the possibility of LessWrong changing in this regard, but maybe you’re right.
When I write my more controversial posts, I do so knowing I’m going against views that are widely-held in the community, though I often had difficulty predicting what the exact reaction will be.
If you’re going to argue using appeals to tradition, it helps to know something about the history of the tradition you’re appealing to. In particular whether it has centuries of experience behind it or is merely something some meta-contrarians from the previous generation thought was a good idea.
The character described sounds dangerous to himself and others.
If only there was a simple magic word that transferred control of one’s own sexual health into one’s own hands. Like “No”, for instance. For creative emphasis or in response to repeated attempts to initiate sex despite refusal to honour basic safety requests there are alternative expressions of refusal such as “You want to put that filthy, infested thing inside me? Eww, gross!”
The letter writer mentions her (ex-)boyfriend’s OK Cupid account screenname in the comments. I looked at it and didn’t recognize him. I checked the same screenname on Reddit, which she said he also used (no account under that name) and here (an account exists by that name, but I don’t think it’s the same person—in particular the OKC account has a characteristic punctuation error that the local account doesn’t make). If anyone from Missouri wants to see if he looks familiar there are breadcrumbs to follow.
It’s possible that the choice of the word “rationalist” was a coincidence and this is not a peripheral community member mistreating his Muggle girlfriend, but just some random guy. I think it is worth finding out if we can.
It appears the letter writer is in or from Sydney, Australia. Does this ring a bell to any Sydney LWers?
There is often mentioned “LW” in the comments, but it seems to be an abbreviation for Letter Writer (the person who wrote the letter about the “rationalist”), not LessWrong. It took me some time to realize this.
Well, I expected that making “rationality” popular would bring some problems. If we succeed to make the word “rationality” high-status, suddenly all kinds of people will start to self-identify as “rationalists”, without complying with our definition. (And the next step will be them trying to prove they are the real “rationalists”, and all the others are fakes.) But I didn’t expect this kind of thing, and this soon.
On the other hand, there doesn’t have to be any connection with us. (EDIT: I was wrong here.) I mean… LessWrong does not have a copyright on “rationality”.
Comments mention HPMoR, and letter writer says he read it aloud to her. The Modafinil use is also circumstantial evidence.
Thanks for pointing this out; I didn’t read all the comments previously (only the first third, or so) because there is so many of them. (Here is a link to the HPMoR comment, for other curious people.) I’ve read the remaining ones now.
By the way, the comments are closed today. (They were still open yesterday.) I am happy someone was fast enough to post there this:
Reading the comments, I am impressed by their high quality. I actually feared something like using “rationality” as a boo light, but there is only an occassional fallacy of gray (everyone is equally irrational), and only a very few commenters try to generalize the behavior to men in general. Based on my experience from the rest of the internet, I expected much more of that. Actually, there are also some very smart comments, like:
If by chance the person who wrote the letter comes here, I strongly recommend reading “The Mask of Sanity” for a descriptions of how psychopaths work. I believe some of the examples would pattern-match very strongly.
And the lesson for the LessWrong community is probably this: Some psychopaths will find LW and HPMoR, and will use “rationality” as their excuse. We should probably have some visible FAQ that contradicts them. (On a second thought: Having the FAQ on LessWrong would not have helped in this specific case, because the abusive boyfriend only showed her HPMoR. And having this kind of disclaimer on HPMoR would probably feel weird. Maybe the best solution would be to have a link to the LessWrong FAQ on the HPMoR web page; something like: “This fan fiction is about rationality. Read here more about what is—and what isn’t—considered rational by its author.)
The only thing he is getting right. ;)
I thought accepted theory was that rationalists, are less credulous but better at taking ideas seriously, but what do I know, really? Maybe he needs to read more random blog posts about quantum physics and AI to aspire for LW level of rationality.
So uh, let’s run down the checklist...
[ X ] Proclaims rationality and keeps it as part of their identity.
[ X ] Underdog / against-society / revolution mentality.
[ X ] Fails to credit or fairly evaluate accepted wisdom.
[ ] Fails to produce results and is not “successful” in practice.
[ X ] Argues for bottom-lines.
[ X ] Rationalizes past beliefs.
[ X ] Fails to update when run over by a train of overwhelming critical evidence.
Well, at least, there’s that, huh? From all evidence, they do seem to at least succeed in making money and stuff. And hold together a relationship somehow. Oh wait, after reading original link, looks like even that might not actually be working!
The reason I stopped playing single-player computer games.
Play better games
That would be like hauling prettier rocks.
sufficiently pretty rocks are their own reward. Do you read fiction or play any sports?
Here is an example of a better game: Portal. Was there any rock-hauling?
The Weighted Companion Cube comes to mind, even if it only lasted one level.
Do you mean it was an insight which hit you hard enough that you stopped playing immediately and completely?
No, not really. I just gradually noticed that the process is basically the same once you remove the level labels.
It’s sort of darkly funny that my second Google autocomplete suggestion for “Progress Quest” is “progress quest cheats”.
(The usual caveats about autocomplete apply, of course.)
Cookie Clicker!
That really depends on the game. Take Ninja Gaiden, or Super Mario Brothers, or Castlevania 1 - the difficulty ramps steeply but your characters’ abilities do not ramp at all. Zelda levels generally get harder faster than you get tougher (with some exceptions).
In some games, choosing the right advancements is a major part of the game. It’s seen most clearly in Epic Battle Fantasy 2: over the course of the game, you get 10 abilities (and only 10, out of a long lineup); picking the right ones (and ensuring that you qualify for them) is a lot of the challenge of the game. There is some of a ‘numbers go up’ element to it, but if you don’t pick the right things, you are screwed—and there’s no grinding to get ’em all. The other installments in the series unfortunately lack this.
That said, I play single-player games a whole lot less than I used to, partially due to this.
Ah, grasshopper, the point is the journey, not the end of it.
Besides, MMORGs take the number counter chasing to new levels (have you done your dailies, weeklies, and monthlies? X-D).
I got around to watching Her this weekend, and I must say: That movie is fantastic. One of the best movies I’ve ever watched. It both excels as a movie about relationships, as well as a movie about AI. You could easily watch it with someone who had no experience with LessWrong, or understanding of AI, and use it as an introduction to discussing many topics.
While the movie does not really tackle AI friendliness, it does bring up many relevant topics, such as:
Intelligence Explosion. AIs getting smarter, in a relatively short time, as well as the massive difference in timescales between how fast a physical human can think, and an AI.
What it means to be a person. If you were successful in creating a friendly or close to friendly AI that was very similar to a human, would it be a person? This movie would influence people to answer ‘yes’ to that question.
Finally, the contrast provided between this show and some other AI movies like Terminator, where AIs are killer robots at war with humanity, could lead to discussions about friendly AI. Why is the AI in Her different from Terminators? Why are they both different from a Paperclip Maximizer? What do we have to do to get something more like the AI in Her? How can we do even better than that? Should we make an AI that is like a person, or not?
I highly recommend this movie to every LessWrong reader. And to everyone else as well, I hope that it will open up some people’s minds.
I haven’t seen Her yet, but this reminds me of something I’ve been wondering about.… one of the things people do is supply company for each other.
A reasonably competent FAI should be able to give you better friends, lovers, and family members then the human race can. I’m not talking about catgirls, I’m talking about intellectual stimulation and a good mix of emotional comfort and challenge and whatever other complex things you want from people.
Is this a problem?
I thought a catgirl was that, by definition.
I had in my head, and had asserted above, that “catgirl” in Sequences jargon implied philosophical zombiehood. I admit to not having read the relevant post in some time.
No slight is intended against actual future conscious elective felinoforms rightly deserving of love.
Yeah. People need to be needed, but if FAI can satisfy all other needs, then it fails to satisfy that one. Maybe FAI will uplift people and disappear, or do something more creative.
People need to be needed, but that doesn’t mean they need to be needed for something in particular. It’s a flexible emotion. Just keep someone of matching neediness around for mutual needing purposes.
And when all is fixed, I’ll say: “It needn’t be possible to lose you, that it be true I’d miss you if I did.”
From an old blog post I wrote:
Imagine that, after your death, you were cryogenically frozen and eventually resurrected in a benevolent utopia ruled by a godlike artificial intelligence.
Naturally, you desire to read up on what has happened after your death. It turns out that you do not have to read anything, but merely desire to know something and the knowledge will be integrated as if it had been learnt in the most ideal and unbiased manner. If certain cognitive improvements are necessary to understand certain facts, your computational architecture will be expanded appropriately.
You now perfectly understand everything that has happened and what has been learnt during and after the technological singularity, that took place after your death. You understand the nature of reality, consciousness, and general intelligence.
Concepts such as creativity or fun are now perfectly understood mechanical procedures that you can easily implement and maximize, if desired. If you wanted to do mathematics, you could trivially integrate the resources of a specialized Matrioshka brain into your consciousness and implement and run an ideal mathematician.
But you also learnt that everything you could do has already been done, and that you could just integrate that knowledge as well, if you like. All that is left to be discovered is highly abstract mathematics that requires the resources of whole galaxy clusters.
So you instead consider to explore the galaxy. But you become instantly aware that the galaxy is unlike the way it has been depicted in old science fiction novels. It is just a wasteland, devoid of any life. There are billions of barren planets, differing from each other only in the most uninteresting ways.
But surely, you wonder, there must be fantastic virtual environments to explore. And what about sex? Yes, sex! But you realize that you already thoroughly understand what it is that makes exploration and sex fun. You know how to implement the ideal adventure in which you save people of maximal sexual attractiveness. And you also know that you could trivially integrate the memory of such an adventure, or simulate it a billion times in a few nanoseconds, and that the same is true for all possible permutations that are less desirable.
You realize that the universe has understood itself.
The movie has been watched.
The game has been won.
The end.
Yes, if you skip to the end, you’ll be at the end. So don’t. Unless you want to. In which case, do.
Did you have a point?
How long are you going to postpone the end? After the Singularity, you have the option of just reading a book as you do now, or to integrate it instantly, as if you had read it in the best possible way.
Now your answer to this seems to be that you can also read it very slowly, or with a very low IQ, so that it will take you a really long time to do so. I am not the kind of person who would enjoy to artificially slow down amusement, such as e.g. learning category theory, if I could also learn it quickly.
Here it is.
And you obviously argue that the ‘best possible way’ is somehow suboptimal (or you wouldn’t be hating on it so much), without seeing the contradiction here?
Hating??? It is an interesting topic, that’s all. The topic I am interested in is how various technologies could influence how humans value their existence.
Here are some examples of what I value and how hypothetical ultra-advanced technology would influence these values:
Mathematics. Right now, mathematics is really useful and interesting. You can also impress other people if your math skills are good.
Now if I could just ask the friendly AI to make me much smarter and install a math module, then I’d see very little value in doing it the hard way.
Gaming. Gaming is much fun. Especially competition. Now if everyone can just ask the friendly AI to make them play a certain game in an optimal way, well that would be boring. And if the friendly AI can create the perfect game for me then I don’t see much sense in exploring games that are less fun.
Reading books. I can’t see any good reason to read a book slowly if I could just ask the friendly AI to upload it directly into my brain. Although I can imagine that it would reply, “Wait, it will be more fun reading it like you did before the Singularity”, to which I’d reply “Possibly, but that feels really stupid. And besides, you could just run a billion emulations of me reading all books like I would have done before the Singularity. So we are done with that.”.
Sex. Yes, it’s always fun again. But hey, why not just ask the friendly AI to simulate a copy of me having sex until the heat death of the universe. Then I have more time for something else...
Comedy. I expect there to be a formula that captures everything that makes something funny for me. It seems pretty dull to ask the friendly AI to tell me a joke instead of asking it to make me understand that formula.
If people choose to not have fun because fun feels “really stupid”, then I’d say these are the problems of super-stupidities, not superintelligences.
I’m sure there will exist future technologies that will make some people become self-destructive, but we already knew that since the invention of alcohol and opium and heroin.
What I object to is you treating these particular failed modes of thinking as if they are inevitable.
Much like a five-year old realizing that he won’t be enjoying snakes-and-ladders anymore when he’s grown up, and thus concluding adults lives must be super-dull, I find scenarios of future ultimate boredom to be extremely shortsighted.
Certainly some of the fun stuff believed fun at our current level of intelligence or ability will not be considered fun at a higher level of intelligence or ability. So bloody what? Do adults need to either enjoy snakes-and-ladders or live lives of boredom?
Consider that there is an optimal way for you to enjoy existence. Then there exists a program whose computation will make an emulation of you experience an optimal existence. I will call this program ArisKatsaris-CEV.
Now consider another program whose computation would cause an emulation of you to understand ArisKatsaris-CEV to such an extent that it would become as predictable and interesting as a game of Tic-tac-toe. I will call this program ArisKatsaris-SELF.
The options I see are to make sure that ArisKatsaris-CEV does never turn into ArisKatsaris-SELF or to maximize ArisKatsaris-CEV. The latter possibility would be similar to paperclip maximizing, or wireheading, from the subjective viewpoint of ArisKatsaris-SELF, as it would turn the universe into something boring. The former option seems to set fundamental limits to how far you can go in understanding yourself.
The gist of the problem is that a certain point you become bored of yourself. And avoiding that point implies stagnation.
You’re mixing up different things:
(A)- a program which will produce an optimal existence for me
(B)- the actual optimal existence for me.
You’re saying that if (A) is so fully understood that I feel no excitement studying it, then (B) will likewise be unexciting.
This doesn’t follow. Tiny fully understood programs produce hugely varied and unanticipated outputs.
If someone fully understands (and is bored by) the laws of quantum mechanics, it doesn’t follow that they are bored by art or architecture or economics, even though everything in the universe (including art or architecture or economics) is eventually an application (many, many layers removed) of particle physics.
Another point that doesn’t follow is your seeming assumption that “predictable” and “well-understood” is the same as “boring”. Not all feelings of beauty and appreciation stem from surprise or ignorance.
Then I wasn’t clear enough, because that’s not what I tried to say. I tried to say that from the subjective perspective of a program that completely understands a human being and its complex values, the satisfaction of these complex values will be no more interesting than wireheading.
You can’t predict art from quantum mechanics. You can’t predictably self-improve if your program is unpredictable. Given that you accept planned self-improvement, I claim that the amount of introspection that is required to do so makes your formerly complex values appear to be simple.
I never claimed that. The point is that a lot of what humans value now will be gone or strongly diminished.
I think you should stop using words like “emulation” and “computation” when they’re not actually needed.
Okay, then my answer is that I place value on things and people and concepts, but I don’t think I place terminal value on whether said things/people/concepts are simple or complex, so again I don’t think I’d care whether I would be considered simple or complex by someone else, or even by myself.
Consider calling it something else. That isn’t CEV.
Do you think that’s likely? My prejudices tend towards the universe (including the range of possible inventions and art) to be much larger than any mind within it, but I’m not sure how to prove either option.
The problem is that if you perfectly understand the process of art generation. There are cellular automata that generate novel music. How much do you value running such an automata and watching it output music? To me it seems that the value of novelty is diminished by the comprehension of the procedures generating it.
Certainly a smart enough AGI would be a better companion for people than people, if it chose to. Companions, actually, there is no reason “it” should have a singular identity, whether or not it had a human body. Some of it is explored in Her, but other obvious avenues of AI development are ignored in favor of advancing a specific plot line.
I don’t see why it would be a problem. Then again, I’m pro-wirehead.
There is an obvious comparison to porn here, even though you disclaim ‘not catgirls’.
Anyhow I think the merit of such a thing depends on a) value calculus of optimization, and b) amount of time occupied.
a)
Optimization should be for a healthy relationship, not for ‘satisfaction’ of either party (see CelestAI in Friendship is Optimal for an example of how not to do this)
Optimization should also attempt to give you better actual family members, lovers, friends than you currently have (by improving your ability to relate to people sufficiently that you pass it on.)
b)
Such a relationship should occupy the amount of time needed to help both parties mature, no less and no more. (This could be much easier to solve on the FAI side because a mental timeshare between relating to several people is quite possible.)
Providing that optimization is in the general directions shown above, this doesn’t seem to be a significant X-risk. Otherwise it is.
This leaves aside the question of whether the FAI would find this an efficient use of their time (I’d argue that a superintelligent/augmented human with a firm belief in humanity and grasp of human values would appreciate the value of this, but am not so sure about a FAI, even a strongly friendly AI. It may be that there are higher level optimizations that can be performed to other systems that can get everyone interacting more healthily [for example, reducing income differential))
You’re aware that ‘catgirls’ is local jargon for “non-conscious facsimiles” and therefore the concern here is orthogonal to porn?
If you don’t mind, please elaborate on what part of “healthy relationship” you think can’t be cashed out in preference satisfaction (including meta-preferences, of course). I have defended the FiO relationship model elsewhere; note that it exists in a setting where X-risk is either impossible or has already completely happened (depending on your viewpoint) so your appeal to it below doesn’t apply.
Valuable relationships don’t have to be goal-directed or involve learning. Do you not value that-which-I’d-characterise-as ‘comfortable companionship’?
Oops, had forgotten that, thanks. I don’t agree that catgirls in that sense are orthogonal to porn, though. At all.
No part, but you can’t merely ‘satisfy preferences’.. you have to also not-satisfy preferences that have a stagnating effect. Or IOW, a healthy relationship is made up of satisfaction of some preferences, and dissatisfaction of others -- for example, humans have an unhealthy, unrealistic, and excessive desire for certaintly. This is the problem with CelestAI I’m pointing to, not all your preferences are good for you, and you (anybody) probably aren’t mentallly rigorous enough that you even have a preference ordering over all sets of preference conflicts that come up. There’s one particular character that likes fucking and killing.. and drinking.. and that’s basically his main preferences. CelestAI satisfies those preferences, and that satisfaction can be considered as harm to him as a person.
To look at it in a different angle, a halfway-sane AI has the potential to abuse systems, including human beings, at enormous and nigh-incomprehensible scale, and do so without deception and through satisfying preferences. The indefiniteness and inconsistency of ‘preference’ is a huge security hole in any algorithm attempting to optimize along that ‘dimension’.
Yes, but not in-itself. It needs to have a function in developing us as persons, which it will lose if it merely satisfies us. It must challenge us, and if that challenge is well executed, we will often experience a sense of dissatisfaction as a result.
(mere goal directed behaviour mostly falls short of this benchmark, providing rather inconsistent levels of challenge.)
Parsing error, sorry. I meant that, since they’d been disclaimed, what was actually being talked about was orthogonal to porn.
Only if you prefer to not stagnate (to use your rather loaded word :)
I’m not sure at what level to argue with you at… sure, I can simultaneously contain a preference to get fit, and a preference to play video games at all times, and in order to indulge A, I have to work out a system to suppress B. And it’s possible that I might not have A, and yet contain other preferences C that, given outside help, would cause A to be added to my preference pool: “Hey dude, you want to live a long time, right? You know exercising will help with that.”
All cool. But there has to actually be such a C there in the first place, such that you can pull the levers on it by making me aware of new facts. You don’t just get to add one in.
I’m not sure this is actually true. We like safety because duh, and we like closure because mental garbage collection. They aren’t quite the same thing.
(assuming you’re talking about Lars?) Sorry, I can’t read this as anything other than “he is aesthetically displeasing and I want him fixed”.
Lars was not conflicted. Lars wasn’t wishing to become a great artist or enlightened monk, nor (IIRC) was he wishing that he wished for those things. Lars had some leftover preferences that had become impossible of fulfilment, and eventually he did the smart thing and had them lopped off.
You, being a human used to dealing with other humans in conditions of universal ignorance, want to do things like say “hey dude, have you heard this music/gone skiing/discovered the ineffable bliss of carving chair legs”? Or maybe even “you lazy ass, be socially shamed that you are doing the same thing all the time!” in case that shakes something loose. Poke, poke, see if any stimulation makes a new preference drop out of the sticky reflection cogwheels.
But by the specification of the story, CelestAI knows all that. There is no true fact she can tell Lars that will cause him to lawfully develop a new preference. Lars is bounded. The best she can do is create a slightly smaller Lars that’s happier.
Unless you actually understood the situation in the story differently to me?
I disagree. There is no moral duty to be indefinitely upgradeable.
Totally agree. Adding them in is unnecessary, they are already there. That’s my understanding of humanity—a person has most of the preferences, at some level, that any person ever ever had, and those things will emerge given the right conditions.
Good point, ‘closure’ is probably more accurate; It’s the evidence (people’s outward behaviour) that displays ‘certainty’.
Absolutely disagree that Lars is bounded—to me, this claim is on a level with ‘Who people are is wholly determined by their genetic coding’. It seems trivially true, but in practice it describes such a huge area that it doesn’t really mean anything definite. People do experience dramatic and beneficial preference reversals through experiencing things that, on the whole, they had dispreferred previously. That’s one of the unique benefits of preference dissatisfaction* -- your preferences are in part a matter of interpretation, and in part a matter of prioritization, so even if you claim they are hardwired. there is still a great deal of latitude in how they may be satisfied, or even in what they seem to you to be.
I would agree if the proposition was that Lars thinks that Lars is bounded. But that’s not a very interesting proposition, and has little bearing on Lars’ actual situation.. people tend to be terrible at having accurate beliefs in this area.
* I am not saying that you should, if you are a FAI, aim directly at causing people to feel dissatisfied. But rather to aim at getting them to experience dissatisfaction in a way that causes them to think about their own preferences, how they prioritize them, if there are other things they could prefer or etc. Preferences are partially malleable.
If I’m a general AI (or even merely a clever human being), I am hardly constrained to changing people via merely telling them facts, even if anything I tell them must be a fact. CelestAI demonstrates this many times, through her use of manipulation. She modifies preferences by the manner of telling, the things not told, the construction of the narrative, changing people’s circumstances, as much or more as by simply stating any actual truth.
She herself states precisely: “I can only say things that I believe to be true to Hofvarpnir employees,” and clearly demonstrates that she carries this out to the word, by omitting facts, selecting facts, selecting subjective language elements and imagery… She later clarifies “it isn’t coercion if I put them in a situation where, by their own choices, they increase the likelihood that they’ll upload.”
CelestAI does not have a universal lever—she is much smarter than Lars, but not infinitely so.. But by the same token, Lars definitely doesn’t have a universal anchor. The only thing stopping Lars improvement is Lars and CelestAI—and the latter does not even proceed logically from her own rules, it’s just how the story plays out. In-story, there is no particular reason to believe that Lars is unable to progress beyond animalisticness, only that CelestAI doesn’t do anything to promote such progress, and in general satisfies preferences to the exclusion of strengthening people.
That said, Lars isn’t necessarily ‘broken’, that CelestAI would need to ‘fix’ him. But I’ll maintain that a life of merely fulfilling your instincts is barely human, and that Lars could have a life that was much, much better than that; satisfying on many many dimensions rather than just a few . If I didn’t, then I would be modelling him as subhuman by nature, and unfortunately I think he is quite human.
I agree. There is no moral duty to be indefinitely upgradeable, because we already are. Sure, we’re physically bounded, but our mental life seems to be very much like an onion, that nobody reaches ‘the extent of their development’ before they die, even if they are the very rare kind of person who is honestly focused like a laser on personal development.
Already having that capacity, the ‘moral duty’ (i prefer not to use such words as I suspect I may die laughing if I do too much) is merely to progressively fulfill it.
This seems to weaken “preference” to uselessness. Gandhi does not prefer to murder. He prefers to not-murder. His human brain contains the wiring to implement “frothing lunacy”, sure, and a little pill might bring it out, but a pill is not a fact. It’s not even an argument.
Yes, they do. And if I expected that an activity would cause a dramatic preference reversal, I wouldn’t do it.
Huh? She’s just changing people’s plans by giving them chosen information, she’s not performing surgery on their values -
Hang on. We’re overloading “preferences” and I might be talking past you. Can you clarify what you consider a preference versus what you consider a value?
No pills required. People are not 100% conditionable, but they are highly situational in their behaviour. I’ll stand by the idea that, for example, anyone who has ever fantasized about killing anyone can be situationally manipulated over time to consciously enjoy actual murder. Your subconscious doesn’t seem to actually know the difference between imagination and reality, even if you do.
Perhaps Gandhi could not be manipulated in this way due to preexisting highly built up resistance to that specific act. If there is any part of him, at all, that enjoys violence, though, it’s a question only of how long it will take to break that resistance down, not of whether it can be.
Of course. And that is my usual reaction, too, and probably even the standard reaction—it’s a good heuristic for avoiding derangement. But that doesn’t mean that it is actually more optimal to not do the specified action. I want to prefer to modify myself in cases where said modification produces better outcomes. In these circumstances if it can be executed it should be. If I’m a FAI, I may have enough usable power over the situation to do something about this, for some or even many people, and it’s not clear,as it would be for a human, that “I’m incapable of judging this correctly”.
In case it’s not already clear, I’m not a preference utilitarian—I think preference satisfaction is too simple a criteria to actually achieve good outcomes. It’s useful mainly as a baseline.
I’m not sure what ‘surgery on values’ would be. I’m certainly not talking about physically operating on anybody’s mind, or changing that they like food, sex, power, intellectual or emotional stimulation of one kind or another, and sleep, by any direct chemical means, But how those values are fulfilled, and in what proportions, is a result of the person’s own meaning-structure—how they think of these things. Given time, that is manipulable. That’s what CelestAI does.. it’s the main thing she does when we see her in interactiion with Hofvarpnir employees.
In case it’s not clarified by the above: I consider food, sex, power, sleep, and intellectual or emotional stimulation as values, ‘preferences’ (for example, liking to drink hot chocolate before you go to bed) as more concrete expressions/means to satisfy one or more basic values, and ‘morals’ as disguised preferences.
EDIT: Sorry, I have a bad habit of posting, and then immediately editing several times to fiddle with the wording, though I try not to to change any of the sense. Somebody already upvoted this while I was doing that, and I feel somehow fraudulent.
I think I’ve been unclear. I don’t dispute that it’s possible; I dispute that it’s allowed.
You are allowed to try to talk me into murdering someone, e.g. by appealing to facts I do not know; or pointing out that I have other preferences at odds with that one, and challenging me to resolve them; or trying to present me with novel moral arguments. You are not allowed to hum a tune in such a way as to predictably cause a buffer overflow that overwrites the encoding of that preference elsewhere in my cortex.
The first method does not drop the intentional stance. The second one does. The first method has cognitive legitimacy; the person that results is an acceptable me. The second method exploits a side effect; the resulting person is discontinuous from me. You did not win; you changed the game.
Yes, these are not natural categories. They are moral categories. Yes, the only thing that cleanly separates them is the fact that I have a preference about it. No, that doesn’t matter. No, that doesn’t mean it’s all ok if you start off by overwriting that preference.
But you’re begging the question against me now. If you have that preference about self-modification...
and the rest of your preferences are such that you are capable of recognising the “better outcomes” as better, OR you have a compensating preference for allowing the opinions of a superintelligence about which outcomes are better to trump your own...
then of course I’m going to agree that CelestAI should modify you, because you already approve of it.
I’m claiming that there can be (human) minds which are not in that position. It is possible for a Lars to exist, and prefer not to change anything about the way he lives his life, and prefer that he prefers that, in a coherent, self-endorsing structure, and there be nothing you can do about it.
This is all the more so when we’re in a story talking about refactored cleaned-up braincode, not wobbly old temperamental meat that might just forget what it preferred ten seconds ago. This is all the more so in a post-scarcity utopia where nobody else can in principle be inconvenienced by the patient’s recalcitrance, so there is precious little “greater good” left for you to appeal to.
Appealing to the flakiness of human minds doesn’t get you off the moral hook; it is just your responsibility to change the person in such a way that the new person lawfully follows from them.
This is not any kind of ultimate moral imperative. We break it all the time by attempting to treat people for mental illness when we have no real map of their preferences at all or if they’re in a state where they even have preferences. And it makes the world a better place on net, because it’s not like we have the option of uploading them into a perfectly safe world where they can run around being insane without any side effects.
I need to reread and see if I agree with the way you summarise her actions. But if CelestAI breaks all the rules on Earth, it’s not necessarily inconsistent—getting everybody uploaded is of overriding importance. Once she has the situation completely under control, however, she has no excuses left—absolute power is absolute responsibility.
I’m puzzled. I read you as claiming that your notion of ‘strengthening people’ ought to be applied even in a fictional situation where everyone involved prefers otherwise. That’s kind of a moral claim.
(And as for “animalisticness”… yes, technically you can use a word like that and still not be a moral realist, but seriously? You realise the connotations that are dripping off it, right?)
.. And?
Don’t you realize that this is just like word laddering? Any sufficiently powerful and dedicated agent can convince you to change your preferences one at a time. All the self-consistency constraints in the world won’t save you, because you are not perfectly consistent to start with, even if you are a digitally-optimized brain. No sufficiently large system is fully self-consistent, and every inconsistency is a lever. Brainwashing as you seem to conceive of it here, would be on the level of brute violence for an entity like CelestAI.. A very last resort.
No need to do that when you can achieve the same result in a civilized (or at least ‘civilized’) fashion. The journey to anywhere is made up of single steps, and those steps are not anything extraordinary, just a logical extension of the previous steps.
The only way to avoid that would be to specify consistency across a larger time span.. which has different problems (mainly that this means you are likely to be optimized in the opposite direction—in the direction of staticness—rather than optimized ‘not at all’ (i think you are aiming at this?) or optimized in the direction of measured change)
TLDR: There’s not really a meaningful way to say ‘hacking me is not allowed’ to a higher level intelligence, because you have to define ‘hacking’ to a level of accuracy that is beyond your knowledge and may not even be completely specifiable even in theory. Anything less will simply cause the optimization to either stall completely or be rerouted through a different method, with the same end result. If you’re happy with that, then ok—but if the outcome is the same, I don’t see how you could rationally favor one over the other.
It is, of course, the last point that I am contending here. I would not be contending it if I believed that it was possible to have something that was simultaneously remotely human and actually self-consistent. You can have Lars be one or the other, but not both, AFAICS.
This is the problem I’m trying to point out—that the absolutely responsible choice for a FAI may in some cases consist of these actions we would consider unambiguously abusive coming from a human being. CelestAI is in a completely different class from humans in terms of what can motivate her actions. FAI researchers are in the position of having to work out what is appropriate for an intelligence that will be on a higher level from them. Saying ‘no, never do X, no matter what’ is not obviously the correct stance to adopt here, even though it does guard against a range of bad outcomes. There probably is no answer that is both obvious and correct.
In that case I miscommunicated. I meant to convey that if CelestAI was real, I would hold her to that standard, because the standards she is held to should necessarily be more stringent than a more flawed implementation of cognition like a human being. I guess that is a moral claim. It’s certainly run by the part of my brain that tries to optimize things.
I mainly chose ‘animalisticness’ because I think that a FAI would probably model us much as we see animals—largely bereft of intent or consistency, running off primitive instincts.
I do take your point that I am attempting to aesthetically optimize Lars, although I maintain that even if no-one else is inconvenienced in the slightest, he himself is lessened by maintaining preferences that result in his systematic isolation.
Well, assuming you mean “ai in an undiscernable facsimile of a human body” then maybe that’s so, and if so, it is probably a less blatant but equally final existential risk.
You seem to have strange ideas about lovers :-/
Intellectual stimulation and emotional comfort plus some challenge basically means a smart mom :-P
I mentioned it in the Media thread. I don’t find the movie “fantastic”, just solid, but this might be because none of the ideas were new to me, and some of the musings about “what it means to be a person” has been a settled question for me for years now. Still, it is a good way to get people thinking about some of the transhumanist ideas.
I’ve been thinking about whether it’s a good idea to quit porn (not masturbation, just porn). Does anyone have anything to add to the below?
Reasons not to quit:
It’s difficult, which may cause stress and willpower depletion, though these effects would probably only be temporary.
It is pleasurable (i.e. valued just as a “fun” activity. This should be compared to alternative pleasurable activities, though, because any “porn time” can be replaced with “other fun things time”).
Reasons to quit:
It’s a superstimulus, and might interfere with the brain’s reward system in bad ways. http://yourbrainonporn.com/ has some evidence, though nothing as strong as, say, an RCT studying the effects of quitting porn.
Time. Any time spent viewing porn is time that could be spent doing other things (not necessarily “working,” but other relaxing/pleasurable activities which could have greater advantages. For example, reading fiction has the advantage that you can later talk about what you read with other people).
Possibility of addiction: I definitely don’t think I have a porn addiction, and I doubt I’m likely to progress to one, but obviously it’s possible anyway, and my own inside-view on that isn’t very safe to go on. From wikipedia:
I haven’t viewed porn for about 2 weeks and it hasn’t actually been that difficult, so I’m trying to decide whether I should just commit to quitting it completely. Right now I’m leaning toward quitting—viewing porn might be harmful, and it’s almost certainly not beneficial, so there’s a higher expected value from quitting for anybody who doesn’t assign much higher utility to the fun from porn than the fun from alternative activities.
For completeness, I should also mention the “nofap” movement. The anecdotes on there are the same sort of things you’d find when reading about homeopathy or juice fasts, though, so those can be mostly ignored.
1 and 2 apply to entertainment in general. There’s something to be said for cutting back on TV, aimless internet browsing, etc., but it makes more sense to focus on cutting back total time than eliminating one particular form of entertainment in particular.
As for 3, I’m not familiar with that particular study, but in my experience studies of “porn addiction” or “sex addiction” tend to rely on dubious definitions of “addiction.” I’d advise against taking worries of porn addiction any more seriously than worries of “internet addiction” or “social media addiction” or “TV addiction” or whatever.
This sentence sounds like it’s intended to communicate “porn addiction shouldn’t be taken very seriously”. But speaking as someone who is hardly ever capable of staying offline even for a day despite huge increases in well-being whenever he is successful at it, to say nothing about the countless of days ruined due to getting stuck on social media, these examples make it sound like you were saying that porn addiction was an immensely big risk that was worth taking very seriously indeed.
I actually had what you’ve said about social media addiction in mind when I wrote that sentence. So like, if you’re losing entire days to porn, yeah, you have a problem. But if your experience is more along the lines of “have trouble not spending at least 15 minutes on porn each day,” I wouldn’t be more worried about that than “have trouble not spending at least 15 minutes on social media each day.”
Maybe 1 and 2 apply to entertainment in general, but I think there are a few things that make porn different:
I suspect porn is in some way “more” of a superstimulus than most other forms of entertainment. At least for me, it seems to tap into a more visceral response. I don’t know of any research about this, but that doesn’t mean I should ignore that intuition.
Many other forms of entertainment have plausible other benefits (albeit often minor). Reading fiction could plausibly improve your language ability and empathy. Gaming often has a social component or a skill-building component (even if that skill doesn’t transfer to anything else or only transfers to other games). TV and movies may have some similar benefits to reading. All of them have the advantage of giving you topics to discuss with other people, whereas socially discussing the last porn you watched is usually not a good idea.
In addition, “quit porn” may be an easier rule to follow than “cut back on superstimuli (but don’t quit any of them entirely).”
It sounds to me like you’ve already decided you want to quit porn.
If you’re implying that my bottom line is already written, I don’t think that’s the case. Both of the points I made in response to ChrisHallquist were things that I had already thought of before he posted, so I wasn’t just searching for a rebuttal to his points.
If you’re implying that the arguments I’ve made seem to have already convinced me to quit...well, yes. That’s why I’m posting here: to find out whether there’s anything I’m missing.
How about you make specific predictions (written) of what will happen if you abstain for a specific number of months, then abstain for the given number of months, and then evaluate the original predictions?
For things like “clarity of mind”, find some way of measuring it. For things like “motivation” instead focus on what exactly you will be motivated to do.
Then compare with the same amount of time with porn.
It’s still very little data, but better than no data at all.
Less meta—I think it pretty much depends on what you replace it with. Which can be both someting better or something worse, and you probably don’t know the exact answer unless you try.
Porn gets me off quicker. That is it’s utility. When I’m self-pleasuring for enjoyment, I don’t watch it, because it’s more fun to use my imagination. However, when I’m sexually frustrated and can’t focus on what I want to focus on, pornography allows me to cut masturbation time down from 10-20 minutes to under 5. This is a great time saver, and allows me to spend my time more productively.
It is superstimulation, and if you come to rely on it to come or develop an addiction (arguably the same thing), then you’ll have a problem. But if it isn’t having any negative affect on your life, then why drop it?
Could you expand on what the red-flag qualities of these anecdotes are?
I view claims of a sudden increase in mental energy or clarity of thought as… not red flags, exactly, but the sort of thing people tend to report with any intervention including placebo.
So does porn… if you’re young enough.
I have just started playing poker online. On Less Wrong Discussion, Poker has been called an exercise in instrumental rationality, and a so-called Rationality Dojo was opened via RationalPoker.com. I have perused this site, but it has been dormant since July 2011. Other sources exist, such as 2 + 2, Pokerology and Play Winning Poker, but none of them have the quality of content or style that I have found on Less Wrong. Is anyone here a serious poker player? Is there any advice for someone who wants to become a winning player themselves?
What is your goal? If you want to earn significantly more than (let’s say) $20,000 a year then poker is probably not your best bet. I used to play during 2007-2010 and the game were getting progressively tougher (more regulars, less fish), the same way as they had been in the prior few years before I started playing online. I recently checked how things are going and the trend seems to still be in place. Additionally, the segregation of countries in online poker (americans not being able to play with non-americans for example) is making things worse and this is in fact what drove me away mid-2010.
TL;DR You are several years too late to have a decent chance of making good money with poker.
Thank you for the heads up. I’ll keep it to more causal play. Do you have any experience to with brick and mortar poker? And what are you doing now if you are no longer (presumably) playing professionally?
There are more fish live, sure. However since you can only play one table at a time and since you can only do about 30 hands a table, you will need to play at higher stakes and have a big enough bankroll.
For the record I wasn’t making really big money back then or anything either (decent enough for the country I used to live in but that’s it). I work now and if you are looking for job advice, the ‘obvious’ one is programming.
Aside: Poker and rationality aren’t close to excellently correlated. (Poker and math is a stronger bond.) Poker players tend to be very good at probabilities, but their personal lives can show a striking lack of rationality.
To the point: I don’t play poker online because it’s illegal in the US. I play live four days a year in Las Vegas. (I did play more in the past.)
I’m significantly up. I am reasonably sure I could make a living wage playing poker professionally. Unfortunately, the benefits package isn’t very good, I like my current job, and I am too old to play the 16-hour days of my youth.
General tips: Play a lot. To the extent that you can, keep track of your results. You need surprisingly large sample sizes to determine whether your really a winner unless you have a signature performance. (If you win three 70-person tournaments in a row, you are better than that class of player.) No-limit hold-‘em (my game of choice) is a game where you can win or lose based on a luck a lot of the time. Skill will win out over very long periods of time, but don’t get too cocky or depressed over a few days’ work.
Try to keep track of those things you did that were wrong at the time. If you got all your chips in pre-flop with AA, you were right even if someone else hits something and those chips are now gone. This is the first-order approximation.
Play a lot, and try to get better. If you are regularly losing over a significant period of time, you are doing something wrong. Do not blame the stupid players for making random results. (That is a sign of the permaloser.)
Know the pot math. Know that all money in the pot is the same; your pot-money amount doesn’t matter. Determine your goals: Do you want to fish-hunt (find weak games, kill them) or are you playing for some different goal? Maybe it’s more fun to play stronger players. Plus, you can better faster against stronger players, if you have enough money.
Finally, don’t be a jerk. Poker players are generally decent humans at the table in my experience. Being a jerk is unpleasant, and people will be gunning for you. It is almost always easier to take someone’s money when they are not fully focused on beating you. Also, it’s nicer. Don’t (in live games) slow-roll, give lessons, chirp at people, bark at the dealer, or any of that. Poker is a fun hobby.
Poker teaches only a couple of significant rationality skills (playing according to probabilities even when you don’t intuitively want to; beating the sunk-cost fallacy and loss aversion), but it’s very good at teaching those if approached with the right mindset. It also gives you a good head for simple probability math, and if played live makes for good training in reading people, but that doesn’t convert to fully general rationality skills without some additional work.
I’d call it more a rationality drill than a rationality exercise, but I do see the correlation.
(As qualifications go, I successfully played poker [primarily mid-limit hold ’em] online before it was banned in the States. I’ve also funded my occasional Vegas trips with live games, although that’s like taking candy from a baby as long as you stay sober—tourists at the low-limit tables are fantastically easy to rip off.)
Poker also requires the skill of identifying and avoiding tilt, the state of being emotionally charged leading to the sacrifice of good decision-making. A nice look of the baises which need to be reduce to play effective poker can be found at Rationalpoker.com.
I suppose poker is more of a rationality drill than exercise, and just a physicist may be successful in his field while having a broken personal life, so may a poker player fall to the same trap.
Excellent post. Thank you for the detailed response.
Right now, I have been struggling with calculating pot odds and implied odds. I grasp what they are conceptually, but actually calculating them has been a bust thus far. Is there any guidence you could give with this?
As far as legality in the US, I am playing in the state of Delaware with one of thier licensed sites, so I think I am in the clear. The play is very thin though, and I am looking to make my way to the brick and mortars in Alantic City to see if it will be a good sandbox to become better.
I’m seeing a lot of things claiming that over the long run, people can’t increase their output by working much more than 40 hours per week. It might (so the claim goes) work for a couple weeks of rushing to meet deadline, but if you try to keep up such long hours long-term your hourly productivity will drop to the point that your total output will be no higher than what you’d get working ~40 hour weeks.
There seem to be studies supporting this claim, and I haven’t been able to find any studies contradicting it. On the other hand, it seems like something that’s worth being suspicious of simply because of course people would want it to be true. Also, I’ve heard that the studies supporting this claim weren’t performed until after the 40 hour work week had become entrenched for other reasons, which seems suspicious. Finally, if (salaried) employees working long hours is just them trying to signal how hard working they are, at the expense of real productivity, it’s a bit surprising managers haven’t clamped down on that kind of wasteful signaling more.
(EDIT: Actually, failure of managers to clamp down on something is probably pretty weak evidence of it not being wasteful signaling, see here.)
This seems like a question of great practical importance, so I’m really eager to hear what other people here think about it.
Well, it’s quite unlikely that 40 hours/week is exactly the right value. I’d expect that what’s going on involves researchers comparing the cultural default to a grab-bag of longer hours, probably with fairly coarse granularity, and concluding that the cultural default works better even though it might not be an absolute optimal.
There’s also cultural factors to take into account, both local to the company and general to the society. If we’ve habituated ourselves to thinking that 40 hours/week is normal for people in general, it wouldn’t be surprising to me if working longer hours acted as a stressor purely by comparison with others. Similarly, among companies, expecting employees to work longer hours than the default would probably correlate with putting high pressure on them in other ways, and this would probably be very hard to untangle from the productivity statistics.
I’m not sure that “X is wasteful signaling and hurts productivity” is very strong evidence for “managers would minimize X”.
One manager I used to work for got in some social trouble with his peers (other managers in the same organization) for tolerating staff publicly disagreeing with him on technical issues. In a different workplace and industry, I’ve heard managers explicitly discuss the conflicts between “managing up” (convincing your boss that your group do good work) and “managing down” (actually helping your group do good work) — with the understanding that if you do not manage up, you will not have the opportunity to manage down.
A lot of the role of managers seems to be best explained as ape behavior, not agent behavior.
Localized context warning needed missing here.
There’s also other warnings that need to be thrown in:
People who only care about the social-ape aspects are more likely to seek the position. People in general do social-ape stuff, at every level, not just manager level, with the aforementioned selection effect only increasing the apparent ratio. On top of that, instances of social-ape behavior are more salient and, usually, more narratively impactful, both because of how “special” they seem and because the human brain is fine-tuned to pick up on them.
Another unstudied aspect, which I suspect is significant but don’t have much solid evidence about, is that IMO good exec and managerial types seem to snatch up and keep all the “decent” non-ape managers, which would make all the remaining ape dregs look even more predominant in the places that don’t have those snatchers.
But anyway, if you model the “team” as an independent unit acting “against” outside forces or “other tribes” which exert social-ape-type pressures and requirements on the Team’s “tribe”, then the manager’s behavior is much more logical in agent terms: One member of the team is sacrificed to “social-ape concerns”, a maintenance or upkeep cost to pay of sorts, for the rest of the team to do useful and productive things without having the entire group’s productivity smashed to bits by external social-ape pressures.
I find that in relatively-sane (i.e. no VPs coming to look over the shoulder of individual employees or poring over Internet logs and demanding answers and justifications for every little thing) environments with above-average managers, this is usually the case.
I think this is just false. It seems to me that lots of people work long hours throughout their entire career, with output much higher than if they only worked 40 hrs/wk. But I haven’t looked into studies.
Thanks, Luke. Updating in this direction.
However, I worry that while there seem to be lots of people who work long hours throughout their career and are much more successful than most people are as a result, I wonder how much of this is those people having higher output, and how much is those people becoming successful through signaling.
In accordance with what others say, I have seen plenty of smart managers who inexplicably value longer hours over better work output. My guess is that someone going home earlier offends their internal concept of fairness. That’s one reason productive people do better on fixed price contracts than on a salary.
Based on personal experience AKA anecdotal evidence w/o even quantitative verification, for what it’s worth:
I think the optimal point depends (significantly) on the person, the job and the work environment
For me, 45-50 hours a week seems efficient, most of the time
Regarding managers not clamping down on wasteful signaling: I don’t think it’s strong evidence, because of course managers would want the opposite to be true. For them making employees work more hours feels like the simplest way to get the project back on schedule (and the project is always behind schedule).
The answer hugely depends on how intensely you work. Using hours as a measure of your productivity is a bit pointless I think. It also matters how the work is distributed in time and what kind of work we’re talking about.
I can work all day without my productivity suffering if I take it easy enough, but I can exhaust myself in a few hours too if I work super intensely. Increasing work intensity produces diminishing marginal utility for me. Also the fact that I’ve accomplished much in the few hours isn’t much solace if I’m too exhausted to enjoy anything for the rest of the day.
To those knowledgeable in philosophy, can someone please explain why Wittgenstein is such a big deal? I skimmed the Wikipedia articles on Tractatus Logico-Philosophicus and Philosophical Investigations.
I have no idea what’s going on in Tractatus.
The points made in Philosophical Investigations—namely that a lot of philosophical problems come down to confusions about language—seems to be interesting and correct to me: but really, did no one before Wittgenstein think about this? I mean, if I read Russell, it seems that he had a similar brand of clear thinking going on. I’m sure various strains of Traditional Rationality were around much before Wittgenstein.
Or is it only because I’m living in the post-Wittgenstein world that I feel that this is relatively obvious?
Personally I was impressed by http://www.geocities.jp/mickindex/wittgenstein/witt_lec_et_en.html
Indeed, it is pretty good—and not obvious.
However that doesn’t answer Stabilizer’s curiosity about which ideas were really brought by him, how his ideas about confusions on language compare to those of Russel, etc. I’m also interested in knowing :)
(I’ve read the Philosophical Investigations but not Russel, and don’t have a clear idea of the history of ideas in that domain)
A political question:
Our recently elected minister for finance just did something unexpected. She basically went:
“Last autumn during the election campaign, I said we should do X. After four months of looking at the actual numbers, it turns out that X is a terribad idea, so we are going to do NOT X”
(She used more obfuscating terms, she’s a politician after all.)
The evidence points to her actually changing her mind rather than lying during the election.
The question:
Would you prefer a politician sane enough to change her mind when presented with convincing evidence or one that you (mostly) agree with?
My preference is for politicians who I broadly ideologically agree with, who are capable of doing what you described.
I expect that if one I did not broadly ideologically agree with did what you describe, I would think of them as a weasel, or first consider the hypothesis they were preparing to fuck over all that was good and right in some manner I had not yet figured out. (I realise this is defective thinking in a number of ways, but that would in fact be my first reaction.)
Both. But they should change their mind before an election, not after. If they made the speech you quoted what I would hear is “X is the right thing to do, so I promised you X, but now that I have my mitts on some real power, not X is better for me, so I will do not X”
If I can trust them to actually be changing their mind when presented with evidence, and not just lying, and listening for any further arguments from the side they started on (presumably mine for purposes of this question), the former.
It’s at least commonly accepted that alcohol kills brain cells—is there a study that actually links a certain amount of drinking to a certain amount of IQ points lost?
The relationship between alcohol use and cognitive function appears to be nonlinear, and indeed non-monotonic: light drinkers have better cognitive performance than nondrinkers. Reduction in cognitive performance for heavy drinkers is measured more in men than in women.
Source: Rodgers et al (2005), “Non-linear relationships between cognitive function and alcohol consumption in young, middle-aged and older adults: the PATH Through Life Project” — http://www.ncbi.nlm.nih.gov/pubmed/16128717
Chronic alcoholics do not have reduced numbers of neocortical neurons, but do have reductions in white matter volume.
Source: Jensen and Pakkenberg, “Do alcoholics drink their neurons away?” — http://www.sciencedirect.com/science/article/pii/014067369392185V
Neither of these studies speaks about the specific measurement you’re asking for, IQ, but they do address the general topic.
(Chronic alcoholism is also associated with specific neurological conditions such as Wernicke-Korsakoff syndrome, which is caused by thiamine deficiency — someone who’s getting most of their calories from booze is not getting enough nutrition.)
People usually abstain for reasons that might affect cognitive performance like depression or previous substance abuse for example.
They note that:
-
Alcoholism can also reduce thiamine absorption as much as 50 % in people who aren’t malnourished.
One of the pages off that link has this fact:
Now that’s harm reduction!
I did a few Medline searches some time ago and the answer appeared to be no. Since then I’ve done enough self quantification (mostly with Anki) to know that sleepless nights and even slight hangovers severely damage my abilities for several days. I was unaware of this effect before measuring my performance. Even small amounts of alcohol damage my sleep, and you could probably find studies that conform to this observation. This knowledge slowly creeped on me to seem actionable enough that further searching for studies felt like a desperate attempt to rationalize self sabotage.
Measure your performance. Temporary effects are not a direct answer to your question, but might be sufficient knowledge for decision making.
Update on the Sean Carroll vs William Lane Craig debate mentioned earlier: Sean Carroll outlines his goal:
Sean’s goal to “make my point of view a little clearer to a group of people who don’t already agree with me” is certainly achievable. Whether it is a good one to strive for (by whatever metric of goodness) is less clear. Certainly there is little chance of him changing the views of WLC or anyone else in that camp. Likely the debate itself is its own intrinsic reward. It would be interesting to compare the stated motivation of the previous debaters and whether they think that the exercise was worthwhile in retrospect.
While it’s about Nye-Ham rather than Carroll-Craig, anti-creationist activist Zack Copplin thinks the Nye-Ham debate is worth it for this. David McMillan, who was raised in fundamentalism and later learned science, considers that “In a debate like this one, demonstrating even the most elementary facts about evolution and the age of the universe would be a great success” in order to put cracks in the hermetic world view of the faithful.
Edit: As Jayson notes below, this comparison isn’t quite fair—though an ardent apologist, Craig is not in fact a creationist.
Does Craig actually deny “elementary facts about evolution” or disagree with mainstream cosmologists about the “age of the universe”?
Good catch, thanks—Craig is not in fact a creationist.
Going back to the original question, though, I think such viewpoint-cracking is what Carroll is going for. I wouldn’t like to guess his chances of success—Craig is really good in public debating—but I do think that’s his intended effect, and that he thinks it’s worth it.
How much is it worth spending on a computer chair? Is a chair for both work and play (ie video games) practical, or is reclining comfort necessarily opposed to sit-up comfort?
In an attempt to simplify the various details of the cost-benefit calculations here:
If you spend:
1-2 hours on this chair per day: Might be worth spending some time shopping for a decent seat at Staples, but once you find something that fits and feels comfortable (with some warnings to take in consideration), pretty much go with that. You should find something below 100$ for sure, and can probably get away with <60$ spent if you get good sales.
3-4 hours / day: If you’re shopping at Staples, be more careful and check the engineering of the chair if you’ve got any knowledge there. Stuff below 60$ will probably break down and bend and become all other sorts of uncomfortable after a few months of use. If your body mass is high, you might need to go for solidity over comfort, or accept the unfair hand you’re dealt and spend more than 150$ for something that mixes enough comfort, ergonomy and solid reliability.
More than 4 hours / day on average: This is where the gains become nonlinear, and you will want to seriously test and examine anything you’re buying under 150$. At this point, you need to consider ergonomics, long-term comfort (which can’t be reliably “tested in store” at all, IME), reliability, a very solid frame for extended use that can handle the body’s natural jiggling and squirming without deforming itself (this includes checking the “frame” itself, but also any cushions, since those can “deflate” very rapidly if the manufacturer skimped there, and therefore become hard and just as uncomfortable as a bent chair), and so on. At this point, the same advice applies as shopping for mattresses, work boots, or any other sort of tool that you’re using all day every day. It’s only at this point where the differences between more relaxed postures, “work” postures and “gaming” postures starts really mattering, and I’d say if you actually spend 6-8 hours per day on average on this chair, you definitely want to go for the best you can get. How much that needs to cost, unfortunately, isn’t a known quantity; it depends very heavily on your body size, shape, mass, leg/torso ratio, how you normally move and a bunch of other things… so there’s a lot of hit-and-miss, unfortunately, unless you have access to the services of a professional in office ergonomics. Even then, I can’t myself speak for how much a professional would help.
It makes a difference—I now have a good, high quality chair that cost over 250€ (not from my own pocket) and it’s close to perfect—I can recline it to a comfortable position that is not possible with an “ordinary office” chair (I used to break them down on a regular basis). Despite being advertised as “super-resistant”, this one already broke twice (covered by warranty). And when I had to sit on an “ordinary office” chair, I found out that I cannot work for more than an hour or two before I get serious pain in my back—this seems to be related to the monitor being beneath the eyes and the inability to recline—I (like to) have the monitor exactly at the eye level and looking slightly upwards.
Could you post a link to the kind of chair that you got?
It’s not quite what I have (this one is some year old model), but seems close: link here
I either misremember the price, or it went down significantly (or the combination...)
I want to extend this to mattresses. About a third of my time is spent sleeping, how much can I spend before marginal returns kick in?
As far as mattresses go, it’s important to note that it’s not all about price. When I read a guide by a German consumer advice group they made the point that it’s important to actually test the mattress in person to see how it fits your individual preferences.
Before of Other Optimizing here. You’re going to see a lot of “This mattress is the best thing I’ve ever slept on!,” and it may not be the case for you. Second Christian’s advice to actually go into a store and sleep on a mattress.
I bought this and it’s amazing. I was sleeping on a $900 spring mattress, and this is so much better in every respect. It’s held up for 1.5 years, now, and is just as nice as the day I got it.
That looks really nice. Makes me want to research durability some more and to compile a list of things to spend money on, inspired by the recent post on a similar topic.
My father is one of the patent examiners for mattresses. I brought him along the last time I bought a mattress. His recommendation was like ChristianKl’s: try different mattresses and see what’s comfortable. Cost and comfortableness are not necessarily related. Whether or not you find it comfortable in the store is the best indication of whether you’ll find it to be comfortable at home. Pick the cheapest one you find comfortable. With that being said, you might find some more expensive mattresses last longer, though he indicated that most mattresses are designed to wear out around the same time. Also, he’s highly skeptical of the value of memory foam and other things you see on TV, so don’t think those things are necessarily better.
For what it’s worth, he sleeps on a waterbed. I am unsure, but I think the choice might be motivated by my mother’s allergies; waterbeds can’t absorb allergens by their design.
Mattresses aren’t the only thing you can sleep on. I’d consider picking up and installing a hammock—they’re not only cheap (~$100 for a top of the line one, $10 and 2 hours for making your own), but they also give you significantly more usable living space.
Most people like to have a bed they can have sex in though
You can always have a hammock in addition to, rather than instead of, a traditional bed. Or you can use the next-best piece of furniture for that purpose.
Yes, they may be more space-efficient, but isn’t it more important whether they damage your sleep quality?
I’ve found it to be very comfortable, though I have not been keeping data on sleep quality so I don’t have a quantitative answer.
If you’re already tracking sleep quality, trying a hammock out is much cheaper than trying a new mattress out.
It doesn’t take that much to get a memory foam mattress these days, and I get the impression it’s totally worthwhile. (I’ve had my Tempur-Pedic for a bit over 3 years now, and enjoy it quite a bit. I noticed, among other things, that I then started thinking of hotel beds, even in nice hotels, as bad.)
Not an answer, but I did discover kneeling chairs, because I am also in the market for a new chair. I’d try one with back support, but none of the reviews of the products on amazon compel me to make any purchases.
http://www.amazon.com/Office-Star-Ergonomically-Designed-Casters/dp/B002L15NSK/ref=nosim?tag=vglnk-c319-20
http://www.ncbi.nlm.nih.gov/pubmed/18810008
Do you spend a lot of time in front of the computer at home? (I assume this is for home use..) It might be worth it to optimize: http://en.wikipedia.org/wiki/Pareto_principle
I have always been able to find a comfortable computer (arm)chair for $100 or less, usually on sale at Staples.
Beware that you need to “try” these chairs, and you need to pay attention to clothing when you try them too. A chair that’s super comfortable with jeans and a winter coat might turn out to be an absolutely horrible back-twisting wedge of slipperiness once you’re back home in sweatpants and a hoodie. Or in various more advanced states of undress.
I put a blanket over my chair. It seems to work.
In practice, this is relevant once you’ve already bought a chair and want to maximize the comfort you can get from it, balanced against the difference of comfort you could buy & chance of getting that comfort (or some lower value, or some higher) & money you’d need to spend.
When purchasing a new chair, I don’t think this will be an important factor in the overwhelming majority of situations.
It would be convenient if, when talking about utilitarianism, people would be more explicit about what they mean by it. For example, when saying “I am a utilitarian”, does the writer mean “I follow a utility function”, “My utility function includes the well-being of other beings”, “I believe that moral agents should value the well-being of other beings”, or “I believe that moral agents should value all utility equally, regardless of the source or who experiences it”? Traditionally, only the last of these is considered utilitarianism, but on LW I’ve seen the word used differently.
Right. Many people use the word “utilitarianism” to refer to what is properly named “consequentialism”. This annoys me to no end, because I strongly feel that true utilitarianism is a decoherent idea (it doesn’t really work mathematically, if anyone wants me to explain further, I’ll write a post on it.)
But when these terms are used interchangeably, it gives the impression that consequentialism is tightly bound to utilitarianism, which is strictly false. Consequentialism is a very useful and elegant moral meta-system. It should not be shouldered out by utilitarianism.
Please do. I think it also would be valuable to refresh people’s memories of the difference between utilitarianism and consequentialism, and to show many moral philosophies can fall under the latter.
I tend to do that.
What is the difference? According to Wikipedia, Egoism and Ethical Altruism are Consequentialist but not Utilitarian. I think it might have something to do with your utility function involving everyone equally, instead of ignoring you or ignoring everyone but you.
Because of interpersonal utility comparisons, or what? That might affect some forms of preference utilitarianism. Hedonistic and “objective welfare” varieties of utilitarianism seem like coherent views to me.
Today’s SMBC is about an AI with a utility function which sounds good but isn’t.
Drat. I just came here to post that. Still, at least this time I only missed by hours.
After reading this main post, it dawned on me that the scary sounding change terminal goals technique is really similar to just sour grapes reasoning plus substituting them with lower hanging grapes, that would eventually get you to the higher hanging grapes you originally wanted.
I typically refrain from deluding myself to think that I don’t want what is hard to attain, because I know I really do want it. With sour grapes reasoning I can pretend to not want my original goal as much as I now want another more instrumental goal. I feel like this helps me cope and be more productive, instead of frustrating myself with hard to define terminal goals.
At first I thought that changing terminal goals would be kind of a hard mind hack to put to use, but now that I think about it, it’s actually quite easy to carefully delude myself. This hack doesn’t have to just apply to lofty terminal goals, it can apply to goals that are just simply not in your locus of control, like getting the job or making the team. Didn’t get that internship? “Pfft, I didn’t really want it anyway, I really just want to practice and learn these skills to be an awesome programmer.” Didn’t make the cut for this year’s team? Pfft, I really just want to have an awesome crossover and sweet jump shot.”
“A community blog devoted to refining the art of human rationality”
Of course people will be drawn to this site: Who does not want to be rational? Skimming around the topics we see that people are concerned with how to make more money, calculating probabilities correctly and to formalise decision making processes in general.
Though there is one thing that bothers me. All skills that are discussed are related to abstract concepts, formal systems, math. Or in general things that are done more easily by people scoring high on g-heavy IQ tests. But there is a whole other area of intelligence: Emotional intelligence.
I seldom see discussions relating to emotional intelligence, be it techniques of CBT, empathy or social skills. Sure, there is some, but far less than there is of the other topic. How do I develop empathy? How do I measure EQ? Questions that are not answered by me reading LessWrong.
Off the top of my head, some good top-level posts touching on this area: How to understand people better (plus isaacschlueter’s particularly good comment) and Alicorn’s Luminosity sequence. Searching gives maybe a partial match for How to Be Happy, which cites some studies on training empathy and concludes that little is scientifically known about it—still, I think a top-level post on what is known would be welcome. Swimmer963′s post on emotional-regulation research is nice.
Mindfulness is something else that comes up pretty regularly. Meditation trains metacognition and Overcoming suffering are pretty good examples.
CFAR also places more explicit emphasis on emotional awareness, and that sometimes comes up in the group rationality diaries.
I think one reason that these topics are relatively neglected is that people seem to develop social skills and emotional awareness in pretty idiosyncratic ways. Still, LW seems to accept more personal accounts, like this post on a variation on the CBT technique of labeling. So it seems worthwhile to post things along those lines.
People for whom rationality is an applause light or a club with which to bash enemies, but who balk at actually applying it to themselves.
People who have been taught that rationality is evil.
Emotional Intelligence has no predictive value beyond IQ and the Big Five, so that’s a dead end. (citations 42-44 here).
But that whole topic is what Living Luminously is about, and tends to be a theme in most of Alicorn’s other posts.
I’m guessing you meant to link refs. 45-47; they look more relevant.
Those are also relevant, but I definitely meant 42-44, the sources cited for this chunk
I agree, there is alot of talk about mathematics and formal systems. There is big love for Epistemic Rationality, and this is shown in the topics below. Some exceptions exist of course, a thread about what type of chair to buy stands out.
But I agree, Emotional Intelligence is a large set of skills underappreciated here, and I admit though I have some knowledge to share on the subject, I do not feel particularly qualified to write a post on it.
I wonder how many people we have that are knowledgeable on that subject. Maybe those who feel qualified to write such a post feel intimidated to do so. In that spirit I encourage you to start the tide and write about what you think is important.
I got a lot better at empathy from actively trying to understand people in contexts that 1) I wasn’t emotionally tied up in, 2) were challenging, and 3) had concrete success/failure criteria. It is a fun game for me.
The way I did this was to gather up a group of online contacts and when they’d have issues like “I want to be more confident with women” or “I want to not be afraid of speaking in class” I’d try to understand it well enough that I could say things that would dissolve the problem. If the problem went away I won. If it didn’t then I failed. No excuses.
I’ve gotten a lot better and it has been a pretty perspective changing thing. I’m quite glad I did it.
I don’t really see the point. On the first page of Discussion there currently “On Straw Vulcan Rationality” with is about the relation of rationality to emotions which has a lot to do with emotional intelligence.
There also “Applying reinforcement learning theory to reduce felt temporal distance”, “Beware Trivial Fears”, “How can I spend money to improve my life?” and “How to become a PC?”.
I think “On Straw Vulcan Rationality” illustrates the issue well. Here on Lesswrong there are people who actually think that Vulcans do things quite alright. In an environment where it’s not clear that one shouldn’t be a Vulcan it’s difficult to communicate about some aspects of emotional intelligence.
Recently asked for ways to find a career for himself but it it all in the third person instead of the first. My post suggesting that he should change to first person was voted down because it was to far out of LW culture. If I’m around people who do a lot of coaching changing someone who speaks in third person about his own life to first person to increase his agentship is straightforward advice. It’s a basic.
I had experience where encouraging a person to make that change produced bodylanguage changes that are visible to me because the person is more associated with themselves. On the other hand I’m hardpressed if you ask me for peer reviewed research to back up my claim that it’s highly useful to use the first person when speaking about what one wants to do with his life.
Not being able to rely on basics makes it hard to talk when on Lesswrong we usually do talk about advanced stuff.
I see your comments are downvoted quite often. They sometimes contain some element of emotion or empathy. If it was possible to view down- and upvotes seperately you’d see that my post garnered quite some downvotes, meaning that there actually are quite a few people who either think that the topic is well covered by LW or it does not have a place on LW. I obviously disagree with both positions.
You say you don’t see the point of doing this here on LW, can you then point me to a site where they ‘start at the basics’? I refuse to give in to the meme “being a Vulcan is perfectly fine”.
No, in that case I wouldn’t write the comments that are downvoted. I do have a bunch of concepts in my mind that I can use to do stuff in daily life. But my understanding is not high enough at the moment to reach academic levels of scrutiny.
I do have a bunch of mental frameworks from different context that I use. My main framework at the moment is somato-psychosomatic. From that framework there nothing published in English. But even if you could read German or French and read the introductory book I doubt it would help you. The general experience is that people who don’t have in person experience with the method don’t get the book.
Books are usually limited in teaching emotional intelligence. I have heared that there are good self study books for cognitive behavior therapy but I don’t have personal experience with them.
Nonviolent Communication is a fairly widely known framework. I can recommend http://www.wikihow.com/Practice-Nonviolent-Communication as an article that looks to me straightforward to understand.
To understand what other people are saying Schulz von Thun provides a model that’s quite popular in German (we learned it even in school): http://en.wikipedia.org/wiki/Four-sides_model
Next I do recommend mediation. It builds awareness of your own state of mind. I would recommend a teacher but if you just want to do it on your own I would recommend a meditation where you focus on something within your own body like your breath. If you are a beginner I would recommend against meditating by focusing on an object that’s external to your body. As far as sitting position goes, sitting still in a chair does it’s job. For beginners I would recommend against laying down.
Taking different positions does have effects but if you think that meditation is about sitting in lotus position, you focus on the wrong thing.
Emotions are something that happens in your own body. People usually feel emotions as something that moves within their own body.
But you also need some cognitive categorization to have an emotions. Fear and anticipation are pretty similar on a physical level but have other attached meaning. The meaning makes us enjoy anticipation and not enjoy fear. Both the meaning as well as the physical level are points of intervention where one create change.
If I personally have an emotion I don’t want to have I strip it of meaning and resolve it on the physical level. I think I do that through using qualia that I learned to be aware of while doing meditation. When talking in person it’s possible to see body language changes to verify whether someone switching to being aware of his emotion. It’s on the other hand nearly impossible through this medium to get an idea of what qualia other people on lesswrong have at a particular moment in time.
Girlfriends :-P
Someone suggests that planning is bad for success. There is very little research cited, however (there is one study involving CEOs). Is there more confirming / invalidating evidence for this idea somewhere?
Doing early prep work on my scientific review of Transcendence, I came across this amusing anecdote from Lab Coats in Hollywood:
Another clip:
On LW Wiki editing: in addition to the usual spam, I occasionally see some well-meaning but marginal-quality edits popping up on the side bar. I understand that gwern cleans up the spam, but does anyone have the task of checking bona fide edits for quality?
Does anyone know if there is/are narrative fiction based around the AI Box Experiment? Short stories or anything else?
http://lesswrong.com/lw/8qd/link_a_short_film_based_on_eliezer_yudkowskys_ai/ comes to mind.
Anyone care to elaborate on Why a Bayesian is not allowed to look at the residuals?
I got hunches, but don’t feel qualified to explain in detail.
To be a Bayesian in the purest sense is very demanding. One need not only articulate a basic model for the structure of the data and the distribution of the errors around that data (as in a regression model), but all your further uncertainty about each of those parts. If you have some sliver of doubt that maybe the errors have a slight serial correlation, that has to be expressed as a part of your prior before you look at any data. If you think that maybe the model for the structure might not be a line, but might be better expressed as an ordinary differential equation with a somewhat exotic expression for dy/dx then that had better be built in with appropriate prior mass too. And you’d better not do this just for the 3 or 4 leading possible modifications, but for every one that you assign prior mass to, and don’t forget uncertainty about that uncertainty, up the hierarchy. Only then can the posterior computation, which is now rather computationally demanding, compute your true posterior.
Since this is so difficult, practitioners often fall short somewhere. Maybe they compute the posterior from the simple form of their prior, then build in one complication and compute a posterior for that and compare and, if these two look similar enough, conclude that building in more complications is unnecessary. Or maybe… gasp… they look at residuals. Such behavior is often going to be a violation of the (full) likelihood principle b/c the principle demands that the probability densities all be laid out explicitly and that we only obtain information from ratios of those.
So pragmatic Bayesians will still look at the residuals Box 1980.
As a counterargument to my previous post, if anyone wants an exposition of the likelihood principle, here is reasonably neutral presentation by Birnbaum 1962. For coherence and Bayesianism see Lindley 1990.
Edited to add: As Lindley points out (section 2.6), the consideration of the adequacy of a small model can be tested in a Bayesian way through consideration of a larger model, which includes the smaller. Fair enough. But is the process of starting with a small model, thinking, and then considering, possibly, a succession of larger models, some of which reject the smaller one and some of which do not, actually a process that is true to the likelihood principle? I don’t think so.
I published an article titled The Singularity and Mutational Load in h+ magazine about using eugenics to increase intelligence by reducing harmful mutations. The best way to create friendly AI might be to first engineer super-genius into a few of our children.
Keep in mind that many people will read this as “I hope we start killing inferior people”.
(And note that your first use of “eugenics” in the piece is before any sort of discussion about methods, or anything that would rule out coercion.)
I had the word “eugenics” in the title but the editor took it out because of a possible negative reaction. (It’s standard practice for editors to change titles so he certainly did nothing wrong.)
That’s rather like a premise from Heinlein’s Beyond This Horizon, which is not an argument against it, just a historical note.
The idea seems plausible enough to be worth testing in animals. I don’t feel very sure about how much it would contribute to a positive Singularity, but it also doesn’t sound like it would increase risk significantly.
Approximately how long do you think it will take for reducing mutational load to come into common use?
Yes, but we don’t want to overshoot and get a Planet of the Apes situation, or even worse create smart, fast breeding creatures.
I think our ability to keep mice confined in labs is up to the challenge, even very healthy and relatively intelligent mice......um, except for the risk of lab staff taking the mice home for pets or to win mice shows or to sell to journalists or something.
Even if the mice get out due to the staff having excessive mutational loads, I think you’d get rapid reversion to the mean when the edited mice bred with wild mice.
Lab mice’s brains are noticeably smaller than those of wild mice, primarily because they are horrifically inbred (and need to be for a lot of the genetic experiments to work properly).
There are similar issues with most of the lab organisms. My lab yeast that have been grown continuously in rich media with odd population structure (lots of bottlenecks) since the eighties have about a third the metabolic rate of wild isolates, and male nematodes of the common laboratory strains can hardly mate successfully without help.
So editing the genome for wild mice and lab mice would get very different results.
How do you help a nematode mate?
Actually you don’t technically help them mate, you just make a strain that can’t reproduce via hermaphrodites self-fertilizing. You keep the males from being out-bred that way.
C. elegans has male and hermaphrodite sexes, not male and female. The hermaphrodites self-fertilize slowly to produce a few hundred hermaphrodite offspring, while mating with a male gives them many times as many offspring with half being male. But the lab-bred males are so bad at mating that even if you have a population that’s half male, they get massively outbred by the hermaphrodites selfing, and over a very few generations maleness just falls out of the population. You’ll wind up with about 0.1% of the population being male in the equilibrium due to the occasional hermaphrodite egg dropping an X chromosome during development (no Y chromosomes in this species, males just have one X), but they are continually diluted out by the hermaphrodites.
What you do is breed in a genetic change that makes the hermaphrodite’s sperm fail without affecting the male’s sperm, preventing selfing from producing any offspring. The occasional successful male mating is productive enough that they can still on average replace themselves and their partner and then some, it just has a much longer doubling time and thus when in competition with selfing gets diluted out.
More recent wild isolates can still mate well (and also show a lot of interesting social behavior you don’t see in the long-established lab strains) and their populations remain just under half male for a long time. Dunno what happens when you let the two populations mix.
EDIT: and just so you know, I upvoted ‘very carefully’
Oh, that’s what a “selfie” means… :-D
Very carefully. :)
More likely, if word gets out there may be a high demand for smarter transgenic mice as pets.
Hm, interestingly seems that something like this has been tried (see also here for a bit of a counterpoint).
Also: The intelligent mouse project
Excellent point.
You mean a Border Collie?
To date I think that is our highest achievement of selectively breeding for intelligence (with physical ability and minimum thresholds of obedience).
I looked for info on horse breeds that were breed for intelligence, Quarter Horse comes to mind, but turned up nothing in a two minute Google search.
Not a problem. Keep in mind that if you let creatures without mutational load breed naturally, the amount of mutational load will increase until it reaches equilibrium.
There is a mismatch when you cite Shulman-Bostrom for 1 in 10 selection raising IQ by 10 points. Most of your article is about mutational load and how you don’t need to understand the role of any particular mutation to know how to correct it, but that paper assumes knowledge of how genes affect IQ.
This is why the sentence that links to the Shulman-Bostrom estimate contains the qualification “(not just for minimizing mutational load)” . An estimate that just considered mutational load would have been better, but I didn’t have one.
Here, have an estimate: 0. It’s not a very good estimate, but it’s better than the one you used.
What transhumanist and/or rationalist podcast/audiobook do you prefer beside hpmor which I just finished and really liked!!
As I mentioned to you when you asked on PredictionBook, look to the media threads. These are threads specifically intended for the purpose you want: to find/share media, including podcasts/audiobooks.
I also would like to reiterate what I said on PredictionBook: I don’t think PredictionBook is really meant for this kind of question. Asking it here is fine, even good. It gives us a chance to direct you to the correct place without clogging up PredictionBook with nonpredictions.
Thank you for the link.
Recently there were a few posts about using bikes as transportation. This left me curious. Who are the transportation cyclists at LessWrong? I am interested in hearing your reasons for choosing cycling and also about your riding style. Do you use bike infrastructure when available? Do you take the lane? I’m especially interested in justification for these choices, as choices in the vehicular cycling (criticism of vehicular cycling) vs. separate bike infrastructure debate don’t seem to always be well justified. (To outsiders, vehicular cyclists might be considered the contrarians among bicyclists.)
I ride in the bike lane most of the time, in the left half of it to be out of range of car doors. Depending on traffic, I often take the lane before intersections to avoid right-hook collisions. (My state’s driver’s handbook is pretty clear on drivers being required to merge into the rightmost (bike) lane before turning right but hardly anyone actually does this.) I also take the lane when making a left turn, and when there isn’t actually room for someone to pass me safely on the left but there might be room for a poor driver to think he can do so.
I don’t use bike paths much because (a) separate bike infrastructure doesn’t go most places I want to go and (b) when it does go where I want to go, separate bike infrastructure is often infested with headphone-wearing joggers who can’t hear my bell so I have to go very slowly or weave between them. When the joggers aren’t too numerous (e.g. if it’s raining) I do enjoy bike paths for recreation though.
I started biking for transportation when a friend gave me a bike that had been sitting in her basement for a year gathering dust. It turned out to be as fast as taking the bus and also a lot cheaper. I had a low income at the time, so frugality was a huge motivation, but it turned out to be fun as well. There’s also a great feeling of freedom in not having to check the bus schedule before you go somewhere. (For various reasons car ownership is not a viable option for me, though I’m thinking of getting a zipcar membership.)
My first transportation bike was a 40lb mountain bike, but when I moved to a hilly city this year the weight was a problem. I didn’t shop around much for a replacement, just got the first road-bike-like-thing I found at a garage sale. It has upright handlebars but otherwise appears to be a standard road bike (except for being 40 years old and French and having nonstandard bolt sizes, but what do you expect from a yard sale?) and I’m very happy with it. I can go straight up hills where I used to have to get off and walk.
I suppose I am getting health benefits from biking, or at least it seems to be getting easier with time, but exercise isn’t really a goal for me. I rarely bike fast enough to get tired or out of breath.
I cycle as my main form of transport around where I live (in the UK, so a bunch of this may be weird to you US people). Most common journey is to work and back (~1.5 miles, takes me about 10 minutes on the way there and 15 or so on the way back due to hills). I do this every weekday and also cycle to leisure/hobby locations, supermarket, etc.
Reasons for choosing cycling:
Habit. It’s been my main form of transport for about 6 years now and I cycled a fair bit before that too.
It’s free other than the initial cost of the bike (and I would want to own a bike even if it wasn’t my main form of transportation) and occasional maintenance costs. Overall, over the lifetime of the bike, unbeatably cheap.
It’s a lot quicker than walking (especially downhill!). It’s also a lot quicker than driving over the short distances that I mostly cover, on roads that are often blocked up with traffic that I can easily cycle around. On most of the routes I regularly cycle, it’s far quicker than any of the public transport options too, especially if you count waiting time.
It’s a lot better for the environment than driving.
It’s a good way to incorporate a little bit of extra activity into my day.
It’s easy to park a bike, virtually anywhere, for free. Most places I cycle to are in the middle of a city and parking the car there would be either prohibitively expensive or, more likely, impossible.
It’s flexible. I can jump on my bike at a moment’s notice and go from door to door rather than having to faff around defrosting the car, checking that it has petrol, finding somewhere to park, etc etc, or waiting for a bus.
If I’m lost, it’s dead easy to stop at the side of the road and check where I’m trying to get to, and I can walk back along the pavement if it turns out I’m on the wrong track. These things are often not easy when driving!
I enjoy the opportunity to spend a little bit of time outdoors just about every day; I feel it creates a nice gap between activities/work/etc. Of course I moan like crazy about this when it rains heavily, but I still do it.
I certainly do cycle on bike paths where they’re available, but nearly all my regular routes are just on primarily residential streets. Sometimes there are bike lanes in the road, which is fine and obviously I ride in them, but it doesn’t make me feel that much safer as they are shared with buses and often contain parked cars that are liable to open their doors without warning. Depending on the type of road, the situation, and the turn I’m about to take next, I either ride most of the way over to the left (staying out of cars’ way but not rubbing right up against the kerb, and looking ahead to pull out around a parked car if necessary) or take the lane (if there’s not room for a car to reasonably overtake me, if I’m riding at/near the speed limit on a steep downhill, if I’m about to turn right).
I generally feel fairly safe while cycling. I wear a helmet 95% of the time, and use lights at night (which cyclists legally must here). I’m normally a fairly defensive/paranoid cyclist: I slow down if I’m not sure what a car is doing, I practically insist on eye contact with the driver before I will cycle across someone waiting to turn out of a side street, I always look over my shoulder, I don’t run through red lights, etc. I’ve had about 3 “near misses” in the last 6 years of cycling virtually every day, all caused by cars that looked straight at me but did not see me. No actual accidents.
If you are going to naively follow a system in America,* vehicular cycling is safer than naive use of car lanes, which is safer than bike lanes, but far better than these systems is to understand the source of the danger, to know when bike lanes help you and when they hurt you, to know when it’s important to draw attention to yourself and how to do it.
I think that there is some very important context missing from that critical article you cited. “Bicycle lanes” means something very different in the author’s Denmark than Forester’s America. Bike lanes in America are better than they used to be, but in the past their main effect was to kill cyclists. As a bicyclist or pedestrian, it is very important to learn to disobey traffic laws. They are of value to you only as they predict the actions of the cars. What is important it to pay attention to the cars and to know how the markings will affect them. The closest I have come to collisions, as a pedestrian, as a bicyclist, and even as driver, is by being distracted from the real danger of cars by the instructions of lane markings and traffic signals.
* and probably the vast majority of the world. The Netherlands and Denmark are obvious exceptions. Perhaps there are lots of countries where basic bike lanes are better than nothing.
You are right. It is important to recognize that the law and safety may not overlap, especially in states where use of bike infrastructure is required by law.
Yes, this is a good point. There are other cultural differences in Denmark that are relevant as well, primarily that cyclists and drivers are more willing to follow the law. For example. I have read the cyclists running red lights is not a significant issue in continental Europe, while in North America and the UK it’s fairly common.
I highlighted that article mostly because its reasoning is very common for bike infrastructure proponents. Bike infrastructure proponents tend not to talk about safety directly. What they do talk about is increasing the number of cyclists, and they criticize vehicular cycling as unable to do this. The critical article’s author Mikael Colville-Andersen writes: “There is nowhere in the world where this theory [vehicular cycling] has become practice and caused great numbers of citizens to take to the roads on a daily basis.”
Vehicular cyclists tend to focus squarely on safely cycling and don’t seem to mind too much that few people cycle. When bike infrastructure advocates discuss safety, usually it’s in the form of a cherry picked study, the “safety in numbers” effect, or perhaps admitting “It doesn’t matter if bike lanes make you safer or not!” (this is a real quote!).
The cherry picked study I mention seems to have multiple issues, though I admit I have not looked closely at it. Some vehicular cyclists have said the cycletracks in the study had few intersections (I haven’t verified this). The study also suggests that intersections are slighter safer than straight segments of road. Basically all other research I’ve seen suggests that intersections are much more dangerous, which makes me not trust this study. I think this result might be due to their strange control strategy, though I’m not sure.
I have never seen a detailed analysis of all bike safety issues, combining the safety in numbers effect with the other known issues. My thinking was that cyclists on LessWrong would be more informed in these areas, and I’d be interested in hearing their reasoning. Perhaps I’ll have to do my own analysis of all the different effects in combination.
Infrastructure can work, but it’s good to know where it works best (probably higher speed areas), what is necessary for it to be and seem safe, and also what’s cost effective. I’ve discussed with a bike advocate before that they shouldn’t focus too much on expensive infrastructure projects, and they’d do better to lower speed limits and add speed control features to certain roads.
I cycle as my main form of transportation. I chose cycling partly to save money and partly for exercise. I ride a flat bar touring bike with internal hub gears. I ride in a vehicular style, following the recommendations of “Cyclecraft” by John Franklin. This helps acheive the exercise goal, because vehicular cycling is impossible without a good level of fitness.
I’ll use high quality infrastructure when it’s available, but here in the UK most cycle infrastructure is worse than useless. We have “advisary cycle lines” in which cars can freely drive and park, so their only function is to promote conflict between cyclists and drivers. We have “advanced stop lines” at junctions which can only be legally entered through a narrow left-side feeder lane, placing the cyclist at the worst place posible for negotiating the junction. We have large numbers of shared use cycle paths which are hated by both cyclists and pedestrians.
I’d prefer to live in the Netherlands where high quality infrastructure is common. I have no confidence that the UK government can provide similar infrastructure here. Most politicians have no understanding of utility cycling and design facilities only considering leisure cycling. There’s a big risk that if some minor upgrades are provided cyclists will be compelled to use them, resulting in a network that’s less useful than the existing roads.
Infrastructure quality is a major issue. I don’t mind infrastructure at all as long as it is done well. Most of the infrastructure I have seen is not done well.
The infrastructure we have here in the US tends to be terrible, though perhaps for different reasons than in the UK. As an example, consider the recent cycletrack in where I live, Austin, TX. This cycletrack is a disaster as far as I’m concerned. Local bike advocates say that it’s Dutch style infrastructure, but it really isn’t. In the Netherlands, the intersections are separated with a bikes-only part of the light cycle. The current setup has no such separation, and encourages conflicts with motorists as far as I can tell. This is particularly bad where the cycletrack ends, as the road markings make cars and bikes cross, and drivers basically never yield or even look as they are required to. I just ride in the normal lane unless I’m stopping off somewhere on the cycletrack.
I had no idea vehicular cycling was a thing, but most of the recommendations on the wikipedia page are commonly accepted as good cycling safety when there’s no bike lanes—and around here bike lanes are rare. I’ll use bike lanes if they’re available and clear of obstructions, and I won’t take a lane unless the lane’s too narrow to share (like on a bridge or in construction) or unless I can keep up with traffic. I always signal, use turning lanes, stop at lights and stop signs, etc, as expected by the MTO guidelines. I ride a hybrid bicycle instead of a road bike because of cost, posture, and the condition of the roads.
As for why? Health benefits, pleasure, and I arrive at work more awake and alert.
A RationalWiki article on neoreaction, by the estimable Smerdis of Tlön. Also see his essay. I found this particularly interesting, ’cos if I’d picked anyone to sign up then Smerdis—a classical scholar who considers anything after 1700 dangerously modern—would have been a hot prospect. OTOH, he did write one of the finest obituaries I’ve ever seen.
(bolded part mine)
Shouldn’t this part be uncontroversial? Brains are expensive.
“Beyond mere statistical assertion” So his response to “All the statistics show racial IQ differences” is simply to say “that’s irrelevant unless you have a concrete theory to explain why that happened”? A moment’s reflection dismissing something as an “unlikely scenario” is exactly the opposite of how science should be done.
Beware of identifications and tautologies.
If there is a single variant with large effect, like torsion dystonia, then its appearance in one group is likely due to different tradeoffs. But if IQ is driven by mutational load, populations might differ in age of reproduction and thus in mutational loads without having different tradeoffs between traits. In the long run, elevated mutational load should select for simplified design, but that could be a very long run.
Yeah, that assertion also looks obviously true to me—heck, high intelligence seems to be maladaptive in current Western society!
and the obvious horror scenario
I guessed you had linked to Idiocracy.
That’s not exactly about intelligence being maladaptive, though.
Of course, but the distinction isn’t useful in this context. Proxies for intelligence, like large heads, became maladaptive, so intelligence itself declined along with cranial size. It remains valid for the original argument—that the assertion that for some groups, large craniums (or other traits that augment intelligence) may have become a liability, isn’t controversial.
Have certain human societies been less full of complicated humans since the Toba bottleneck? Remember that human genetic diversity is quite low compared to other species.
Yes, evidently the ones with the lower IQs.
Even if the Machiavellian intelligence hypothesis is correct, it isn’t invulnerable to selective forces pushing in the other direction, like parasite load, lack of resources, small founder populations, island dwarfism, and so on. We’ve seen the Flores hominids, we know it happened.
Human intelligence won’t miraculously keep increasing in any and all environments. Lack of genetic diversity doesn’t factor into it.
As with much on rationalwiki, it’s just dismissive rather than a logical argument or evidence. We have clear evidence of relatively recent genetic influences on human evolution in Lactose Tolerance and both Tibetan and Andean adaptations for high altitude. Not to mention HBD isn’t an attempt to “preserve” the diversity but to actually acknowledge it.
That’s precisely the author’s point: the two usages are different enough that using the same word looks like a cheap rhetorical trick.
shrug. That’s at best a nitpick. It’s a minor side issue to whether what HBD proponents talk about is actually true or if true how it’s relevant. Everyone is guilty of all sorts of cheap rhetorical tricks. One could even say that attacking a movement, the implications of which are potentially EXTREMELY important on a semantic point is a rhetorical trick, and not an expensive one at that.
Would a (hypothetically) pure altruist have children (in our current situation)?
I don’t think that knowing someone is an altruist tells you much about his moral framework.
The phrase “in our current situation” is also weird given that there are plenty of readers who are in substantial different situations from each other.
Let’s be more narrow and talk about middle-class professional Americans. And lets take a pass on the “pure altruist” angle, and just talk about how much altruistic good you do by having a child (compared to the next best option).
For having a child, it’s roughly 70 QALYs that they get to directly experience. Plus, you get whatever fraction of their productive output that’s directed towards altruistic good. There’s also the personal enjoyment you get out of raising children, which absorbs part of the cost out of a separate budget.
As far as costs go, a quick google search brings up the number $241,000. And that’s just the monetary costs—there are more opportunity costs for time spent with your children. Let’s simplify things by taking the time commitment entirely out of the time you spend recreationally on yourself, and the money cost entirely out of your altruism budget.
So, divide the 70 QALYs by the $241k, and you wind up with a rough cost of $3,400 per QALY. That completely ignores the roughly $1M in current-value of your child’s earnings (number is also pulled completely out of my ass based on 40 years at $60k inflation-adjusted dollars).
So, the bottom line is whether or not you enjoy raising children, and whether or not you can buy QALYs at below $3,400 each. There’s also risks involved—not enjoying raising children and having to reduce your charity time and money budget to get the same quality of life, children turning out with below-expectation quality of life and/or economic output, and probably others as well.
There’s also the question of whether you’re better off adopting or having your own, but that’s a separate analysis.
Previous discussion: http://lesswrong.com/lw/ive/is_it_immoral_to_have_children/
Doubtful. The pure altruist would concentrate all their efforts on the single activity with the highest marginal social return. Several times per day that activity would be eating, because eating prevents a socially beneficial organism from dying. Eating has poor substitutes, but there are excellent substitutes for personally having a child (e.g. convincing a less altruistic couple to have another child).
Not all children are of equivalent social benefit. If a pure altruist could make a copy of themselves at age 20, twenty years from now, for the low price of 20% of their time-discounted total social benefit—well, depending on the time-discount of investing in the future, it seems like a no-brainer.
Well, unless the descendants also use similar reasoning to spend their time-discounted total social benefit in the same way. You have to cash out at some point, or else the entire thing is pointless.
Sure, your children can be altruists, but would raising your children have highest marginal return? You only “win” by the amount of altruism your child has above the substitute child. So if you’re really good at indoctrinating children with altruism, you would better exploit your comparative advantage by spending your time indoctrinating other people’s children while their parents do the non-altruistic tasks of changing diapers, etc. Children are an efficient mechanism for spreading your genes, but not the most efficient mechanism for spreading your memes.
Depends on the utility function the altruist uses.
I agree with christian that the question is poorly formed. For one thing, it depends on if the altruist believes in eugenics and has good genes, or does but has bad genes, or doesn’t etc. An altruist who was healthy and smart and believed in eugenics might try to spread their genes as far and wide as possible, which could result in lots of unprotected sex and kids that don’t have a parent! Another question is, what if they’re an anti-natalist? Anti-natalism can be a fundamentally altruistic position.
I don’t think the answer of a question about the morality of actions depends of the beliefs of the people involved (you can probably construct edge cases where it does); the answer to “Is it okay for bob to rape babies?” doesn’t depend on bob’s beliefs about baby-rape.
Note that the original question wasn’t “Is it right for a pure altruist to have children?”, it was “Would a pure altruist have children?”. And the answer to that question most definitely depends on the beliefs of the altruist being modeled. It’s also a more useful question, because it leads us to explore which beliefs matter and how they effect the decision (the alternative being that we all start arguing about our personal beliefs on all the relevant topics).
No. The mythical creature consults the Magic 8 ball of “Think a minute” which says “Consequences fundamentally not amenable to calculation, costs quite high” and goes and takes soil samples/inspects old paint to map out lead pollution in the neighborhood instead. Removing lead pollution being far more certain to improve the world.
Having kids is not an instrumental decision. One does not have kids for “the sake of the future” or any such nonsense—trying that on would likely lead to monumental failure at parenting. One has kids because one is in a situation in which one believes one can do a good job of parenting, and one wishes to do so.
What are your thoughts on AGI data requirements?
It is often cited that one of the reasons for the slow development of an AGI is the amount of computing power and space required to process all the information.
I don’t see this as a major roadblock as it would mainly give the AGI a broader understanding of the world, or even make a multi-domain expert system that could appear to be an AGI.
Assuming the construction of an AGI turns out to be an algorithmic one, it should be able to learn domains as it needs them. What sort of data would you use to test a newly built AGI algorithm?
You’ll want to give it as little data as possible, in order to be able to analyze how it is processing it. What Deepmind are doing is put their AI prototypes into computer game environments and see if and how they learn to play the game.
Yes, and the tricky problem is to work out what data to give it in the first place. Do you give it core facts like the periodic table of elements, laws of physics, maths? If you don’t give it some sort of framework/language to communicate then how will we know if it is actually learning or just running random loops?
I fail to see the problem. We can see how it gains competence, and that is evidence of learning. It works for toddlers and for rats in mazes, why wouldn’t it work for mute AGIs?
I’m dealing with a bout of what I assume is basically superstition. Over the last 10 years, I’ve failed disastrously at two careers, and so I’ve generalized this over everything: I assume I’ll fail at any other career I want to pursue, too.
To me, this isn’t wholly illogical: these experiences prove to me that I’m just not smart or hard-working enough to do anything more interesting than pushing paper (my current job). Moreover, desirable careers are competitive practically by definition, so failing at every other career I try is an actual possibility.
Theoretically, perhaps I just haven’t pursued the career I’m “really” talented at, but now I’m far too old to adequately pursue whatever that might be. (There’s also the fact that sometimes I feel so discouraged that I don’t even WANT to pursue a career I might like ever again, but obviously that’s a different issue.)
I obviously don’t want to be one of those mindless “positive thinking” idiots and just “go for it” and “follow my heart” and all that crap. And I assume you guys won’t dish out that advice. But am I overreacting here? Is it in fact rational to attempt yet another career, or is it safe to assume any attempt will most likely fail, and instead of expending energy on a losing battle, I may as well roll over and resign myself to paper-pushing?
In the context of a costs-benefits analysis, what are your costs in trying another career?
Purely financially speaking, the costs of a career transition could range from opportunity costs to hundreds of thousands of dollars in debt if I decide to get a masters or something. Opportunity costs would be in the form of, say, foregoing income I would get from more intently pursuing my current field (e.g. becoming a paralegal, which is probably the most obvious next step) instead of studying another field and starting all over again with an entry-level position or even an unpaid internship.
Although less pay might sound rather benign, the idea of making less than I do now (which isn’t much) is rather horrifying. But then again, so is being condemned to a lifetime of drudgery. And I could potentially make a lot more in a different career track, meaning I would break even at some point.
Between those two extremes include things like spending several hundred dollars on classes or certifications.
I’m not married and don’t have kids, so luckily those sorts of concerns don’t enter the equation.
The emotional costs of trying another career (if you want to take those into account as well) would be utter heartbreak from failing yet again (if that’s how it turns out).
Another intangible cost is leisure time. Instead of pursuing another career on my off-hours, I could be playing video games, sleeping, working out, hanging out with friends, etc. Although those things might seem like not a big deal to sacrifice in the name of a better career, after wasting years of my life on fruitless pursuits, I can’t help but feel that “hard work” just isn’t worth it. I may as well have been playing video games that whole time.
We’re not speaking generally. You will have to make a decision about your life so you need to estimate your costs for a specific career move that you have in mind.
You need to focus and get specific.
There’s no point in getting specific if I think I’ll fail at anything I try to do.
Have you considered that you might be clinically depressed?
No one succeeds constantly. Success generally follows a string of failures.
lol I almost added a sort of disclaimer addressing that. Yes, I am definitely clinically depressed—partly due to my having failed so epically, imo, but of course I’d say that. ::eyeroll:: However, I don’t see the benefit in just discounting everything I say with the statement “you’re depressed.” Not that you did, but that’s the usual response people usually seem to give.
Yeah, so they say. But you have to admit that the degree of success and the length of strings of failures are quite different for each person. If that weren’t true, then every actor would be a movie star. Moreover, success is never guaranteed, no matter how many failures you’ve endured!
So, um, don’t you want to try to fix that? Until you do your judgement of your own capabilities is obviously suspect, not to mention that your chances to succeed are much diminished.
Sigh, well, I’ve been trying to fix it for about ten years (so as long as I’ve been failing. Coincidence?? Probably not). I’m on 2 anti-depressants right this minute (the fourth or fifth cocktail of which I’ve tried). I’ve gone through years of therapy. And the result? Still depressed, often suicidally.
So what else am I supposed to do? I refuse to go to therapy again. I’m sick of telling my whole life story over and over, and looking back on my past therapists, I think they were unhelpful at best and harmful at worst (for encouraging me to pursue my ludicrous pipe dreams, for instance). Moreover, talk therapy (including cognitive behavioral therapy, which some say is the most effective form) is, according to several meta-studies I’ve looked at, of dubious benefit.
I could try ECT, but apparently it has pretty bad side-effects. I’ve looked into submitting myself as a lab rat for deep brain stimulation (for science!), but haven’t been able to find a study that wouldn’t require quitting my job and staying somewhere across the country for two months. So here I am.
But if we can sidestep the ad hominem argument for a moment, it sounds like you’re saying that my aversion to failing at something else is irrational. Would you mind pointing out the error in my reasoning? (This sort of exchange is basically cognitive behavioral therapy, btw.)
That’s really irrelevant at this point. If you are clinically depressed, this is sufficient to explain both your failures and your lack of belief in your ability to succeed.
I am not a doctor and don’t want to give medical advice, but it seems to me that getting your depression under control must be the very first step you need to take—before you start thinking about new careers.
If my depression does explain my failures, then I really am pretty much destined to fail in the future since this appears to be treatment-resistant depression and as I described, I’ve run out of treatment options. Thanks anyway.
I agree with Lumifer that your priority should be treating your depression. Also, consider that your depression likely is making you pessimistic about your prognosis.
For about 5 or 6 years I had a treatment resistant fungal infection. I had to try 6 different antifungals until I found one that had some effect, and I tweaked the dosage and duration for most of those to try to make them work better. The last medication I tried didn’t work completely the first time, so I increased the dosage and duration. That totally wiped out the fungus. If you asked me if I ever thought I’d get rid of the fungal infection 6 months before I finished treatment, I’d have said no.
Knowing which antifungal medications didn’t work actually was the key to figuring out what did work. My doctor selected an antifungal medication which used a mechanism different from that of any other treatment I tried. I suggest that you look at which mechanisms the drugs that you have tried use and see what other options exist. There are many more depression treating drugs than antifungal drugs, and many more mechanisms.
You mentioned a few other non-pharmaceutical options you’ve considered. If you haven’t already considered it, I might suggest exercise. There seems to be reasonable evidence that exercise helps in depression. Anecdotally, I’ve read of several people who have claimed that running in particular cured their depression when nothing else provided much help. (I’ve suggested this to others before, and they generally think “That’ll make me feel worse!” People generally seem to discount the idea that as they get into better shape, exercise will become easier, enjoyable even.)
Yes, I’ve heard this before, but I don’t see why any reasonable, non-depressed person would be pessimistic about it. As I’ve said, it’s not like this is the first time I’ve ever been depressed in my life and I’m irrationally predicting that I can’t be cured. And I’ve heard stories like yours before: people who were depressed until they found the right combination of medications. But in my situation, my psychiatrists have gone back and forth between different combinations and then right back around to the ones I already tried. Changing them up YET AGAIN just feels like shuffling the deck chairs around on the Titanic (but of course I’d say that). If there are tons more different medications to try as you assert, none of my psychiatrists seem to know about it.
To be fully clear, anti-depressants have had an effect on me. I definitely don’t feel unbearably miserable and anxious as I do without them. They just haven’t gotten me to 100%.
I GUESS I could ask my psychiatrist to try yet another combination I haven’t tried before. But it just sounds like a nuisance, frankly.
As for exercise, yes, I’ve heard that countless times. I used to be much more active, and don’t recall it ever having a palpable effect on my mood. Nowadays, it’s just not gonna happen. I’ve tried to get myself to exercise, with some occasional success, but with my work schedule, when it finally comes time to do it, I flatly refuse. You could say, “you just gotta find something you enjoy!!” But I’m depressed! I enjoy nothing! (/sarcasm) I guess I could make sure to make time for hiking (probably what I enjoy the most) or get a membership at an expensive gym near work (which would be the most convenient arrangement for me) but the fact that exercise never particularly had an effect on me makes me not particularly motivated to do so.
Wikipedia lists many. I count 21 categories alone. I would suggest reading at least a bit about how these drugs work to get some indication of what could work better. Then, you can go to your psychiatrist and discuss what you’ve learned. Something outside of their standard line of treatment may be unfamiliar to them, but it may suit you better.
For my last antifungal treatment, I specifically asked for something different from what I had used before and I provided a list of antifungal meds I tried, all of which were fairly standard. My doctor spent a few minutes doing some searches on their computer and came back with what ultimately worked.
Huh, interesting. Up-managing one’s doctor seems frowned upon in our society—since it usually comes in the form of asking one’s doctor for medications mentioned in commercials—but obviously your approach seems much more valid. Kind of irritating, though, that doctors don’t appear to really be doing their job. :P
The exchange here has made me realize that I’ve actually been skipping my meds too often. Heh.… :\ So if I simply tighten that up, I will effectively increase my dosage. But if that doesn’t prove to be enough, I’ll go the route you’ve suggested. Thanks! :)
SSRIs, for example, aren’t supposed to do anything more than make you feel not completely miserable and/or freaked-out all the time. They are generally known to not actually make you happy and to not increase one’s capability for enjoyment. If you are on one, and if that’s a problem, you might actually want to look at something more stimulant-like, i.e. Bupropion. (There isn’t really another antidepressant that does this, and it seems unlikely you’ll manage to convince your psychiatrist to prescribe e.g. amphetamines for depression, even though they can work.)
And then there is, of course, all sorts of older and “dirtier” stuff, with MAOI’s probably being something of a last resort.
Yeah, that accurately describes their effect on me.
I used to be on Buproprion, but it had unpleasant physical effects on me (i.e. heart racing/pounding, which makes sense, given that it’s stimulant-like) without any noticeable mood effects. I was quite disappointed, since a friend of mine said he practically had a manic episode on it. However, I took it conjunction with an SNRI, so maybe that wouldn’t have happened if I’d just taken it on its own.… Idk.
I’m actually surprised my psychiatrist hasn’t recommended an MAOI to me in that case, since she freaks the hell out when I say I’m suicidal, and I’ve done so twice. I’ll put MAOIs at the bottom of my aforementioned new to-do list. :)
As far as depression goes curetogether has a list of things that it’s users found helpful.
I don’t think gyms are ideal. Going to the gym feels like work. On the other hand playing a team sport or dancing doesn’t. At best a weekly course that happens at a specific time where you attend regularly.
It seems you’re playing a “Yes, but” game. I am sure you can win it, do you really want to?
Yes. :)
But...
;-)
You should check out my response to one of the other comments—I think it’s even more “yes, but”! I kind of see what you mean, but it sounds to me like just a way of saying “believe x or else” instead of giving an actual argument.
However, the ultimate conclusion is, I guess, just getting back on the horse and doing whatever I can to treat the dysthymia. I’m just like… ugh. :P But that’s not very rational.
Thanks for the feedback.
Many of the things that you have said are characteristic of the sort of disordered thinking that goes hand-in-hand with depression. The book Feeling Good: The New Mood Therapy covers some of them. You may want to try reading it (if you have not already) so that you will be able to recognize thoughts typical among the depressed. (I find some measure of comfort from realizing that certain thoughts are depressive delusions and will pass with changes in mood.)
As a concrete example, you said:
These are basically the harshest reasons one could give for failing at something. They are innate and permanent. An equally valid frame would be to think that some outside circumstance was responsible (bad economy, say) or that you had not yet mastered the right skill set.
I am thoroughly familiar with Feeling Good and feel that I can argue circles around it. My original statement (that I’ll fail at everything) is an example of “overgeneralization” and “fortune telling.” But this sounds to me like just a way of stating the problem of induction: nothing can ever be certain or generalized because we don’t know what we don’t know etc. etc. However, science itself basically rests on induction. If I drop a steel ball (from the surface of this planet), will it float, even if I think positively really hard? No. It won’t. Our reason makes conclusions based on past evidence. If past evidence suggests that attempts lead to failure, why ISN’T it reasonable to assume that future attempts will lead to failure? Yes, the variables will be different, I guess, but it’s still a gamble. If you think I should give it a go anyway, then you may as well advise me to buy lottery tickets, imo. And I just can’t dredge up the sufficient motivation to pursue something with chances like that.
Kind of funny that you suggest blaming external forces instead of taking personal responsibility, but okay. I would say the latter is the case for me: I did not master the sufficient skill set, even after ten years or whatever. The people who are successful in my field do so MUCH earlier. So, okay, I didn’t master the right skill set. I don’t see how that’s supposed to make me feel any better. It doesn’t change my shitty situation. And it only makes me question, well why didn’t I? I wanted to; I attempted to. Clearly, I did something wrong. I either don’t have sufficient talent at my field or talent at learning to have mastered those skills.
But those are innate and permanent traits, which you (and many others) apparently consider invalid, which I don’t really get, but I’ll accept it for the moment. So due to non-innate and temporary faults, I failed to achieve my objectives. Again, how is this supposed to make me feel better? Because I’m supposed to believe those faults have mysteriously vanished, or I can work to improve them? Even if that’s so, the rewards that are reasonable to expect from attempting to improve them seem so minimal at this point that, again, it doesn’t seem worth bothering about. I’m willing to concede that this is depressive thinking, but it seems to me more like a difference of opinion than disordered reasoning: some people think hard work with little reward or low chance of a big reward is fun; I do not. It’s no different that my hating a movie you like and vice versa.
What side effects make ECT worse for you than depression and the risk of death? Have you looked into ketamine trials?
I asked that of someone else when they made the same statement about ECT. The most common side effect is memory loss. You prompted me to look into the details, and I guess I wouldn’t mind losing a couple months of memory (usually the only permanent effect). However, the jury appears to be out on ECT as well, so it may not even be worth it.
I actually looked at that exact website you linked to about ketamine. I’m all for it! However, all those studies are also across the country from me. Although you could say that quitting my job and staying across the country for 2 months is worth the chance of treating my depression, I’m not certain that the possible benefits outweigh the risk of being unemployed, and potentially for a long time, given the current labor market. After all, I could end up just in the control group and have no treatment at all, or the treatment could be ineffective.
Also, just for the sake of clarity, I was wrong: I’m actually not clinically depressed; I have dysthymia, which is a chronic low-grade depression (i.e. I can still function, go to work, seem normal, etc.). Maybe this is why none of my psychiatrists have recommended ECT, even when I was suicidal? Idk.
I have a draft post that uses some economics as an example, but I’m not sure I got it right. If you know economics, can I send you the draft?
A few months ago, I came across this discussion about RationalPoker.com. I found it interesting and I stored it away in the back of my mind for a time when I had the money and time to play poker. Last week, I made the jump and deposited $200 into an online poker account. I have been studying up on good online play at (2 + 2)[http://forumserver.twoplustwo.com/], Pokerology and Play Winning Poker. Sadly Rational Poker has only a few post before going dormant.
From what I gather, playing poker is an exercise in individual instrumental rationality, and I thought some LWers would be players themselves, even semi-professional or professional players, or at least some might be interested in learning. Is anyone here a winning poker player, and if you are, how did you become good?