I’ve been lurking for a very long time, more than six years I think. Lots of sentences come to mind when I think, “Why haven’t I posted anything before?” Here are a few:
“LessWrong forum is just like any other forum.” Well, my sample size is low, but… I don’t care what you tell yourselves; what I observe is people constantly talking past each other. And if, in reading an article or comment, a possible comment comes to mind, hot on its heels is the thought that there isn’t really any point in posting it, because the replies would all be of a type drawn from some annoying archetypes, like (A) “I agree! But I have nothing to add.” (B) “You’re wrong. Let me Proceed to Misrepresent in some way.” (The Misrepresentation is “guaranteed” to be unclear, because it insists on starting on its own terms and not mine.) And if I do start a good-natured chain of comments, suddenly I find myself talking about the other person’s stuff and not the ideas that motivated my original comment, which I probably wouldn’t have commented on for its own sake. And as soon as a comment has one reply, people stop thinking about it as an entity in its own right.
Don’t you dare dismiss these thoughts so quickly!
“It’s been done. Well, mostly.” Eliezer wrote so many good, general posts—where do I even find a topic sufficiently different that I’m not, for the most part, retreading the ground? Posts by other people are just completely different: instead of the post having a constructive logic of its own, references are given. The type of evidence given by the references is of a totally different sort. Instead of being about rationality, such posts seem to be about “random topic 101”? Ok, this isn’t very clear.
So few comments are intelligible. What are these people even arguing about? It’s not clear! How can one comment on this in good faith? Note that the posts you observe are, therefore, more likely to come from people who are not stopped from posting by having unclear or absurd interpretations of parent comments.
Lesswrong seems like it should be a good place. The sequences are a fantastic foundation, but there’s so little else! I’m subjectively pretty sure that E.Y. thinks Lesswrong is failing. Of course one may hope “for the frail shrub to start growing”.
In the hope that some people read (and continue to read) this charitably, let me continue. Consider the following in the welcome thread itself.
“We’d love to know who you are, what you’re doing, what you value, how you came to identify as an aspiring rationalist or how you found us.”
Err, what? Why on Earth should I immediately give you my life story? Or even: do these questions make sense? “What I value?” On a rationalist forum you are expecting the English language to contain short sentences that encode a good meaning of “value”? Yeah, yeah, whatever. Taking a breath . . . Why do you even want to know?
How about just, “you can post who you are, what you’re doing … found us here, if you want.”
I should not have to post such things to get basic community acceptance. And you have no right to be so interested in someone who has, as yet, not contributed to lesswrong at all! Surely THAT is the measure of worth within the Lesswrong site. Questions about things not related to Lesswrong should be “expensive” to ask—at least requiring a personal comment.
Oh, I think I initially found lesswrong via Bostrom via looking at the self-sampling hypothesis? It’s kind of hazy.
I don’t think anyone is suggesting that you “have to post such things to get basic community acceptance”. Only that a thread in which newcomers do so might be a welcoming place, especially for newcomers who for whatever reason find LW intimidating. It seems clear that that isn’t your situation; you are probably not the target audience for the proposal.
(Which doesn’t mean you wouldn’t be welcome in a welcome/newbie thread. Just that you probably wouldn’t get as much out of it as some other people.)
And, er, welcome to Less Wrong :-).
Yeah, hi :-). Well, technically I didn’t say that anyone WAS suggesting it. I like your interpretation much better of course! And there could be people who respond well to the “we’d love to know—” formulation. Apparently I don’t! I tried to give you a vague idea of why I felt that way at least.
Since I’ve got to offer something, try this paragraph:
It seems a little weird to expect a newcomer to adapt to lesswrong by having a special thread, where nothing really unique to lesswrong is mentioned. That other guy before me in the thread seems to have instinctively talked only about lesswrong-related things in his experience. But perhaps you can only expect that to happen with people who ALREADY know something about lesswrong—proper lurkers rather than true newcomers? So maybe there should be something like a newbie-thread-for-one-of-the-core-sequences, where the older members would try to show the newcomer how the words should be read—because we all know that there are people who read one of Eliezer’s posts and immediately proceed to misinterpret something badly without realising? And… that sounds very close to the “questions which are new and interesting to you but could be old and boring for the older members”...
You’ve just been treated to: me working out the kinks I felt in the welcome page. I guess it was already doing what I wanted, and I’m not adding anything really new. Weird.
You know, I actually do have a question. I’ve never felt like I really understand what a utility function is supposed to be doing. Wait, that’s not right. More like, I’ve never felt like I understand how much of the utility function formalism is inevitable, versus how much is a hypothetical model. There are days when I feel it’s being defined implicitly in a way that means you can always talk about things using it, and there are days when I’m worried that it might not be the right definition to use. Does that make sense? Can anyone help me with this (badly-phrased) question?
It seems a little weird to expect a newcomer to adapt to lesswrong by having a special thread, where nothing really unique to lesswrong is mentioned.
I don’t think the point of the special thread is so much to teach people LW-specific things to enable them to participate, as to overcome shyness and intimidation and the like. That’s a problem people have everywhere, and doesn’t call for anything LW-specific (except in so far as the people here are unusual, which they might be). In some cases, a newcomer’s shyness and intimidation might be because they feel they don’t know or understand something, and they could ask about that—but, again, similar things could happen anywhere and any LW-specific-ness would come out of the specific questions people ask.
I’ve never felt like I understand how much of the utility function formalism is inevitable, versus how much is a hypothetical model.
So there’s a theorem that says that under certain circumstances an agent either (behaves exactly as if it) has a utility function and tries to maximize its expected value, or is vulnerable to certain kinds of undesirable outcome. So, e.g., if you’re trying to build an AI that you trust with superhuman power then you might want it to have a utility function.
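For concreteness, the theorem in question is the von Neumann–Morgenstern (vNM) theorem. A rough statement, omitting the fine print:

If a preference relation $\succsim$ over lotteries satisfies completeness, transitivity, continuity, and independence, then there exists a function $u$ such that
$$ L \succsim M \iff \mathbb{E}_{L}[u(x)] \ge \mathbb{E}_{M}[u(x)], $$
and $u$ is unique up to positive affine transformations $u \mapsto a\,u + b$ with $a > 0$.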
But humans certainly don’t behave exactly as if we have utility functions, at least not sensible ones. It’s often easy to get someone to switch between preferring A over B and preferring B over A just by changing the words you use to describe A and B, for instance; and when trying to make difficult decisions, most people don’t do anything much like an expected-utility calculation.
And the vNM theorem, unsurprisingly, makes a bunch of technical assumptions that don’t necessarily apply to real people in the real world—and, further, to get from “if you don’t do X you will run into trouble Y” to “you should do X” you need to know that the adverse consequences of doing X aren’t worse than Y, which for resource-limited agents like us they might be. (Indeed, doing X might be simply impossible for us, or for whatever other agents we’re considering; e.g., if you care about the welfare of people 100 years from now, evaluating your “utility function”’s expectation would require making detailed probabilistic predictions about all the ways the world could be 100 years from now; good luck with that!)
It’s fairly common in these parts to talk as if people have utility functions—to say “my utility function has a large term for such-and-such”, etc. I take that to be shorthand for something more like “in some circumstances, understood from context, my behaviour crudely resembles that of an agent whose utility function has such-and-such properties”. Anyone talking about humans’ utility functions and expecting much more precision than that is probably fooling themselves.
Does that help at all, or am I just telling you things you already understand well?
Thanks for that explanation of utility functions, gjm, and thanks to protostar for asking the question. I’ve been struggling with the same issue, and nothing I’ve read seems to hold up when I try to apply it to a concrete use case.
What do you think about trying to build a utility TABLE for major, point-in-time life decisions, though, like buying a home or choosing a spouse?
P.S. I’d upvote your response to protostar, but I can’t seem to make that happen.
You need, I think, 10 points before you’re allowed to upvote or downvote anything. The intention is to make it a little harder to make fake accounts and upvote your own posts or downvote your enemies’. (Unfortunately it hasn’t made it hard enough to stop the disgruntled user who’s downvoting almost everything I—and a few other people—post, sometimes multiple times with lots of sockpuppet accounts.)
I’m not sure exactly what you mean by a utility table, but here is an example of one LWer doing something a bit like a net utility calculation to decide between two houses.
One piece of advice I’ve seen in several places is that when you have a big and difficult decision to make you should write down your leading options and the major advantages and disadvantages of each. How much further gain (if any) there is from quantifying them and actually doing calculations, I don’t know; my gut says probably usually not much, but my gut is often wrong.
The words “utility function” here are usually used in two quite different senses.
In sense 1 they are specific and refer to the VNM utility function which gjm talked about. However, as he correctly mentioned, “humans certainly don’t behave exactly as if we have utility functions, at least not sensible ones”. Note: specifically VNM utility functions.
In sense 2 these words are non-specific and refer to an abstract concept of whatever you would want and would choose if given the opportunity. For example, if you want to talk about incentives but do not care about what precisely would incentivise an agent, you might abstract his actual desires out of the picture and talk about his utility in general. This utility is not the VNM utility. It’s just a convenient placeholder, a variable whose value we (usually) do not need.
nothing I’ve read seems to hold up when I try to apply it to a concrete use case.
That’s because humans don’t have VNM utility functions and even if they did, you wouldn’t be able to calculate your own on the fly.
trying to build a utility TABLE for major, point-in-time life decisions
What would it look like?
In the home purchase decision use case, I’m currently working with a “utility table” where the columns list serious home purchase candidates, and one column is reserved for my current home as a baseline. (The theory there is that I know what my current home feels like, so I can map abstract attribute scores to a tangible example. Also, if a candidate new home fails to score better overall than my current home, there’s no sense in moving.)
The rows in the utility table list various functions or services that a home with its land might perform and various attributes related to those functions. Examples:
Number of bedrooms (related to family size & home uses like office, library)
Floor plan features (count number of desirable features from list—walk-in closet in MBR, indoor laundry, etc)
Interior steps (related to wheelchair friendliness)
Exterior steps (ditto)
Roof shape (related to leak risk/water damage & mold repair costs, also roof replacement frequency)
Exterior material (related to termite risk/repair costs, earthquake risk)
Size of yard (related to maintenance costs, uses such as entertaining, horses, home business)
Slope, elevation, sun exposure, wind exposure factors
Social location factors (distance to work, distance to favorite grocery store, walkable to public transit, etc)
Wildfire risk zone
FEMA flood risk zone
Number of access/evacuation routes
Price
Square footage
Cost per square foot
Total housing cost per month
Etc
Each of these gets a set of possible values defined, and the possible values are then ranked from 1 to n, where 1 is least desirable and n is most desirable. A rank of 0 is assigned to outright aversive conditions such as being located in a high wildfire risk zone or in a FEMA 100-year flood zone, or being a multi-story home (your rankings will vary). I then normalize the rank scores for each row to a value between zero and 1.
One squirrelly feature of my system is that some of the row score ranks are not predefined but dynamic. By that I mean that the actual base value before scoring—such as the price of the house—is rank ordered across all the columns rather than placed in a standard price interval that is given a rank relative to other price intervals. This means that the ranks assigned to each of the home candidates can change when a new home is added to the table. (And yes, I have to renormalize when this happens, because n increments by 1.)
Then I sum up the scores for each candidate, plus my baseline existing home, and see which one wins.
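Here is a minimal Python sketch of the rank-and-normalize scheme described above. The attribute names and numbers are hypothetical, and letting ties share a rank is my own judgment call, not part of the original description:

# Sketch of the "utility table" scoring: aversive values get rank 0, the
# rest are rank-ordered 1..n across candidates (1 = least desirable),
# normalized by n, then summed per candidate.

def attribute_scores(raw, higher_is_better=True, is_aversive=lambda v: False):
    """Map {home: raw_value} to {home: normalized score in [0, 1]}."""
    rankable = sorted({v for v in raw.values() if not is_aversive(v)},
                      reverse=not higher_is_better)
    rank = {v: i + 1 for i, v in enumerate(rankable)}  # rankable[0] -> rank 1
    n = max(len(rankable), 1)
    return {home: (0.0 if is_aversive(v) else rank[v] / n)
            for home, v in raw.items()}

def total_scores(table, specs):
    """table: {attribute: {home: value}};
    specs: {attribute: (higher_is_better, is_aversive)}."""
    totals = {}
    for attr, raw in table.items():
        hib, avers = specs.get(attr, (True, lambda v: False))
        for home, s in attribute_scores(raw, hib, avers).items():
            totals[home] = totals.get(home, 0.0) + s
    return totals

# Toy example with made-up numbers. Note that adding a candidate re-ranks
# "dynamic" attributes like price, so everything must be recomputed.
table = {
    "bedrooms": {"current": 3, "A": 4, "B": 3},
    "price":    {"current": 600_000, "A": 650_000, "B": 580_000},  # lower is better
    "wildfire": {"current": "low", "A": "high", "B": "low"},       # "high" is aversive
}
specs = {
    "bedrooms": (True,  lambda v: False),
    "price":    (False, lambda v: False),
    "wildfire": (True,  lambda v: v == "high"),
}
print(total_scores(table, specs))  # here B edges out the current home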
It all sounds logical enough, but unfortunately it’s not as easy as it sounds. It’s hard to optimize across all possible home choices in practice, because candidate homes have a basically random arrival rate on the market and they can’t all be compared at once. You can’t even wait to make pairwise comparisons, because—at least in Southern California—any reasonably affordable, acceptable home is likely to be snapped up for cash by an investor or flipper within days of coming on the market unless you make an offer first, right then.
Another problem with the serial arrival rate of candidate homes is that you can get fixated on the first house you see on Zillow or the first house your real estate agent trots out for you to visit. I’ve got hacks outside of the utility table (as I’m calling it) for getting around that tendency, but I want the utility table to work as a tool for preventing fixation as well.
Just trying to create a utility table has helped me tremendously with figuring out what I want and don’t want in a home. That exercise, when combined with looking at real homes, also taught me that most things I thought were absolutes on paper actually were not so when I got to looking at real houses and real yards and real neighborhoods and experiencing what the tradeoffs felt like to occupy. The combination of experience and analysis has been a good tool for updating my perceptions as well as my utility table. Which is why I think this might be a useful tool: it gives me a method of recording past experience for use in making rapid but accurate judgments on subsequent, serially presented, one-time opportunities to make a home purchase.
But I’ve also had a lot of trouble making the scoring work. First I tried to weight each row by how important I considered it to be, but that made things too easy to cheat on.
Then I tried to weight rows by probability of experiencing the lifestyle function or cost or risk involved. For example, I sleep every night and don’t travel much, so a functional bedroom matters basically 99.9% of the time. The risk of wildfire, on the other hand, is lower, but how do I calculate it? This county averages about 8 major fires a year—but what base do I use to convert that to a percentage? Divide 365 days per year into 8 fire events per year, or into 8 times the average duration in days of a fire? Or should I count the number of homes burned per year as a percentage of all homes in the county? These latter statistics are not easily obtained, unlike the count of fires per year. Plus, I plan to live here 30 years, and here has more fires than elsewhere, while sleeping probability is unaffected by location. How do I account for that? And then there’s the fact that only one fire would be catastrophic and possibly life-ending, while one night of bad sleep can be shrugged off. In the end, I couldn’t find any combination of probabilities and costs that was commensurable across all the rows and not subject to cheating.
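To make the base-rate ambiguity concrete, here are the three candidate conversions side by side. The 8 fires/year comes from above; the fire duration and home counts are made-up placeholders, not real county data:

fires_per_year = 8
avg_fire_days = 10            # placeholder, not real data
homes_burned = 500            # placeholder
homes_in_county = 1_000_000   # placeholder

p_fire_events = fires_per_year / 365                   # ~0.022 fires per day
p_days_burning = fires_per_year * avg_fire_days / 365  # ~0.22, days with an active fire
p_home_burned = homes_burned / homes_in_county         # 0.0005 per home per year

# The three "probabilities" span nearly three orders of magnitude, which is
# exactly the commensurability problem described above.
print(p_fire_events, p_days_burning, p_home_burned)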
I also tried giving negative numbers to aversive situations like high wildfire risk and FEMA flood zones, but all that did was make an always crummy but safe house and an always spectacular but occasionally life-threatening house look exactly the same. This just didn’t feel right to me.
So I ended up just taking the total unweighted but normalized rank scores for each house, and supplementing that with a separate count of the negatives. That gives me two ways to score the candidate homes, and if the same house wins on both measures, I consider that an indicator of reliability.
By keeping score on all the homes I seriously consider making an offer on, I think I can make a pretty good serial judgment on the current candidate even if I can’t optimize concurrently. Or so I believe.
Is this a reasonable approach? I doubt there’s anything Bayesian whatsoever about it, but I really don’t care as long as the method is reasonable and doesn’t have any obvious self-deception in it.
What do you mean “cheat”? Presumably you want to buy a house you like, not just the one that checks the most boxes in a spreadsheet.
So I ended up just taking the total unweighted but normalized rank scores for each house, and supplementing that with a separate count of the negatives
That doesn’t look like a reasonable procedure to me. So whether a house has exterior steps gets to be as important as the price? One of the reasons such utility tables have limited utility is precisely the weights. They are hard to specify, but naive approaches like making everything equal-weighted don’t seem to lead to good outcomes.
Effectively you need to figure out the trade-offs involved (e.g. “am I willing to pay $20K more for a bigger yard? How about $40K?”) and equal weights for ranks are rather unhelpful.
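As a sketch of what that could look like (the dollar figures are made-up placeholders for the buyer’s own trade-offs): convert each feature to a willingness-to-pay, so features and price land in the same unit and can be netted against each other.

# Score a candidate as (willingness-to-pay for its features) minus its
# price premium over the baseline home. All figures are hypothetical.
wtp = {
    "extra_bedroom":  30_000,  # "I'd pay this much more for..."
    "big_yard":       40_000,
    "indoor_laundry":  8_000,
}

def net_value(features, price, baseline_price):
    return sum(wtp[f] for f in features) - (price - baseline_price)

# A $650K house with a big yard and indoor laundry, vs. a $600K baseline:
print(net_value({"big_yard", "indoor_laundry"}, 650_000, 600_000))  # -2000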
I agree that making a list of things you need and value in a house is a very useful exercise. But you can’t get to the level of completeness needed to make the whole thing work the way you want it to work. You mention updating this table on the basis of your perceptions and experience, but if your ranks are equal-weighted anyway, what do you update?
With respect to the houses serially appearing before you, a simplified abstraction of this problem has an optimal solution.
Thanks much for the link to the Secretary Problem solution. That will serve perfectly. Even if I don’t know the total number of houses that will be candidates for serious consideration, I do know there’s an average, which is (IIRC) six houses visited before a purchase.
As for cheating … what I mean by that is deluding myself about some aspects of the property I’m looking at so that I believe “this is the one” and make an offer just to stop the emotional turmoil of changing homes and spending a zillion dollars that I don’t happen to possess. “Home sweet home” and “escape the evil debt trap” are memes at war in my head, and I will do things like hallucinate room dimensions that accommodate my furniture rather than admit to myself that an otherwise workable floor plan in a newly gutted and beautifully renovated yet affordable home is too dang small and located in a declining neighborhood. I take a tape measure and grid paper with me to balk the room size cheat. But I also refer to the table, which requires me to check for FEMA flood zone location. This particular candidate home was in a FEMA 100-year flood zone, and the then-undeveloped site had in fact been flooded in 1952. That fact was enough to snap me out of my delusion. At that point the condition of the neighboring homes became salient.
The extent to which self-delusion and trickery are entwined in everyday thought is terribly disheartening, if you want to know the truth.
On weighting my functional criteria based on dollars, the real estate market has worked out a marvelous short circuit for rationality. Houses are no longer assessed for value based on an individual home’s actual functional specifications. The quantity of land matters (so yard size matters to price). Otherwise, overwhelmingly, residential properties are valued for sale and for mortgages based on recent sales of “comparable” homes. “Comparable” means “the same square footage & same number of bedrooms and baths within half a mile of your candidate home.” The two homes can otherwise be completely dissimilar, but will nevertheless be considered “comparable”. No amount of improvements to the house or yard will change the most recent sale price of the other homes in the neighborhood. What this means is that sales prices are just for generic shelter plus the land, where the land is most of the value and neighborhood location is most of the land value. So the price of the home you’re looking at is not really very closely tied to anything you might value about the home. This makes it very difficult to come up with a reasonable market price for, say, an indoor laundry versus laundry facilities in the garage. It’s certainly beyond my meager capacity to calibrate the value of home amenities based on dollars.
I’m told it wasn’t this way in the 1950s, but given the history of land scams in the U.S., which go all the way back to colonial land scams in Virginia, I have my doubts that prices for real estate were ever rational.
But I’ll try to find something for weights. Back to the drawing board. And thanks for your help.
The extent to which self-delusion and trickery are entwined in everyday thought is terribly disheartening, if you want to know the truth.
In some areas that’s not terrible. The thing is, if you’re building a bridge you want that bridge to not fall down and that will or will not happen regardless of your illusions, delusions, and sense of accomplishment. However if you’re picking something to make you happy, this no longer applies. Now your perception matters.
Let’s say you are looking at a house that checks off all the checkboxes, but on an instinctual, irrational level you just hate it. Maybe there’s something about the proportions, maybe there’s some barely noticeable smell, maybe there’s nothing at all you can articulate, but your gut is very clearly telling you NO.
Do not buy this house.
The reverse (your gut is telling you YES) is iffier for reasons you’re well aware of. However my point is still valid—when doing or buying things (at least partially) for the experience they will give you, you need to accommodate your perceptions and self-delusions, if only because they play a role in keeping you happy.
Houses are no longer assessed for value based on an individual home’s actual functional specifications.
Um, not sure about that. See, you can assess anything you want but you still need a buyer. You still need someone to come and say “This is what I will part with all my savings and get into debt for”. No one obligates you to buy a house which is priced “fairly” on comparables but does not satisfy you.
Markets are generally quite good at sorting these things out and the real estate market is not sufficiently screwed up to break this, I think.
Beware! The optimal solution depends a lot on the exact problem statement. The goal in the SP is to maximize the probability that you end up with the best available option, and it assumes you’re perfectly indifferent among all the other outcomes: ending up with the second-best counts for no more than ending up with the worst.
That Wikipedia page discusses one variant, where each candidate has a score chosen uniformly at random between 0 and 1, and all you learn about each candidate is whether it’s the best so far. Your goal is to maximize your score. With that modification, the optimal strategy turns out to be to switch from “observe” to “accept next best-so-far” much sooner than with the original SP—after about sqrt(n) candidates.
Your actual situation when buying a house is quite different from either of these. You might want to hack up a little computer program that simulates a toy version of the house-buying process, and experiment with strategies.
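A toy version of such a simulation might look like this. It assumes uniform random scores and a simple “observe k, then take the next best-so-far” cutoff strategy; real house-hunting violates most of these assumptions:

import random

def run(n, k, trials=20_000):
    """Average score from observing k candidates, then accepting the next
    one that beats everything seen so far (or the last one, if forced)."""
    total = 0.0
    for _ in range(trials):
        scores = [random.random() for _ in range(n)]
        best_seen = max(scores[:k]) if k else 0.0
        total += next((s for s in scores[k:] if s > best_seen), scores[-1])
    return total / trials

n = 20
for k in (0, int(n ** 0.5), int(n / 2.718), n - 1):  # sqrt(n) vs. n/e cutoffs
    print(k, round(run(n, k), 3))
# With average score as the goal, the sqrt(n) cutoff (k=4) should beat the
# classic n/e cutoff (k=7), matching the variant described above.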
And as soon as a comment has one reply, people stop thinking about it as an entity in its own right.
Yeah, I know the feeling. Or when a comment or two below an article drag the whole discussion in a completely different direction. But as you say, it’s “just like any other forum”. How could this be prevented? Replying before reading other comments has a high risk of repeating what someone else said. Having the discipline to read the original comment again and try to see it with fresh eyes is difficult.
Instead of being about rationality, such posts seem to be about “random topic 101”?
There are topics that Eliezer described pretty well. Not saying that useful stuff cannot be added, but the lowest-hanging fruit has probably already been picked. But there are also areas that Eliezer did not describe although he considered them important. Quoting from Go Forth and Create the Art!:
defeating akrasia and coordinating groups (...) And then there’s training, teaching, verification, and becoming a proper experimental science based on that. And if you generalize a bit further, then building the Art could also be taken to include issues like developing better introductory literature, developing better slogans for public relations, establishing common cause with other Enlightenment subtasks, analyzing and addressing the gender imbalance problem...
Some of these things were addressed. There are about a dozen articles on procrastination, and we have the Less Wrong Study Hall. CFAR is working on the rationality curriculum, although I would like to see much more visible output.
I think we are quite weak at developing the introductory literature, and public relations in general. I don’t feel we have much to offer to a mildly interested outsider to make them more interested. A link to the Sequences e-book and… what is the next step? Telling them to come here and procrastinate reading our debates? I don’t know myself what the next step is, other than “invent your own project, possibly in cooperation with other people you found through LW”.
I feel that a fully mature rationalist community would offer the newbie rationalists some more guidance. So here is the opportunity for those who want to see the community grow: to find out what kind of guidance it would be, and to provide it. Are we going to find smart people and teach them math? Teach existing scientists how to understand and use p-values properly? Or organize Procrastinators Anonymous meetups? Make a website debunking frequent irrational claims? Support startups in exchange for a pledge to donate to MIRI?
Why on Earth should I immediately give you my life story?
I’m pretty sure that’s supposed to be a conversational starter. Feel free to keep any secrets you want.