I seriously doubt that most people who make up jokes or stereotypes truly have enough data on hand to reasonably support even a generalization of this nature.
Groupthink is as powerful as ever. Why is that? I’ll tell you. It’s because the world is run by extraverts.
The problem with extraverts… is a lack of imagination.
Pretty much everything that is organized is organized by extraverts, which in turn is their justification for ruling the world.
This seems to be largely an article about how we Greens are so much better than those Blues rather than offering much that is useful.
I don’t have the answer but would be extremely interested in knowing it.
(Sorry this comment isn’t more helpful. I am trying to get better at publicly acknowledging when I don’t know an answer to a useful question in the hopes that this will reduce the sting of it.)
A potential practical worry for this argument: it is unlikely that any such technology will grant just enough for one dose for each person and no more, ever. Most resources are better collected, refined, processed, and utilized when you have groups. Moreover, existential risks tend to increase as the population decreases: a species with only 10 members is more likely to die out than a species with 10 million, ceteris paribus. The pill might extend your life, but if you have an accident, you probably need other people around.
There might be some ideal number here, but offhand I have no way of calculating it. Might be 30 people, might be 30 billion. But it seems like risk issues alone would make you not want to be the only person: we’re social apes, after all. We get along better when there are others.
Where is the incentive for them to consider the public interest, save for insofar as it is the same as the company interest?
It sounds like you think there is a problem: that executives being ruthless is not necessarily beneficial for society as a whole. But I don’t think that’s the root problem. Even if you got rid of all of the ruthless executives and replaced them with competitive-yet-conscientious executives, the pressures that create and nurture ruthless executives would still be in place. There are ruthless executives because the environment favors them in many circumstances.
Edited. Thanks.
Your title asks a different question than your post: “useful” vs. being a “social virtue.”
Consider two companies: A and B. Each has the option to pursue some plan X, or its alternative Y. X is more ruthless than Y (X may involve laying off a large portion of the workforce, running a misinformation campaign, or using aggressive and unethical sales tactics), but X also stands to be more profitable than Y.
If the decision of which plan to pursue falls to a ruthless individual in company A, company A will likely pursue X. If the decision falls to a “highly competitive, compassionate, with restrictive sense of fair play” individual in company B, B may pursue Y instead. If B does not pursue Y, it is likely because it noted the comparative advantage A would gain by pursuing X. In that case, it is still in B’s interest to act ruthlessly, making ruthlessness useful.
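To make that comparative-advantage point concrete, here is a minimal sketch of the scenario as a payoff matrix. Every number is an invented placeholder; the only assumption is the ordering argued above, namely that X stands to be more profitable whatever the rival does.

```python
# Toy payoff matrix for the A/B example. All numbers are invented for
# illustration; only their ordering matters.
# payoffs[(a_plan, b_plan)] = (payoff to A, payoff to B)
payoffs = {
    ("X", "X"): (3, 3),  # both ruthless: moderate profits for each
    ("X", "Y"): (5, 1),  # A ruthless, B restrained: A dominates
    ("Y", "X"): (1, 5),  # B ruthless, A restrained: B dominates
    ("Y", "Y"): (4, 4),  # both restrained: good, but not maximal
}

def b_best_response(a_plan):
    """B's payoff-maximizing plan, given A's plan."""
    return max(["X", "Y"], key=lambda b_plan: payoffs[(a_plan, b_plan)][1])

for a_plan in ["X", "Y"]:
    print(f"If A pursues {a_plan}, B's best response is {b_best_response(a_plan)}")
# With these numbers, X is B's best response either way: once a rival
# may act ruthlessly, ruthlessness is useful whether or not it is virtuous.
```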
Now, is it a virtue? Well, for a particular company it is useful: it allows the pursuit of plans that would otherwise not be followed. Does the greater society benefit from it? Society gains whatever benefit comes from businesses pursuing such plans, at the cost of whatever those plans cost. But it is a useful enough character trait for one company’s executives that it grants a competitive advantage over companies where the trait is absent. Thus it is an advantage (and perhaps a virtue; I am not sure how that word cashes out here) for each company. Companies without ruthless executives may fail to act, or fail to act quickly, where a ruthless executive wouldn’t hesitate. So in situations where ruthless tactics allow one to win, ruthless individuals are an asset.
I’m not sure what more can be said on this, as I don’t have a good way of cashing out the word ‘social virtue’ here or of pinning down what practical question you are asking.
You say that this is why you are not worried about the singularity: organizations are supra-human intelligences that seek to self-modify and become smarter.
So is your claim that you are not worried about unfriendly organizations? Because on the face of it, there is good reason to worry about organizations with values that are unfriendly toward human values.
Now, I don’t think organizations are as dangerous as a UFAI would be, because most organizations cannot modify their own intelligence very well. For now they are stuck with (mostly) humans for hardware, and when they attempt to rely heavily on the algorithms we do have, it doesn’t always work out well for them. This seems more a statement about our current algorithms than about the potential for such algorithms, however.
However, a lot of energy goes, on various fronts, into hindering organizations whose motivations lead to threats, and because these organizations are reliant on humans for hardware, only a small number of existential threats have been produced by them. It can be argued that one of the best reasons to develop FAI is to undo these threats and to stop organizations from creating new threats of this kind in the future. So I am not sure that it follows from your position that we should not be worried about the singularity.
Definitely. These are the sorts of things that would need to be evaluated if my very rough sketch were to be turned into an actual theory of values.
Well, effectiveness and desire are two different things.
That aside, you could be posting out of desires that are not status-related and still desire status. Human beings are certainly capable of wanting more than one thing at a time. So even if this post was motivated by some non-status-related desire, that fact would not, in and of itself, be evidence that you don’t desire status.
I’m not actually suggesting you update for you: you have a great deal more access to the information present inside your head than I do. I don’t even have an evidence-based argument: merely a parsimony-based one, which is weak at best. I wouldn’t think of suggesting it unless I had some broader evidence that people who claim “I don’t desire status” really do. I have no such evidence.
The original post was about why the argument “This post is evidence that I do not seek status” is unconvincing. I was merely pointing out that even if we use your version of E, it isn’t very good evidence for H. (Barring some data to change that, of course.)
Eh… but people like rock stars even though most people are NOT rock stars. People like people with really good looks even though most people don’t have good looks. And most people do have some sort of halo effect on wealthy people they actually meet, if not “the 1%” as a class.
I am not sure that a person who has no desire for status will write a post about how they have no desire for status that much more often than someone who does desire status. Particularly if this “desire” can be stronger or weaker. So it could be:
A- The person really doesn’t seek status and wants to express this fact for a non-status reason.
B- The person does seek status but doesn’t self-identify as someone who seeks status, and wants to express this fact for a non-status reason.
C- The person does seek status but doesn’t self-identify as someone who seeks status, and wants to express that they do not seek status on the gamble that being seen as a person who does not want status will heighten their status.
D- The person does seek status and is gambling that being seen as a person who does not want status will heighten their status.
A has the advantage of simplicity, but that advantage is roughly on par with D’s. B and C are more complicated, but not that much more as far as human psychology seems to run. And the set of all “status seekers” who would write such a post is {B, C, D}, and I’d say that the probability of that set is higher than the probability of A.
So all things being equal, I’d say that P(E|~H)>P(E|H). Which may still not lead to the right answer here. Now, if saying “I don’t seek status” was definitely a status losing behavior, I’d say that would shift things drastically as it would render {B, C, D} as improbable on more than bare simplicity. But I really don’t have a good evaluation for that, so I’d have to run on just the simplicity alone.
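A toy Bayes calculation makes the structure explicit. Every probability below is an invented placeholder, chosen only to respect the orderings above (D roughly on par with A, B and C individually less likely, {B, C, D} jointly more likely than A):

```python
# Invented placeholder probabilities; not measured from any data.
# E  = "writes a post saying they do not seek status"
# H  = "genuinely does not seek status" (pathway A)
# ~H = "does seek status" (pathways B, C, D, mutually exclusive)
p_E_given_H = 0.05                    # pathway A
p_B, p_C, p_D = 0.02, 0.02, 0.05      # D on par with A; B, C less likely
p_E_given_not_H = p_B + p_C + p_D     # 0.09

prior_H = 0.5                         # agnostic prior
p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
posterior_H = p_E_given_H * prior_H / p_E
print(f"P(H|E) = {posterior_H:.2f}")  # ~0.36: on these numbers, the
                                      # post is weak evidence AGAINST H
```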
I upvoted it because the minimum we’d get without running a study would be anecdotal evidence.
I’m not sure that there is a close link between “status” and “behaving.” Most of the kids I knew who I would call “status-seeking” were not particularly well behaved: often the opposite. Most of the things you are talking about seem to fall into “good behavior” rather than “status.”
Additionally… well, we’d probably need to track a whole lot of factors to figure out which ones, based on your environment, would be selected for. And currently, I have no theory as to which timeframes would be the most important to look at, which would make such a search more difficult.
I wouldn’t say it has no bearing. If C. elegans could NOT be uploaded in a way that preserved behaviors/memories, you would assign a high probability to human brains not being able to be uploaded. So:
If P(C. elegans is not uploadable) goes up, then P(humans are not uploadable) goes WAY up.
Of course, this commits us to the converse. And since the converse is what happened, we would say that the result does raise P(humans are uploadable). Maybe not by MUCH. You rightly point out the dissimilarities that make it a relatively small increase. But it certainly has some bearing, and in the absence of better evidence it is at least encouraging.
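This symmetry is just conservation of expected evidence; a minimal sketch with invented numbers shows why the downward update on a failure would have been large while the actual upward update is small:

```python
# Invented probabilities, for illustration only.
# U = "human brains are uploadable"
# E = "C. elegans was uploaded with behaviors/memories preserved"
p_U = 0.5
p_E_given_U = 0.8       # worm upload very likely if brains in general
                        # are uploadable
p_E_given_not_U = 0.6   # still fairly likely even if human brains are
                        # not: the worm is vastly simpler

p_E = p_E_given_U * p_U + p_E_given_not_U * (1 - p_U)          # 0.7
print(f"P(U|E)  = {p_E_given_U * p_U / p_E:.2f}")              # 0.57
print(f"P(U|~E) = {(1 - p_E_given_U) * p_U / (1 - p_E):.2f}")  # 0.33
# Success nudges P(U) from 0.50 up to 0.57; failure would have
# dropped it to 0.33. Small encouragement, large potential warning.
```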
Yeesh. Step out for a couple days to work on your bodyhacking and there’s a trench war going on when you get back...
In all seriousness, there seems to be a lot of shouting here. Intelligent shouting, mind you, but I am not sure how much of it is actually informative.
This looks like a pretty simple situation to run a cost/benefit analysis on: will censoring of the sort proposed help, hurt, or have little appreciable effect on the community?
Benefits:
- May help public image. (Sub-benefits: makes LW more friendly to new persons; advances SIAI-related PR.)
- May reduce brain-eating discussions. (If I advocate violence against group X, even as a hypothetical, and you are a member of said group, then you have a vested political interest whether or not my initial idea was good, which leads to worse discussion.)
- May preserve what is essentially a community norm now (as many have noted) in the face of future change.
- Will remove one particularly noxious and bad-PR-generating avenue for trolling. (Which won’t remove trolling, of course. In fact, fighting trolls gives them attention, which they like: see Costs.)

Costs:
- May increase bad PR for censoring. (Rare in my experience, provided that the rules are sensibly enforced.)
- May lead to people not posting important ideas for fear of violating the rules. (Corollary: may help create an environment where people post less.)
- May create “silly” attempts to get around the rule by gray-areaing it (where people say things like “I won’t say which country, but it starts with United States and rhymes with Bymerica”), which is a headache.
- May increase trolling. (Trolls love it when there are rules to break, as these violations give them attention.)
- May increase the odds of LW community members acting violently.
Those are all the ones I could come up with in a few minutes after reading many posts. I am not sure what weights or probabilities to assign. Probabilities could be estimated by looking at other communities and their incidents of media exposure, comparing community size to exposure and total harm done, and comparing that against a sample of similarly sized communities; maybe with a focus on communities about the size LW is now, to cut down on the paperwork. Weights are trickier, but should probably be assigned in terms of expected harm to the community and its goals, and the types of harm that could be done.
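As a sketch of the bookkeeping such an evaluation would involve (every probability and weight below is a placeholder I made up; real values would have to come from the community comparisons described above):

```python
# Placeholder cost/benefit bookkeeping for the proposed censorship rule.
# All probabilities and utilities are invented for illustration.
factors = [
    # (description, probability of occurring, utility if it occurs)
    ("helps public image",            0.60,  +2.0),
    ("reduces brain-eating threads",  0.50,  +3.0),
    ("preserves the existing norm",   0.80,  +1.0),
    ("removes one trolling avenue",   0.70,  +1.0),
    ("bad PR from censoring",         0.10,  -2.0),
    ("chills important posts",        0.30,  -4.0),
    ("gray-area rule-lawyering",      0.50,  -1.0),
    ("attracts rule-breaking trolls", 0.30,  -1.0),
    ("member violence",               0.01, -50.0),  # low odds, huge harm
]

expected_value = sum(p * u for _, p, u in factors)
print(f"Expected net value of the rule: {expected_value:+.2f}")
```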
Hm. I know that the biological term may not be quite right here (the brain is biological, but scaling this idea up may be problematic), but I have wondered if certain psychological traits are not epigenetic: that is, it isn’t that you are some strange mutant if you express terminal value X strongly and someone else expresses it weakly. Rather, our brain structures lead to a certain common set of shared values, but different environmental conditions lead to those values being expressed more strongly or more weakly.
So, for instance, if “status” (however that cashes out here) is highly important instrumentally in one’s younger years, the brain develops it into a terminal value. If “intelligence” (again, cashing that out will be important) is highly important instrumentally in younger years, then it develops into a terminal value. It isn’t that anyone else is a horrible mutant; we probably all share values, but those values may conflict, and so it may matter which traits we express more strongly. Of course, if it is anything like an epigenetic phenomenon, then there may be some very complicated factors to consider.
Possible falsifiers for this:
- If environment, particularly social environment (although evolution is dumb and it could be some mechanism that just correlates highly), in formative years does not correlate highly with terminal values later in life.
- If people actually do seem to share a set of values with relatively equal strength.
- If terminal values are often modified strongly after the majority of brain development has ceased.
- If some terminal values do not correlate with some instrumental value, but nevertheless vary strongly between individuals.
The fact that I won’t be able to care about it once I am dead doesn’t mean that I don’t value it now. And I can value future-states from present-states, even if those future-states do not include my person. I don’t want future sapient life to be wiped out, and that is a statement about my current preferences, not my ‘after death’ preferences. (Which, as noted, do not exist.)
The difference is whether you care about sapience as an instrumental or a terminal value.
If I only instrumentally value other sapient beings existing, then of course, I don’t care whether or not they exist after I die. (They will cease to add to my utility function, through no fault of their own.)
But if I value the existence of sapient beings as a terminal value, then why would it matter if I am dead or alive?
So, if I only value sapience because, say, other sapient beings existing makes life easier than it would be if I was the only one, then of course I don’t care whether or not they exist after I die. But if I just think that a universe with sapient beings is better than one without because I value the existence of sapience, then that’s that.
Which is not to deny the instrumental value of other sapient beings existing. Something can have instrumental value and also be a terminal value.
I think I have a different introspection here.
When I have a feeling such as ‘doing-whats-right’ there is a positive emotional response associated with it. Immediately I attach semantic content to that emotion: I identify it as being produced by the ‘doing-whats-right’ emotion. How do I do this? I suspect that my brain has done the work to figure out that emotional response X is associated with behavior Y, and just does the work quickly.
But this is malleable. Over time, the emotional response associated with an act can change, and this does not necessarily indicate a change in semantic content. I can, for example, give to a charity that I am not convinced is good and still often get the ‘doing-whats-right’ emotion even though the semantic content isn’t really there. I can also find new things I value, and occasionally I will acknowledge that I value something before I get positive emotional reinforcement. So in my experience, they aren’t identical.
I strongly suspect that if you reprogrammed my brain to value counting paperclips, it would feel the same as doing what is right. At very least, this would not be inconsistent. I might learn to attach paperclippy instead of good to that emotional state, but it would feel the same.
I am not sure that all humans have the empathy toward humanity on the whole that is assumed by Adams here.
I think you are discounting effects such as confirmation bias, which lead us to notice what we expect and can easily label while ignoring information that contradicts our beliefs. If 99 out of 100 women don’t nag and 95 out of 100 men don’t nag, then given a stereotype that women nag, I would expect people to think of the one woman they know who nags rather than the five men they know who do the same.
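Running the numbers in that hypothetical makes the asymmetry stark:

```python
# The hypothetical rates from above: 99/100 women and 95/100 men
# do not nag. These are illustrative figures, not real data.
women = men = 100
nagging_women = round(women * (1 - 0.99))  # 1
nagging_men = round(men * (1 - 0.95))      # 5

print(f"nagging women: {nagging_women}, nagging men: {nagging_men}")
# Nagging is five times as common among the men in this hypothetical,
# yet confirmation bias points attention at the one woman who fits
# the stereotype rather than the five men who contradict it.
```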
Frankly, without data to support the claim that:
I would find the claim highly suspect, given even a rudimentary understanding of our psychological framework.