An authoritative textbook-style index/survey-article on everything in LW. We have been generating lots of really cool intellectual work, but without a prominently placed, complete, hierarchical, and well-updated overview of “here’s the state of what we know”, we aren’t accumulating knowledge. This is a big project and I don’t know how I could make it happen, besides pushing the idea, which is famously ineffective.
LW needs a king. This idea is bound to be unpopular, but how awesome would it be to have someone whose paid job it was to make LW into an awesome and effective community? I imagine things like getting proper studies done of how site layout/design should be to make LW easy to use and sticky to the right kind of people (currently it sucks), contacting, coordinating, and encouraging meetup organizers individually (no one does this right now and lw-organizers has little activity), thinking seriously and strategically about problems like the one raised in the OP, and leading big projects like idea #1. Obviously this person would have CEO-level authority.
One problem is that our really high-powered agent types who are super dedicated to the community (e.g. lukeprog) get siphoned off into SI. We need another lukeprog or someone to be king of LW and deal with this kind of stuff.
Without a person in this king role, the community has to waste time and effort making community-meta threads like these. Communities and democratic methods suck at doing the kind of strategic, centralized, coherent decision making that we really need. Managing these problems really isn’t the community’s comparative advantage. If these problems were dealt with, it would be a lot easier to focus on intellectual productivity.
Communities and democratic methods suck at doing the kind of strategic, centralized, coherent decision making that we really need.
Kings also suck at it, on average. Of course, if we are lucky and find a good king… the only problem is that king selection is the kind of strategic decision humans suck at.
They should be self-selected, then we don’t have to rely on the community at large.
There’s this wonderful idea called “Do-ocracy” where everyone understands that the people actually willing to do things get all the say as to what gets done. This is where benevolent dictators like Linus Torvalds get their power.
Our democratic training has taught us to think this idea is a recipe for totalitarian disaster. The thing is, even if the democratic memplex were right in its injunction against authority, a country and an internet community are entirely different situations.
In a country, if you have king-power, you have military and legal power as well, and can physically coerce people to do what you want. There is enough money and power at stake that most of the people who want the job are in it for the money and power, not the public good. Thus measures like heritable power (at least you’re not selecting for power-hunger) and democracy (now we’re theoretically selecting for public support).
On the other hand, in a small artificial community like a meetup, a hackerspace, or LessWrong, there is no military to control, the banhammer is much less power than the noose or the dungeon, and there is barely anything to gain by embezzling taxes (as a meetup organizer, I could embezzle about $30 a month...). At worst, a corrupt monarch could ban all the good people and destroy the community, but the incentive to do damage to the community is roughly “for the lulz”. Lulz is much cheaper elsewhere. The amount of damage is highly limited by the fact that, in the absence of military power, the do-ocrat’s power over people is derived from respect, which would rapidly fall off if they did dumb things. On the other hand, scope insensitivity makes the apparent do-gooder motivation just as high. So in a community like this, most of the people willing to do the job will be those motivated to do public good and those agenty enough to do it, so self-selection (do-ocracy) works and we don’t need other measures.
There’s this wonderful idea called “Do-ocracy” where everyone understands that the people actually willing to do things get all the say as to what gets done. … Our democratic training has taught us to think this idea is a recipe for totalitarian disaster.
I can’t speak for your democratic training, but my democratic training has absolutely no problem with acknowledging merits and giving active people trust proportional to their achievements and letting them decide what more should be done.
It has become somewhat fashionable here, in the Moldbuggian vein, to blame community failures on democracy. But what particular democratic mechanisms have caused the lack of strategic decisions on LW? Which kind of decisions? I don’t see much democracy here—I don’t recall participating in an election, for example, or voting on a proposed policy, or seeing a heated political debate which prevented a beneficial resolution from being implemented. I recall the recent implementation of the karma penalty feature, which a lot of LWers were unhappy about but which was put in force nevertheless in a quite autocratic manner. So perhaps the lack of strategic decisions is caused by the fact that
there just aren’t people willing to even propose what should be done
nobody has any reasonable idea what strategic decision should be made (it is one thing to say what kind of decisions should be made—e.g. “we should choose an efficient site design”, but a rather different thing to make the decision in detail—e.g. “the front page should have a huge violet picture of a pony on it”)
people aren’t willing to work for free
None of those has much to do with democracy. I am pretty sure that if you volunteered to work on any of your suggestions (contacting meetup organisers, improving the site design...), nobody would seriously object and you would easily get some official status on LW (moderator-style). To do anything from the examples you have mentioned, you wouldn’t need dictatorial powers.
The power of the banhammer is roughly proportional to the power of the dungeon. If it seems less threatening, it’s only because an online community is generally less important to people’s lives than society at large.
A bad king can absolutely destroy an online community. Banning all the good people is actually one of the better things a bad king can do, because it can spark an organized exodus, which is just inconvenient. But by adding restrictions and terrorizing the community with the threat of bans, a bad king can make the good people self-deport. And then the community can’t be revived elsewhere.
At worst, a corrupt monarch could … destroy the community, but the incentive to do damage to the community is roughly “for the lulz”. Lulz is much cheaper elsewhere.
I admit, I have seen braindead moderators tear a community apart (/r/anarchism for one).
I have just as often seen lack of moderation prevent a community from becoming what it could (4chan, though I’m unsure whether 4chan is glorious or a cesspool).
And I have seen strong moderation keep a community together.
The thing is, death by incompetent dictator is much more salient to our imaginations than death by slow entropy and September effects. Incompetent dictators have a face, which makes us take them much more seriously than an unbiased assessment of the threats would warrant.
The power of the banhammer is roughly proportional to the power of the dungeon. If it seems less threatening, it’s only because an online community is generally less important to people’s lives than society at large.
There’s a big difference between exile and prison, and the power of exile depends on the desirability of the place in question.
There is a particular princess in the local memespace with nigh-absolute current authority.
edited to clarify: by ‘local memespace’ I mean the part of the global memespace that is in use locally, not that there’s something we have going that isn’t known more broadly
If you image-search ‘obey princess’, you will get a hint. Note, the result is… an alicorn.
But more seriously (still not all that seriously), there would be colossal PR and communication disadvantages to naming a king, which would be mostly dodged by naming a princess.
In particular, people would probably overinterpret king, but file princess under ‘wacky’. This would not merely dodge, but could help against the ‘cold and calculating’ vibe some people get.
Our benevolent dictator isn’t doing much dictatoring. If I understand correctly that it’s EY, he has a lot more hats to wear, and doesn’t have the time to do LW-managing full time.
Is he willing to improve LW, but not able? Then he is not a dictator. Is he able, but not willing? Then he is not benevolent. Is he both willing and able? Then whence cometh suck? Is he neither willing nor able? Then why call him God?
As with God, if we observe a lack of leadership, it is irrelevant whether we nominally have a god-emperor or not. The solution is always the same: build a new one that will actually do the job we want done.
You’re defending yourself against accusations of being a phyg leader over there, and over here you’re enjoying a comment that implies that either the commenter, or the people the commenter is addressing perceive you as a god? And not only that, but this might even imply that you endorse the solution that is “always the same” of “building a new one (god-emperor)”.
I tend to see a fairly sharp distinction between negative aspects of phyg-leadership and the parts that seem like harmless fun, like having my own volcano island with a huge medieval castle, and sitting on a throne wearing a cape saying in dark tones, “IT IS NOT FOR YOU TO QUESTION MY FUN, MORTAL.” Ceteris paribus, I’d prefer that working environment if offered.
And how are people supposed to make the distinction between your fun and signs of pathological narcissism? You and I both know the world is full of irrationality, and that this place is public. You’ve endured the ravages of the hatchet job and Rationalwiki’s annoying behaviors. This comment could easily be interpreted by them as evidence that you really do fancy yourself a false prophet.
What’s more, I (as in someone who is not a heartless and self-interested reporter, who thinks you’re brilliant, who appreciates you, who is not some completely confused person with no serious interest in rationality) am now thinking:
How do I make the distinction between a guy who has an “arrogance problem” and has fun encouraging comments that imply that people think of him as a god vs. a guy with a serious issue?
Try working in system administration for a while. Some people will think you are a god; some people will think you are a naughty child who wants to be seen as a god; and some people will think you are a sweeper. Mostly you will feel like a sweeper … except occasionally when you save the world from sin, death, and hell.
I feel the same way as a web developer. One day I’m being told I’m a genius for suggesting that a technical problem might be solved by changing a port number. The next day, I’m writing a script to compensate for the incompetent failures of a certain vendor.
When people ask me for help, they assume I can fix anything. When they give me a project, they assume they know better how to do it.
And how are people supposed to make the distinction between your fun and signs of pathological narcissism?
I don’t see this as a particular problem in this instance. The responses are, if anything, an indication that he isn’t taking himself too seriously. The more pathologically narcissistic types tend to be more somber about their power and image.
No, if there were a problem here it would be that the joke was in poor taste: in particular, if there were those who had been given the impression that Eliezer’s power or narcissism really was corrupting his thinking, if he had begun to use his power arbitrarily on his own whim, or if his arrogance had left him incapable of receiving feedback or of perceiving the consequences his actions have on others or even himself. Basically, jokes about how arrogant and narcissistic one is only work when people don’t perceive you as actually having problems in that regard. If you really do have real arrogance problems then joking that you have them while completely failing to acknowledge the problem makes you look grossly out of touch and socially awkward.
For my part, however, I don’t have any direct problem with Eliezer appreciating this kind of reasoning. It does strike me as a tad naive of him and I do agree that it is the kind of thing that makes Luke’s job harder. Just… as far as PR missteps made by Eliezer this seems so utterly trivial as to be barely worth mentioning.
How do I make the distinction between a guy who has an “arrogance problem” and has fun encouraging comments that imply that people think of him as a god vs. a guy with a serious issue?
The way I make such distinctions is to basically ignore ‘superficial arrogance’. I look at the real symptoms. The ones that matter and have potential direct consequences. I look at their ability to comprehend the words of others—particularly those others without the power to ‘force’ them to update. I look at how much care they take in exercising whatever power they do have. I look at how confident they are in their beliefs and compare that to how often those beliefs are correct.
you’re enjoying a comment that implies that either the commenter, or the people the commenter is addressing perceive you as a god?
I was drawing an analogy to Epicurus on this issue because the structure of the situation is the same, not because anyone perceives (our glorious leader) EY as a god.
And not only that, but this might even imply that you endorse the solution that is “always the same” of “building a new one (god-emperor)”.
I bet he does endorse it. His life’s work is all about building a new god to replace the negligent or nonexistent one that let the world go to shit. I got the idea from him.
My response was more about what interpretations are possible than what interpretation I took.
I was drawing an analogy to Epicurus on this issue because the structure of the situation is the same, not because anyone perceives (our glorious leader) EY as a god.
Okay. There’s a peculiar habit in this place where people say things that can easily be interpreted as something that will draw persecution. Then I point it out, and nobody cares.
I bet he does endorse it. His life’s work is all about building a new god to replace the negligent or nonexistent one that let the world go to shit. I got the idea from him.
Okay. It probably seems kind of stupid that I failed to realize that. Is there a post that I should read?
Okay. There’s a peculiar habit in this place where people say things that can easily be interpreted as something that will draw persecution. Then I point it out, and nobody cares.
This is concerning. My intuitions suggest that it’s not a big deal. I infer that you think it’s a big deal. Someone is miscalibrated.
Do you have a history with persecution that makes you more attuned to it? I am blissfully ignorant.
Okay. It probably seems kind of stupid that I failed to realize that. Is there a post that I should read?
I don’t know if there’s an explicit post about it. I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I’ve forgotten.
This is concerning. My intuitions suggest that it’s not a big deal. I infer that you think it’s a big deal. Someone is miscalibrated.
I really like this nice, clear, direct observation.
Do you have a history with persecution that makes you more attuned to it? I am blissfully ignorant.
Yes, but more relevantly, humanity has a history with persecution—lots of intelligent people and people who want to change the world, from Socrates to Gandhi, have been persecuted.
Here Eliezer is in a world full of Christians who believe that dreaded Satan is going to reincarnate soon, claim to be a God, promise to solve all the problems, and take over earth. Religious people have been known to become violent for religious reasons. Surely building an incarnation of Satan would, if that were their interpretation of it, qualify as more or less the ultimate reason to launch a religious war. These Christians outnumber Eliezer by a lot. And Eliezer, according to you, is talking about building WHAT?
My take on the “build a God-like AI” idea is that it is pretty crazy. I might like this idea less than the Christians probably do, seeing as how I don’t have any sense that Jesus is going to come back and reconstruct us after it does its optimization...
I don’t know if there’s an explicit post about it. I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I’ve forgotten.
I went out looking for myself and I just watched the bloggingheads video (6:42) where Robert Wright says to Eliezer “It sounds like what you’re saying is we need to build a God” and Eliezer is like “Why don’t we call it a very powerful optimizing agent?” and grins like he’s just fooled someone and Robert Wright thinks and he’s like “Why don’t we call that a euphemism for God?” which destroys Eliezer’s grin.
If Eliezer’s intentions are to build a God, then he’s far less risk-averse than the type of person who would simply try to avoid being burned at the stake. In that case the problem isn’t that he makes himself look bad...
I went out looking for myself and I just watched the bloggingheads video (6:42) where Robert Wright says to Eliezer “It sounds like what you’re saying is we need to build a God” and Eliezer is like “Why don’t we call it a very powerful optimizing agent?” and grins like he’s just fooled someone
Like he’s just fooled someone? I see him talking like he’s patiently humoring an ignorant child who is struggling to distinguish between “Any person who gives presents at Christmas time” and “The literal freaking Santa Claus, complete with magical flying reindeer”. He isn’t acting like he has ‘fooled’ anyone or acting in any way ‘sneaky’.
and Robert Wright thinks and he’s like “Why don’t we call that a euphemism for God?” which destroys Eliezer’s grin.
While I wouldn’t have been grinning previously, whatever my expression had been, it would change in response to that question in the direction of irritation and impatience. The answer to “Why don’t we call that a euphemism for God?” is “Because that’d be wrong and totally muddled thinking”. When your mission is to create an actual very powerful optimization agent and that—and not gods—is actually what you spend your time researching, then a very powerful optimization agent isn’t a ‘euphemism’ for anything. It’s the actual core goal. Maybe, at a stretch, “God” can be used as a euphemism for “very powerful optimizing agent”, but never the reverse.
I’m not commenting here on the question of whether there is a legitimate PR concern regarding people pattern matching to religious themes having dire, hysterical and murderous reactions. Let’s even assume that kind of PR concern is legitimate for the purpose of this comment. Even then there is a distinct difference between “failure to successfully fool people” and “failure to educate fools”. It would be the latter task that Eliezer has failed at here, and the former charge would be invalid. (I felt the paragraph I quoted was unfair to Eliezer with respect to blurring that distinction.)
I don’t think that an AI that goes FOOM would be exactly the same as any of the “Gods” humanity has been envisioning and may not even resemble such a God (especially because, if it were a success, it would theoretically not behave in self-contradictory ways like making sinful people, knowing exactly what they’re going to do, making them to do just that, telling them not to act like what they are and then punishing them for behaving the way it designed them to). I don’t see a reason to believe that it is possible for any intellect to be omniscient, omnipotent or perfect. That includes an AI. These, to me, would be the main differences.
Robert Wright appears to be aware of this, as his specific wording was “It seems to me that in some sense what you’re saying is that we need to build a God.”
If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said “in some sense” which means that his question is about something deeper than choosing a word. The wording he’s using is asking something more like “Do you think we should build something similar to a God?”
The way that I interpret this question is not “What do we call this thing?” but more “You think we should build a WHAT?” with the connotations of “What are you thinking?” because the salient thing is that building something even remotely similar to a God would be very, very dangerous.
The reason I interpreted it this way is partly because instead of interpreting everything I hear literally, I will often interpret wording based on what’s salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked “Are you about to commit murder?” I would NOT accept “Why don’t we call it knife relocation?” as an acceptable answer.
Afterward, Robert Wright says that Eliezer is being euphemistic. This perception that Eliezer’s answer was an attempt to substitute nice sounding wording for something awful confirms, to me, that Robert’s intent was not to ask “What word should we use for this?” but was intended more like “You think we should build a WHAT? What are you thinking?”
Now, it could be argued that Eliezer accidentally failed to detect the salient connotation. It could be argued, and probably fairly effectively (against me anyway), that the reason for Eliezer’s mistake is that he was having one of his arrogant moments and he genuinely thought that, because of a gigantic intelligence difference between Robert and himself, Robert was asking a moronic question based on the stupid perception that a super powerful AI would be exactly the same as a real God (whatever that means). In this case, I would classify that as a “social skills / character flaw induced faux pas”.
In my personal interpretation of Eliezer’s behavior, I’m giving him more credit than that—I am assuming that he has previously encountered people by that point (2010) who have flipped out about the possibility that he wants to build a God and have voiced valid and poignant concerns like “Why do you believe it is possible to succeed at controlling something a bazillion times smarter than you?” or “Why would you want us imperfect humans to make something so insanely powerful if it’s more or less guaranteed to be flawed?” I’m assuming that Eliezer correctly interpreted that the salient part of someone’s question is not in its literal wording but in the connotations relating to the situation.
This is why it looks, to me, like Eliezer’s intent was to brush him off by choosing to answer this question as if it were a question about what word to use and hoping that Robert didn’t have the nerve to go for the throat with valid and poignant questions like the examples above.
The topic of whether this was an unintentional faux pas or an intentional brush-off isn’t the most important thing here.
The most important questions, in my opinion, are:
“Does Eliezer intend to build something this powerful?”
“Does Eliezer really think that something a bazillion times as intelligent as himself can be controlled?”
“Do you and I agree/disagree that it’s a good idea to build something this powerful / that it can be controlled?”
If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said “in some sense” which means that his question is about something deeper than choosing a word. The wording he’s using is asking something more like “Do you think we should build something similar to a God?”
If forced to use that term and answer the question as you ask it, with a “Yes” or “No”, then the correct answer would be “No”. He is not trying to create a God; he has done years of work working out what he is trying to create and it is completely different from a God in nearly all features except “very powerful”. If you insist on that vocabulary you’re going to get “No, I don’t” as an answer. That the artificial intelligence Eliezer would want to create seems to Wright (and perhaps yourself) like it should be described as, considered a euphemism for, or reasoned about as if it is God is a feature of Wright’s lack of domain knowledge.
There is no disingenuity here. Eliezer can honestly say “We should create a very powerful (and carefully designed) optimizing agent” but he cannot honestly say “We should create a God”. (You may begin to understand some of the reasons why there is such a difference when you start considering questions like “Can it be controlled?”. Or at least when you start considering the answers to the same.) So Eliezer gave Wright the chance to get the answer he wanted (“Hell yes, I want to make a very powerful optimising agent!”) rather than the answer the question you suggest would have given him (“Hell no! Don’t create a God! That entails making at least two of the fundamental and critical ethical and practical blunders in FAI design that you probably aren’t able to comprehend yet!”)
The reason I interpreted it this way is partly because instead of interpreting everything I hear literally, I will often interpret wording based on what’s salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked “Are you about to commit murder?” I would NOT accept “Why don’t we call it knife relocation?” as an acceptable answer.
I reject the analogy. Eliezer’s answer isn’t like the knife relocation answer. (If anything the connotations are the reverse. More transparency and candidness rather than less.)
Now, it could be argued that Eliezer accidentally failed to detect the salient connotation. It could be argued, and probably fairly effectively (against me anyway), that the reason for Eliezer’s mistake is that he was having one of his arrogant moments and he genuinely thought that, because of a gigantic intelligence difference between Robert and himself, Robert was asking a moronic question based on the stupid perception that a super powerful AI would be exactly the same as a real God (whatever that means).
It could be that there really is an overwhelming difference in crystallized intelligence between Eliezer and Robert. The question—at least relative to Eliezer’s standards—was moronic. Or at least had connotations of ignorance of salient features of the landscape.
In this case, I would classify that as a “social skills / character flaw induced faux pas”.
There may be a social skills related faux pas here—and it is one of those situations where it is usually socially appropriate to say wrong things within an entirely muddled model of reality rather than educate the people you are speaking to. Maybe that means that Eliezer shouldn’t talk to people like Robert. Perhaps he should get someone trained explicitly in spinning webs of eloquent bullshit to optimally communicate with the uneducated. However the character flaws that I take it you are referring to—Eliezer’s arrogance and so forth—just aren’t at play here.
In my personal interpretation of Eliezer’s behavior, I’m giving him more credit than that
The net amount of credit given is low. You are ascribing a certain intention to Eliezer’s actions where that intention is clearly not achieved. “I infer he is trying to do X and he in fact fails to do X”. In such cases generosity suggests that if they don’t seem to be achieving X, haven’t said X is what they are trying to achieve, and X is inherently lacking in virtue, then by golly maybe they were in fact trying to achieve Y! (Eliezer really isn’t likely to be that actively incompetent at deviousness.)
I am assuming that he has previously encountered people by that point (2010) who have flipped out about the possibility that he wants to build a God
You assign a high likelihood to people flipping out (and even persecuting Eliezer) in such a way. Nyan considers it less likely. It may be that Eliezer doesn’t have people (and particularly people of Robert Wright’s intellectual caliber) flip out at him like that.
and have voiced valid and poignant concerns like “Why do you believe it is possible to succeed at controlling something a bazillion times smarter than you?” or “Why would you want us imperfect humans to make something so insanely powerful if it’s more or less guaranteed to be flawed?”
The kind of people to whom there is the remote possibility that it would be useful to even bother to attempt to explain the answers to such questions are also the kind of people who are capable of asking them without insisting on asking, and then belligerently emphasizing, wrong questions about ‘God’. This is particularly the case with the first of those questions, where the question of ‘controlling’ only comes up because of an intuitive misunderstanding of how one would relate to such an agent—i.e. thinking of it as a “God”, which is something we already intuit as “like a human or mammal but way powerful”.
“Does Eliezer intend to build something this powerful?”
If he can prove safety mathematically then yes, he does.
At around the time I visited Berkeley there was a jest among some of the SingInst folks “We’re thinking of renaming ourselves from The Singularity Institute For Artificial Intelligence to The Singularity Institute For Or Against Artificial Intelligence Depending On What Seems To Be The Best Altruistic Approach All Things Considered”.
There are risks to creating something this powerful and, in fact, the goal of Eliezer and SIAI isn’t “research AGI”… plenty of researchers work on that. They are focused on Friendliness. Essentially… they are focused on the very dangers that you describe here and are dedicating themselves to combating those dangers.
Note that it is impossible to evaluate a decision to take an action without considering what alternative choice there is. Choosing to dedicate one’s efforts to developing an FAI (“safe and desirable very powerful optimizing agent”) has a very different meaning if the alternative is millennia of peace and tranquility than the same decision to work on FAI if the alternative is “someone is going to create a very powerful optimizing agent anyway but not bother with rigorous safety research”.
“Does Eliezer really think that something a bazillion times as intelligent as himself can be controlled?”
If you’re planning to try to control the super-intelligence you have already lost. The task is to select, from the space of all possible mind designs, a mind that will do things that you want done.
“Do you and I agree/disagree that it’s a good idea to build something this powerful
Estimate: Slightly disagree. The biggest differences in perception may be surrounding what the consequences of inaction are.
/ that it can be controlled?”
Estimate: Disagree significantly. I believe your understanding of likely superintelligence behavior and self development has too much of an anthropocentric bias. Your anticipations are (in my estimation) strongly influenced by how ethical, intellectual and personal development works in gifted humans.
The above disagreement actually doesn’t necessarily change the overall risk assessment. I just expect the specific technical problems that must be overcome in order to prevent “Super-intelligent rocks fall! Everybody dies.” to be slightly different in nature. Probably with more emphasis on abstract mathematical concerns.
I really like this nice, clear, direct observation.
Thank you. I will try to do more of that.
Here Eliezer is in a world full of Christians who believe that dreaded Satan is going to reincarnate soon, claim to be a God, promise to solve all the problems, and take over earth. Religious people have been known to become violent for religious reasons. Surely building an incarnation of Satan would, if that were their interpretation of it, qualify as more or less the ultimate reason to launch a religious war. These Christians outnumber Eliezer by a lot. And Eliezer, according to you, is talking about building WHAT?
Interesting. Religious people seem a lot less scary to me than this. My impression is that the teeth have been taken out of traditional christianity. There are a few christian terrorists left in north america, but they seem like holdouts raging bitterly against the death of their religion. They are still in the majority in some places, though, and can persecute people there.
I don’t think that the remains of theistic christianity could reach an effective military/propaganda arm all the way to Berkeley even if they did somehow misinterpret FAI as an assault on God.
Nontheistic christianity, which is the ruling religion right now, could flex enough military might to shut down SI, but I can’t think of any way to make them care.
I live in Vancouver, where as far as I can tell, most people are either non-religious, or very tolerant. This may affect my perceptions.
My take on the “build a God-like AI” idea is that it is pretty crazy. I might like this idea less than the Christians probably do, seeing as how I don’t have any sense that Jesus is going to come back and reconstruct us after it does its optimization...
This is a good reaction. It is good to take seriously the threat that an AI could pose. However, the point of Friendly AI is to prevent all that and make sure that if it happens, it is something we would want.
:) You can be as direct as you want to with me. (Normal smilie to prevent the tiny sad moments.)
Interesting. Religious people seem a lot less scary to me than this. My impression is that the teeth have been taken out of traditional christianity. There are a few christian terrorists left in north america, but they seem like holdouts raging bitterly against the death of their religion. They are still in the majority in some places, though, and can persecute people there.
Okay, good point. I agree that religion is losing ground. However, I’ve witnessed some pretty creepy stuff coming out of the churches. Some of them are saying the end is near and doing things like having events to educate about it. Now, that experience was one that I had in a particular location which happens to be very religious. I’m not sure that it was representative of what the churches are up to in general. I admit ignorance when it comes to what average churches are doing. But if there’s enough end-times kindling being thrown into the pit here, people who were previously losing faith may flare up into zealous Christians with the right spark. Trying to build what might be interpreted as an Antichrist would be quite the spark. The imminent arrival of an Antichrist may be seen as a fulfillment of the end times prophecies and be seen as a sign that the Christian religion really is true after all.
A lot is at stake here in the mind of the Christian. If it’s not the end of the world, opposing a machine “God” is still going to look like a good idea—it’s dangerous. If it is the end of the world, they’d better get their s—in gear and become all super-religious and go to battle against Satan because judgment day is coming and if they don’t, they’re going to be condemned. Being grateful to God and following a bunch of rules is pretty hard, especially when you can’t actually SEE the God in question. How people are responding to the mundane religious stuff shouldn’t be seen as a sign of how they’ll react when something exceptional happens.
Being terrified out of your mind that someone is building a super-intelligent mind is easy. This takes no effort at all. Heck, at least half of LessWrong would probably be terrified in this case. Being extra terrified because of end times prophecies doesn’t take any thought or effort. And fear will kill their minds, perhaps making religious feelings more likely. That, to me, seems to be a likely possibility in the event that someone attempts to build a machine “God”. You’re seeing a decline in religion and appear to be thinking that it’s going to continue decreasing. I see a decline in religion and I think it may decrease but also see the potential for the right kinds of things to trigger a conflagration of religious fervor.
There are other memes that add an interesting twist: The bible told them that a lot of people would lose faith before the Antichrist comes. Their own lack of faith might be taken as evidence that the bible is correct.
And I have to wonder how Christianity survived things like the plagues that wiped out half of Europe. They must have been pretty disenchanted with God—unless they interpreted it as the end of the world and became too terrified of eternal condemnation to question why God would allow such horrible things to happen.
Perhaps one of the ways the Christianity meme defends itself is to flood the minds of the religious with fear at the exact moments in history when they would have the most reason to question their faith.
Last year’s Gallup poll says that 78% of Americans are Christian. Even if they’ve lost some steam, if the majority still uses that word to self-identify, we should really acknowledge the possibility that some event could trigger zealous reactions.
I have been told that before Hitler came to power, the intelligentsia of Germany was laughing at him thinking it would never happen. It’s a common flaw of nerds to underestimate the violence and irrationality that the average person is capable of. I think this is because we use ourselves as a model and think they’ll behave, feel and think a lot more like we do than they actually will. I try to compensate for this bias as much as possible.
I live in Vancouver, where as far as I can tell, most people are either non-religious, or very tolerant.
BTW, where I am (i.e. among twentysomething university students in central Italy) atheists take the piss out of believers waaaaay more often than the other way round.
I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I’ve forgotten.
I’m not sure I’ve heard any detailed analysis of the Friendly AI project specifically in those terms—at least not any that I felt was worth my time to read—but it’s a common trope of commentary on Singularitarianism in general.
No less mainstream a work than Deus Ex, for example, quotes Voltaire’s famous “if God did not exist, it would be necessary to create him” in one of its endings—which revolves around granting a friendly (but probably not Friendly) AI control over the world’s computer networks.
No less mainstream a work than Deus Ex, for example, quotes Voltaire’s famous “if God did not exist, it would be necessary to create him” in one of its endings—which revolves around granting a friendly (but probably not Friendly) AI control over the world’s computer networks.
ROT-13:
Vagrerfgvatyl, va gur raqvat Abeantrfg ersref gb, Uryvbf (na NV) pubbfrf gb hfr W.P. Qragba (gur cebgntbavfg jub fgvyy unf zbfgyl-uhzna cersreraprf) nf vachg sbe n PRI-yvxr cebprff orsber sbbzvat naq znxvat vgfrys (gur zretrq NV naq anab-nhtzragrq uhzna) cuvybfbcure-xvat bs gur jbeyq va beqre gb orggre shysvyy vgf bevtvany checbfr.
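(Aside: if you’d rather not hunt down a web tool to read the spoiler above, here is a minimal Python sketch for decoding it using only the standard library. The truncated `spoiler` string is just a placeholder; paste in the full ROT-13 text yourself.)

import codecs

# Placeholder: paste the full ROT-13'd comment above into this string.
spoiler = "Vagrerfgvatyl, va gur raqvat ..."

# ROT-13 is its own inverse, so the built-in "rot13" codec both encodes and decodes.
print(codecs.decode(spoiler, "rot13"))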
over here, you’re enjoying a comment that implies that either the commenter, or the people the commenter is addressing perceive you as a god?
I have to agree with Eliezer here: this is a terrible standard for evaluating phygishness. Simply put, enjoying that kind of comment does not correlate at all with the features that actually make phygish organizations, social clubs, etc. harmful. There are plenty of Internet projects that refer to their most prominent leaders with such titles as God-King, “benevolent dictator” and the like; it has no implication at all.
You have more faith than I do that it will not be intentionally or unintentionally misinterpreted.
Also, I am interpreting that comment within the context of other things. The “arrogance problem” thread, the b - - - - - - k, Eliezer’s dating profile, etc.
What’s not clear is whether you or I are more realistic when it comes to how people are likely to interpret, in not only a superficial context (like some hatchet jobbing reporter who knows only some LW gossip), but with no context, or within the context of other things with a similar theme.
Let’s go to the object level: in the case of God, the fact that god is doing nothing is not evidence that Friendly AI won’t work.
In the case of EY the supposed benevolent dictator, the fact that he is not doing any benevolent dictatoring is explained by the fact that he has many other things that are more important. That prevents us from learning anything about the general effectiveness of benevolent dictators, and we have to rely on the prior belief that it works quite well.
There are alternatives to monarchy, and an example of a disappointing monarch should suggest that alternatives might be worth considering, or at the very least that appointing a monarch isn’t invariably the answer. That was my only point.
I don’t think a CEO-level monarch is necessary, though I don’t know what job title a community “gardener” would map to. Do you think a female web developer who obviously cares a lot about LW and can implement solutions would be a good choice?
This doesn’t look like it’s very likely to happen though, considering that they’re changing focus:
female web developer who obviously cares a lot about LW and can implement solutions would be a good choice?
Female doesn’t matter, web development is good for being able to actually write what needs to be written. Caring is really good. The most important factor though is willingness to be audacious, grab power, and make things happen for the better.
Whether or not we need someone with CEO-power is uninteresting. I think such a person having more power is good.
If you’re talking about yourself, go for it. Get a foot in the code, make the front page better, be audacious. Make this place awesome.
I’ve said before in the generic, but in this case we can be specific: If you declare yourself king, I’ll kneel.
I’m opposed to appointing her as any sort of actual-power-having-person.
The personal antipathy there has been distinctly evident to any onlookers who are mildly curious about how status and power tend to influence human behavior and thought.
I think anyone with any noticeable antipathy between them and any regular user should not have unilateral policymaking power, except Eliezer if applicable because he was here first. (This rules me out too. I have mod power, but not mod initiative—I cannot make policy.)
I think anyone with any noticeable antipathy between them and any regular user should not have unilateral policymaking power, except Eliezer if applicable because he was here first. (This rules me out too. I have mod power, but not mod initiative—I cannot make policy.)
I agree and note that it is even more important that people with personal conflicts don’t have the power (or, preferably, voluntarily waive the power) to actively take specific actions against their personal enemies.
(Mind you, the parent also seems somewhat out of place in the context and very nearly comical given the actual history of power abuses on this site.)
Female doesn’t matter, web development is good for being able to actually write what needs to be written. Caring is really good. The most important factor though is willingness to be audacious, grab power, and make things happen for the better.
Well I do have the audacity.
If you’re talking about yourself, go for it. Get a foot in the code, make the front page better, be audacious. Make this place awesome.
I would love to do that, but I’ve just gotten a volunteer offer for a much larger project I had an idea for. I had been hoping to do a few smaller projects on LW in the meantime, while I was putting some things together to launch my larger projects, and the timing seems to have worked out such that I will be doing the small projects while doing the big projects. In other words, my free time is projected to become super scarce.
However, if a job offer were presented to me from LessWrong / CFAR I would seriously consider it.
If you declare yourself king, I’ll kneel.
I don’t believe in this. I am with Eliezer on sentiments like the following:
In Two More Things to Unlearn from School he warns his readers that “It may be dangerous to present people with a giant mass of authoritative knowledge, especially if it is actually true. It may damage their skepticism.”
In Cached Thoughts he tells you to question what HE says. “Now that you’ve read this blog post, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you’ll think, “Cached thoughts.” My belief is now there in your mind, waiting to complete the pattern. But is it true? Don’t let your mind complete the pattern! Think!”
I would love to do that, but I’ve just gotten a volunteer offer for a much larger project I had an idea for. I had been hoping to do a few smaller projects on LW in the meantime, while I was putting some things together to launch my larger projects, and the timing seems to have worked out such that I will be doing the small projects while doing the big projects. In other words, my free time is projected to become super scarce.
grumble grumble. Like I said, everyone who could is doing something else. Me too.
However, if a job offer were presented to me from LessWrong / CFAR I would seriously consider it.
I don’t think they’ll take the initiative on this. Maybe you approach them?
I don’t believe in this. I am with Eliezer on sentiments like the following:
I don’t see how those relate.
But thank you.
Thank you for giving a shit about LW, and trying to do something good. I see that you’re actively engaging in the discussions in this thread and that’s good. So thanks.
grumble grumble. Like I said, everyone who could is doing something else. Me too.
Yeah. Well maybe a few of us will throw a few things at it and that’ll keep it going...
I don’t think they’ll take the initiative on this. Maybe you approach them?
I mentioned a couple times that I’m dying to have online rationality training materials and that I want them badly enough I am half ready to run off and make them myself. I said something like “I’d consider doing this for free or giving you a good deal on freelance depending on project size”. Nobody responded.
I don’t see how those relate.
Simply put: I’m not the type that wants obedience. I’m the type that wants people to think for themselves.
Thank you for giving a shit about LW, and trying to do something good. I see that you’re actively engaging in the discussions in this thread and that’s good. So thanks.
Aww. I think that’s the first time I’ve felt appreciated for addressing endless September. (: feels warm and fuzzy
Simply put: I’m not the type that wants obedience. I’m the type that wants people to think for themselves.
Please allow me to change your mind. I am not the type who likes obedience either. I agree that thinking for ourselves is good, and that we should encourage as much of it as possible. However, this does not negate the usefulness of authority:
Argument 1:
Life is big. Bigger than the human mind can reasonably handle. I only have so much attention to distribute around. Say I’m a meetup participant. I could devote some attention to monitoring LW, the mailing list, etc. until a meetup was posted, then overcome the activation energy to actually go. Or, the meetup organizer could mail me and say “Hi Nyan, come to Xday’s meetup”, then I just have to go. I don’t have to spend as much attention in the second case, so I have more to spend on thinking-for-myself that matters, like figuring out whether the mainstream assumptions about glass are correct.
So in that way, having someone to tell me what to think and do reduces the effort I have to spend on those things, and makes me more effective at the stuff I really care about. So I actually prefer it.
Argument 2:
Even if I had infinite capacity for thinking for myself and going my own way, sometimes it just isn’t the right tool for the job. Thinking for myself doesn’t let me coordinate with other people, or fit into larger projects, or affect how LW works, or many other things. If I instead listen to some central coordinator, those things become easy.
So even if I’m a big fan of self-sufficiency and skepticism, I appreciate authority where available. Does this make sense?
Replies to downvoted comments blah blah blah
Perhaps we should continue this conversation somewhere more private… /sleaze
Please allow me to change your mind. I am not the type who likes obedience either.
Well that is interesting and unexpected.
Argument 1:
This seems to be more of a matter of notification strategies—one where you have to check a “calendar” and one where the “calendar” comes to you. I am pattern-matching the concept “reminder” here. It seems to me that reminders, although important and possibly completely necessary for running a functional group, would be more along the lines of a behavioral detail as opposed to a fundamental leadership quality. I don’t know why you’re likening this to obedience.
Even if I had infinite capacity for thinking for myself
We do not have infinite capacity for critical thinking. True. I don’t call trusting other people’s opinions obedience. I call it trust. That is rare for me. Very rare for anything important. Next door to trust is what I do when I’m short on time or don’t have the energy: I half-ass it. I grab someone’s opinion, go “Meh, 70% chance they’re right?” and slap it in.
I don’t call that obedience, either.
I call it being overwhelmingly busy.
Thinking for myself doesn’t let me coordinate with other people, or fit into larger projects, or affect how LW works, or many other things. If I instead listen to some central coordinator, those things become easy.
Organizing trivial details is something I call organizing. I don’t call it obedience.
When I think of obedience I think of that damned nuisance demand that punishes me for being right. This is not because I am constantly right—I’m wrong often enough. I have observed, though, that some people are more interested in having power than in wielding it meaningfully. They don’t listen, and they use power as a way to avoid updating (leading them to be wrong frequently). They demand this thing “obedience”, and that seems to be a warning that they are about to act as if might makes right.
My idea of leadership looks like this:
If you want something new to happen, do it first. When everyone else sees that you haven’t been reduced to a pile of human rubble by the new experience, they’ll decide the “guinea pig” has tested it well enough that they’re willing to try it, too.
If you really want something to get done, do it your damn self. Don’t wait around for someone else to do it, nag others, etc.
If you want others to behave, behave well first. After you have shown a good intent toward them, invite them to behave well, too. Respect them and they will usually respect you.
If there’s a difficulty, figure out how to solve it.
Give people something they want repeatedly and they come back for it.
If people are grateful for your work, they reciprocate by volunteering to help or donating to keep it going.
To me, that’s the correct way of going about it. Using force (which I associate with obedience) or expecting people not to have thoughts of their own is not only completely unnecessary but pales in comparison effectiveness-wise.
Maybe my ideas about obedience are completely orthogonal to yours. If you still think obedience has some value I am unaware of, I’m curious about it.
if you want to continue...
Thank you for your interest. It feels good.
I have a romantic interest right now; although we have not officially deemed our status a “relationship”, we are considering one another as potential serious partners.
This came to both of us as a surprise. I had burned out on dating and deleted my dating profile. I was like:
insane amount of dating alienation * ice cube’s chance of finding compatible partner > benefits of romance
(Narratives by LW Women thread if you want more)
And so now we’re like … wow this amount of compatibility is special. We should not waste the momentum by getting distracted by other people. So we decided that in order to let the opportunity unfold naturally, we would avoid pursuing other serious romantic interests for now.
So although I am technically available, my expected behavior, considering how busy I am, would probably be best classified as “dance card full”.
We seem to have different connotations for “obedience”, and might be talking about slightly different concepts. Your observations about how most people use power, and about the bad kind of obedience, are spot-on.
The topic came up because of the “I’d kneel to anyone who declared themselves king” thing. I don’t think such a behaviour pattern has to lead to bad, power-abusing obedience and submission. I think it’s just a really strategically useful thing to support someone who is going to act as the group-agency. You seem to agree on the important stuff and we’re just using different words. Case closed?
romantic.
lol what? Either you or me has utterly misunderstood something because I’m utterly confused. I made a mock-sleazy joke about the goddam troll toll, and suggested that we wouldn’t have to pay it but we could still discuss if we PMed instead. And then suddenly this romantic thing. OhgodwhathaveIdone.
You seem to agree on the important stuff and we’re just using different words. case closed?
Yeah I think the main difference may be that I am very wary of power abuse, so I avoid using terms like “obedience” and “kneeling” and “king” and choose other terms that imply a situation where power is balanced.
lol what? Either you or me has utterly misunderstood something
Sorry, I think I must have misread that. I’ve been having problems sleeping lately. If you want to talk in PM to avoid the troll toll go ahead.
Here are two things we desperately need:
An authoritative textbook-style index/survey-article on everything in LW. We have been generating lots of really cool intellectual work, but without a prominently placed, complete, hierarchical, and well-updated overview of “here’s the state of what we know”, we aren’t accumulating knowledge. This is a big project and I don’t know how I could make it happen, besides pushing the idea, which is famously ineffective.
LW needs a king. This idea is bound to be unpopular, but how awesome would it be to have someone whose paid job it was to make LW into an awesome and effective community? I imagine things like getting proper studies done of how site layout/design should be to make LW easy to use and sticky to the right kind of people (currently it sucks), contacting, coordinating, and encouraging meetup organizers individually (no one does this right now and lw-organizers has little activity), thinking seriously and strategically about problems like the one raised in the OP, and leading big projects like idea #1. Obviously this person would have CEO-level authority.
One problem is that our really high-powered agent types who are super dedicated to the community (e.g. lukeprog) get siphoned off into SI. We need another lukeprog or someone to be king of LW and deal with this kind of stuff.
Without a person in this king role, the community has to waste time and effort making community-meta threads like these. Communities and democratic methods suck at doing the kind of strategic, centralized, coherent decision making that we really need. Managing these problems really isn’t the community’s comparative advantage. If these problems were dealt with, it would be a lot easier to focus on intellectual productivity.
LW as a place to test applied moldbuggery, right?
I can’t speak for your democratic training, but my democratic training has absolutely no problem with acknowledging merits and giving active people trust proportional to their achievements and letting them decide what more should be done.
It has become somewhat fashionable here, in the Moldbuggian vein, to blame community failures on democracy. But what particular democratic mechanisms have caused the lack of strategic decisions on LW? Which kinds of decisions? I don’t see much democracy here—I don’t recall participating in an election, for example, or voting on a proposed policy, or seeing a heated political debate that prevented a beneficial resolution from being implemented. I recall the recent implementation of the karma penalty feature, which a lot of LWers were unhappy about but which was put in force nevertheless in a quite autocratic manner. So perhaps the lack of strategic decisions is caused by the fact that
there just aren’t people willing to even propose what should be done
nobody has any reasonable idea what strategic decision should be made (it is one thing to say what kind of decisions should be made—e.g. “we should choose an efficient site design”, but a rather different thing to make the decision in detail—e.g. “the front page should have a huge violet picture of a pony on it”)
people aren’t willing to work for free
None of those has much to do with democracy. I am pretty sure that if you volunteered to work on any of your suggestions (contacting meetup organisers, improving the site design...), nobody would seriously object and you would easily get some official status on LW (moderator style). To do anything from the examples you have mentioned, you wouldn’t need dictatorial powers.
The power of the banhammer is roughly proportional to the power of the dungeon. If it seems less threatening, it’s only because an online community is generally less important to people’s lives than society at large.
A bad king can absolutely destroy an online community. Banning all the good people is actually one of the better things a bad king can do, because it can spark an organized exodus, which is just inconvenient. But by adding restrictions and terrorizing the community with the threat of bans, a bad king can make the good people self-deport. And then the community can’t be revived elsewhere.
I admit, I have seen braindead moderators tear a community apart (/r/anarchism for one).
I have just as often seen lack of moderation prevent a community from becoming what it could. (4chan (though I’m unsure whether 4chan is glorious or a cesspool))
And I have seen strong moderation keep a community together.
The thing is, death by incompetent dictator is much more salient to our imaginations than death by slow entropy and september-effects. Incompetent dictators have a face, which makes us take that threat much more seriously than an unbiased assessment of the threats would warrant.
There’s a big difference between exile and prison, and the power of exile depends on the desirability of the place in question.
Why “king” rather than “monarch”? Couldn’t a queen do that?
Yes, and a queen could move more than one space in a turn, too.
For obvious decision theoretic reasons, a king is necessary. However, the king does not have to be a man.
Maybe “Princess” would be best, considering everything.
Hmmm... no. It definitely has to be a word that implies current authority, not future authority.
There is a particular princess in the local memespace with nigh-absolute current authority.
edited to clarify: by ‘local memespace’ I mean the part of the global memespace that is in use locally, not that there’s something we have going that isn’t known more broadly
I am getting this “whoosh” feeling but I still can’t see it.
If you image-search ‘obey princess’, you will get a hint. Note, the result is… an alicorn.
But more seriously (still not all that seriously), there would be colossal PR and communication disadvantages to naming a king, most of which would be dodged by naming a princess.
In particular, people would probably overinterpret king, but file princess under ‘wacky’. This would not merely dodge, but could help against the ‘cold and calculating’ vibe some people get.
Luke_A_Somers is referring to Princess Dumbledore, from Harry Potter and the Methods of Rationality, chapter 86.
I’d love to read that chapter!
(Almost certainly a reference to the animated series My Little Pony: Friendship Is Magic, in which Princess Celestia rules the land of Equestria.)
Let’s just say BDFL (Benevolent Dictator For Life)...
Insufficiently wacky - would invite accusations of authoritarianism/absolutism from the clue impaired.
“CEO” could work. I just like the word “king”. A queen would do just as well.
Now you’re just talking crazy.
The queen’s duty is to secure the royal succession!
The standard term is Benevolent Dictator for Life, and we already have one. What you’re asking for strikes me as more of a governor-general.
Our benevolent dictator isn’t doing much dictatoring. If I understand correctly that it’s EY, he has a lot more hats to wear, and doesn’t have the time to do LW-managing full time.
As with god: if we observe a lack of leadership, it is irrelevant whether we nominally have a god-emperor or not. The solution is always the same: build a new one that will actually do the job we want done.
Okay, that? That was one of the most awesome predicates of which I’ve ever been a subject.
You’re defending yourself against accusations of being a phyg leader over there, and over here you’re enjoying a comment that implies that either the commenter, or the people the commenter is addressing, perceive you as a god? And not only that, but this might even imply that you endorse the solution that is “always the same” of “building a new one (god-emperor)”.
Have you forgotten Luke’s efforts to fight the perceptions of SI’s arrogance?
That you appear to be encouraging a comment that uses the word god to refer to you in any way, directly or indirectly, is pretty disheartening.
I tend to see a fairly sharp distinction between negative aspects of phyg-leadership and the parts that seem like harmless fun, like having my own volcano island with a huge medieval castle, and sitting on a throne wearing a cape saying in dark tones, “IT IS NOT FOR YOU TO QUESTION MY FUN, MORTAL.” Ceteris paribus, I’d prefer that working environment if offered.
And how are people supposed to make the distinction between your fun and signs of pathological narcissism? You and I both know the world is full of irrationality, and that this place is public. You’ve endured the ravages of the hatchet job and Rationalwiki’s annoying behaviors. This comment could easily be interpreted by them as evidence that you really do fancy yourself a false prophet.
What’s more, I (as in someone who is not a heartless and self-interested reporter, who thinks you’re brilliant, who appreciates you, who is not some completely confused person with no serious interest in rationality) am now thinking:
How do I make the distinction between a guy who has an “arrogance problem” and has fun encouraging comments that imply that people think of him as a god vs. a guy with a serious issue?
Try working in system administration for a while. Some people will think you are a god; some people will think you are a naughty child who wants to be seen as a god; and some people will think you are a sweeper. Mostly you will feel like a sweeper … except occasionally when you save the world from sin, death, and hell.
I feel the same way as a web developer. One day I’m being told I’m a genius for suggesting that a technical problem might be solved by changing a port number. The next day, I’m writing a script to compensate for the incompetent failures of a certain vendor.
When people ask me for help, they assume I can fix anything. When they give me a project, they assume they know better how to do it.
The only way to decide whether someone has a serious issue is to read a bunch from them and then see which patterns you find.
I don’t see this as a particular problem in this instance. The responses are, if anything, an indication that he isn’t taking himself too seriously. The more pathologically narcissistic types tend to be more somber about their power and image.
No, if there were a problem here it would be that the joke was in poor taste. In particular, if there were those who had been given the impression that Eliezer’s power or narcissism really was corrupting his thinking. If he had begun to use his power arbitrarily on his own whim, or if his arrogance had left him incapable of receiving feedback or perceiving the consequences his actions have on others or even himself. Basically, jokes about how arrogant and narcissistic one is only work when people don’t perceive you as actually having problems in that regard. If you really do have real arrogance problems, then joking that you have them while completely not acknowledging the problem makes you look grossly out of touch and socially awkward.
For my part, however, I don’t have any direct problem with Eliezer appreciating this kind of reasoning. It does strike me as a tad naive of him and I do agree that it is the kind of thing that makes Luke’s job harder. Just… as far as PR missteps made by Eliezer this seems so utterly trivial as to be barely worth mentioning.
The way I make such distinctions is to basically ignore ‘superficial arrogance’. I look at the real symptoms. The ones that matter and have potential direct consequences. I look at their ability to comprehend the words of others—particularly those others without the power to ‘force’ them to update. I look at how much care they take in exercising whatever power they do have. I look at how confident they are in their beliefs and compare that to how often those beliefs are correct.
srsly, brah. I think you misunderstood me.
I was drawing an analogy to Epicurus on this issue because the structure of the situation is the same, not because anyone perceives (our glorious leader) EY as a god.
I bet he does endorse it. His life’s work is all about building a new god to replace the negligent or nonexistent one that let the world go to shit. I got the idea from him.
My response was more about what interpretations are possible than what interpretation I took.
Okay. There’s a peculiar habit in this place where people say things that can easily be interpreted as something that will draw persecution. Then I point it out, and nobody cares.
Okay. It probably seems kind of stupid that I failed to realize that. Is there a post that I should read?
This is concerning. My intuitions suggest that it’s not a big deal. I infer that you think it’s a big deal. Someone is miscalibrated.
Do you have a history with persecution that makes you more attuned to it? I am blissfully ignorant.
I don’t know if there’s an explicit post about it. I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I’ve forgotten.
I really like this nice, clear, direct observation.
Yes, but more relevantly, humanity has a history with persecution—lots of intelligent people and people who want to change the world from Socrates to Gandhi have been persecuted.
Here Eliezer is in a world full of Christians who believe that dreaded Satan is going to reincarnate soon, claim to be a God, promise to solve all the problems, and take over earth. Religious people have been known to become violent for religious reasons. Surely building an incarnation of Satan would, if that were their interpretation of it, qualify as more or less the ultimate reason to launch a religious war. These Christians outnumber Eliezer by a lot. And Eliezer, according to you, is talking about building WHAT?
My take on the “build a God-like AI” idea is that it is pretty crazy. I might like this idea less than the Christians probably do, seeing as I don’t have any sense that Jesus is going to come back and reconstruct us after it does its optimization...
I went out looking for myself and just watched the bloggingheads video (6:42) where Robert Wright says to Eliezer, “It sounds like what you’re saying is we need to build a God,” and Eliezer is like, “Why don’t we call it a very powerful optimizing agent?” and grins like he’s just fooled someone. Robert Wright thinks, and then he’s like, “Why don’t we call that a euphemism for God?” which destroys Eliezer’s grin.
If Eliezer’s intentions are to build a God, then he’s far less risk-averse than the type of person who would simply try to avoid being burned at the stake. In that case the problem isn’t that he makes himself look bad...
Like he’s just fooled someone? I see him talking like he’s patiently humoring an ignorant child who is struggling to distinguish between “Any person who gives presents at Christmas time” and “The literal freaking Santa Claus, complete with magical flying reindeer”. He isn’t acting like he has ‘fooled’ anyone or acting in any way ‘sneaky’.
While I wouldn’t have been grinning previously, whatever my expression had been, it would change in response to that question in the direction of irritation and impatience. The answer to “Why don’t we call that a euphemism for God?” is “Because that’d be wrong and totally muddled thinking”. When your mission is to create an actual very powerful optimization agent, and that—and not gods—is actually what you spend your time researching, then a very powerful optimization agent isn’t a ‘euphemism’ for anything. It’s the actual core goal. Maybe, at a stretch, “God” can be used as a euphemism for “very powerful optimizing agent”, but never the reverse.
I’m not commenting here on the question of whether there is a legitimate PR concern regarding people pattern-matching to religious themes and having dire, hysterical and murderous reactions. Let’s even assume that kind of PR concern is legitimate for the purpose of this comment. Even then, there is a distinct difference between “failure to successfully fool people” and “failure to educate fools”. It would be the latter task that Eliezer has failed at here, and the former charge would be invalid. (I felt the paragraph I quoted was unfair on Eliezer with respect to blurring that distinction.)
I don’t think that an AI that goes FOOM would be exactly the same as any of the “Gods” humanity has been envisioning, and it may not even resemble such a God (especially because, if it were a success, it would theoretically not behave in self-contradictory ways like making sinful people, knowing exactly what they’re going to do, making them do just that, telling them not to act like what they are, and then punishing them for behaving the way it designed them to). I don’t see a reason to believe that it is possible for any intellect to be omniscient, omnipotent or perfect. That includes an AI. These, to me, would be the main differences.
Robert Wright appears to be aware of this, as his specific wording was “It seems to me that in some sense what you’re saying is that we need to build a God.”
If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said “in some sense” which means that his question is about something deeper than choosing a word. The wording he’s using is asking something more like “Do you think we should build something similar to a God?”
The way that I interpret this question is not “What do we call this thing?” but more “You think we should build a WHAT?” with the connotations of “What are you thinking?” because the salient thing is that building something even remotely similar to a God would be very, very dangerous.
The reason I interpreted it this way is partly because instead of interpreting everything I hear literally, I will often interpret wording based on what’s salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked “Are you about to commit murder?” I would NOT accept “Why don’t we call it knife relocation?” as an acceptable answer.
Afterward, Robert Wright says that Eliezer is being euphemistic. This perception that Eliezer’s answer was an attempt to substitute nice sounding wording for something awful confirms, to me, that Robert’s intent was not to ask “What word should we use for this?” but was intended more like “You think we should build a WHAT? What are you thinking?”
Now, it could be argued that Eliezer accidentally failed to detect the salient connotation. It could be argued, and probably fairly effectively (against me anyway), that the reason for Eliezer’s mistake is that he was having one of his arrogant moments and genuinely thought that, because of a gigantic intelligence difference between Robert and himself, Robert was asking a moronic question based on the stupid perception that a super powerful AI would be exactly the same as a real God (whatever that means). In this case, I would classify that as a “social skills / character flaw induced faux pas”.
In my personal interpretation of Eliezer’s behavior, I’m giving him more credit than that—I am assuming that by that point (2010) he had already encountered people who flipped out about the possibility that he wants to build a God and who voiced valid and poignant concerns like “Why do you believe it is possible to succeed at controlling something a bazillion times smarter than you?” or “Why would you want us imperfect humans to make something so insanely powerful if it’s more or less guaranteed to be flawed?” I’m assuming that Eliezer interprets correctly when the salient part of someone’s question is not in its literal wording but in connotations relating to the situation.
This is why it looks, to me, like Eliezer’s intent was to brush him off by choosing to answer this question as if it were a question about what word to use and hoping that Robert didn’t have the nerve to go for the throat with valid and poignant questions like the examples above.
The topic of whether this was an unintentional faux pas or an intentional brush-off isn’t the most important thing here.
The most important questions, in my opinion, are:
“Does Eliezer intend to build something this powerful?”
“Does Eliezer really think that something a bazillion times as intelligent as himself can be controlled?”
“Do you and I agree/disagree that it’s a good idea to build something this powerful / that it can be controlled?”
If forced to use that term and answer the question as you ask it, with a “Yes” or “No”, then the correct answer would be “No”. He is not trying to create a God; he has done years of work working out what he is trying to create, and it is completely different to a God in nearly all features except “very powerful”. If you insist on that vocabulary you’re going to get “No, I don’t” as an answer. That the artificial intelligence Eliezer would want to create seems to Wright (and perhaps yourself) like it should be described as, considered a euphemism for, or reasoned about as if it is God is a feature of Wright’s lack of domain knowledge.
There is no disingenuity here. Eliezer can honestly say “We should create a very powerful (and carefully designed) optimizing agent” but he cannot honestly say “We should create a God”. (You may begin to understand some of the reasons why there is such a difference when you start considering questions like “Can it be controlled?”. Or at least when you start considering the answers to the same.) So Eliezer gave Wright the chance to get the answer he wanted (“Hell yes, I want to make a very powerful optimising agent!”) rather than the answer the question you suggest would have given him (“Hell no! Don’t create a God! That entails making at least two of the fundamental and critical ethical and practical blunders in FAI design that you probably aren’t able to comprehend yet!”)
I reject the analogy. Eliezer’s answer isn’t like the knife relocation answer. (If anything the connotations are the reverse. More transparency and candidness rather than less.)
It could be that there really is an overwhelming difference in crystallized intelligence between Eliezer and Robert. The question—at least relative to Eliezer’s standards—was moronic. Or at least had connotations of ignorance of salient features of the landscape.
There may be a social-skills-related faux pas here—and it is one where it is usually socially appropriate to say wrong things within an entirely muddled model of reality rather than educate the people you are speaking to. Maybe that means that Eliezer shouldn’t talk to people like Robert. Perhaps he should get someone trained explicitly in spinning webs of eloquent bullshit to optimally communicate with the uneducated. However, the character flaws that I take it you are referring to (Eliezer’s arrogance and so forth) just aren’t at play here.
The net amount of credit given is low. You are ascribing a certain intention to Eliezer’s actions where that intention is clearly not achieved. “I infer he is trying to do X and he in fact fails to do X.” In such cases generosity suggests that if they don’t seem to be achieving X, haven’t said X is what they are trying to achieve, and X is inherently lacking in virtue, then by golly maybe they were in fact trying to achieve Y! (Eliezer really isn’t likely to be that actively incompetent at deviousness.)
You assign a high likelihood to people flipping out (and even persecuting Eliezer) in such a way. Nyan considers it less likely. It may be that Eliezer doesn’t have people (and particularly people of Robert Wright’s intellectual caliber) flip out at him like that.
The kind of people for whom there is even a remote possibility that it would be useful to attempt to explain the answers to such questions are also the kind of people who are capable of asking them without insisting on asking, and then belligerently emphasizing, wrong questions about ‘God’. This is particularly the case with the first of those questions, where the question of ‘controlling’ only comes up because of an intuitive misunderstanding of how one would relate to such an agent—i.e. thinking of it as a “God”, which is something we already intuit as “like a human or mammal but way powerful”.
If he can prove safety mathematically then yes, he does.
At around the time I visited Berkeley there was a jest among some of the SingInst folks “We’re thinking of renaming ourselves from The Singularity Institute For Artificial Intelligence to The Singularity Institute For Or Against Artificial Intelligence Depending On What Seems To Be The Best Altruistic Approach All Things Considered”.
There are risks to creating something this powerful, and, in fact, the goal of Eliezer and SIAI isn’t “research AGI”… plenty of researchers work on that. They are focused on Friendliness. Essentially… they are focused on the very dangers that you describe here and are dedicating themselves to combating those dangers.
Note that it is impossible to evaluate a decision to take an action without considering what alternative choice there is. Choosing to dedicate one’s efforts to developing an FAI (“safe and desirable very powerful optimizing agent”) has a very different meaning if the alternative is millennia of peace and tranquility than the same decision to work on FAI if the alternative is “someone is going to create a very powerful optimizing agent anyway but not bother with rigorous safety research”.
If you’re planning to try to control the super-intelligence you have already lost. The task is of selecting from the space of all possible mind designs a mind that will do things that you want done.
Estimate: Slightly disagree. The biggest differences in perception may be surrounding what the consequences of inaction are.
Estimate: Disagree significantly. I believe your understanding of likely superintelligence behavior and self development has too much of an anthropocentric bias. Your anticipations are (in my estimation) strongly influenced by how ethical, intellectual and personal development works in gifted humans.
The above disagreement actually doesn’t necessarily change the overall risk assessment. I just expect the specific technical problems that need to be overcome in order to prevent “Super-intelligent rocks fall! Everybody dies.” to be slightly different in nature. Probably with more emphasis on abstract mathematical concerns.
Thank you. I will try to do more of that.
Interesting. Religious people seem a lot less scary to me than this. My impression is that the teeth have been taken out of traditional christianity. There are a few christian terrorists left in north america, but they seem like holdouts raging bitterly against the death of their religion. They are still in the majority in some places, though, and can persecute people there.
I don’t think that the remains of theistic christianity could reach an effective military/propaganda arm all the way to Berkeley even if they did somehow misinterpret FAI as an assault on God.
Nontheistic christianity, which is the ruling religion right now, could flex enough military might to shut down SI, but I can’t think of any way to make them care.
I live in Vancouver, where as far as I can tell, most people are either non-religious, or very tolerant. This may affect my perceptions.
This is a good reaction. It is good to take seriously the threat that an AI could pose. However, the point of Friendly AI is to prevent all that and make sure that if it happens, it is something we would want.
:) You can be as direct as you want to with me. (Normal smilie to prevent the tiny sad moments.)
Okay, good point. I agree that religion is losing ground. However, I’ve witnessed some pretty creepy stuff coming out of the churches. Some of them are saying the end is near and doing things like having events to educate about it. Now, that experience was one that I had in a particular location which happens to be very religious. I’m not sure that it was representative of what the churches are up to in general. I admit ignorance when it comes to what average churches are doing. But if there’s enough end-times kindling being thrown into the pit here, people who were previously losing faith may flare up into zealous Christians with the right spark. Trying to build what might be interpreted as an Antichrist would be quite the spark. The imminent arrival of an Antichrist may be seen as a fulfillment of the end times prophecies and be seen as a sign that the Christian religion really is true after all.
A lot is at stake here in the mind of the Christian. If it’s not the end of the world, opposing a machine “God” is still going to look like a good idea—it’s dangerous. If it is the end of the world, they’d better get their s—in gear and become all super-religious and go to battle against Satan because judgment day is coming and if they don’t, they’re going to be condemned. Being grateful to God and following a bunch of rules is pretty hard, especially when you can’t actually SEE the God in question. How people are responding to the mundane religious stuff shouldn’t be seen as a sign of how they’ll react when something exceptional happens.
Being terrified out of your mind that someone is building a super-intelligent mind is easy. This takes no effort at all. Heck, at least half of LessWrong would probably be terrified in this case. Being extra terrified because of end times prophecies doesn’t take any thought or effort. And fear will kill their minds, perhaps making religious feelings more likely. That, to me, seems to be a likely possibility in the event that someone attempts to build a machine “God”. You’re seeing a decline in religion and appear to be thinking that it’s going to continue decreasing. I see a decline in religion and I think it may decrease but also see the potential for the right kinds of things to trigger a conflagration of religious fervor.
There are other memes that add an interesting twist: The bible told them that a lot of people would lose faith before the Antichrist comes. Their own lack of faith might be taken as evidence that the bible is correct.
And I have to wonder how Christianity survived things like the plagues that wiped out half of Europe. They must have been pretty disenchanted with God—unless they interpreted it as the end of the world and became too terrified of eternal condemnation to question why God would allow such horrible things to happen.
Perhaps one of the ways the Christianity meme defends itself is to flood the minds of the religious with fear at the exact moments in history when they would have the most reason to question their faith.
Last year’s Gallup poll says that 78% of Americans are Christian. Even if they’ve lost some steam, if the majority still uses that word to self-identify, we should really acknowledge the possibility that some event could trigger zealous reactions.
I have been told that before Hitler came to power, the intelligentsia of Germany was laughing at him thinking it would never happen. It’s a common flaw of nerds to underestimate the violence and irrationality that the average person is capable of. I think this is because we use ourselves as a model and think they’ll behave, feel and think a lot more like we do than they actually will. I try to compensate for this bias as much as possible.
BTW, where I am (i.e. among twentysomething university students in central Italy) atheists take the piss out of believers waaaaay more often than the other way round.
I’m not sure I’ve heard any detailed analysis of the Friendly AI project specifically in those terms—at least not any that I felt was worth my time to read—but it’s a common trope of commentary on Singularitarianism in general.
No less mainstream a work than Deus Ex, for example, quotes Voltaire’s famous “if God did not exist, it would be necessary to create him” in one of its endings—which revolves around granting a friendly (but probably not Friendly) AI control over the world’s computer networks.
ROT-13:
Vagrerfgvatyl, va gur raqvat Abeantrfg ersref gb, Uryvbf (na NV) pubbfrf gb hfr W.P. Qragba (gur cebgntbavfg jub fgvyy unf zbfgyl-uhzna cersreraprf) nf vachg sbe n PRI-yvxr cebprff orsber sbbzvat naq znxvat vgfrys (gur zretrq NV naq anab-nhtzragrq uhzna) cuvybfbcure-xvat bs gur jbeyq va beqre gb orggre shysvyy vgf bevtvany checbfr.
I have to agree with Eliezer here: this is a terrible standard for evaluating phygishness. Simply put, enjoying that kind of comment does not correlate at all with the harmful features of phygish organizations, social clubs, etc. There are plenty of Internet projects that refer to their most prominent leaders with such titles as God-King, “benevolent dictator” and the like; it has no implication at all.
You have more faith than I do that it will not be intentionally or unintentionally misinterpreted.
Also, I am interpreting at that comment within the context of other things. The “arrogance problem” thread, the b - - - - - - k, Eliezer’s dating profile, etc.
What’s not clear is whether you or I are being more realistic about how people are likely to interpret it: not only in a superficial context (like some hatchet-jobbing reporter who knows only some LW gossip), but with no context, or within the context of other things with a similar theme.
Why would you believe that something is always the solution when you already have evidence that it doesn’t always work?
Let’s go to the object level: in the case of God, the fact that god is doing nothing is not evidence that Friendly AI won’t work.
In the case of EY the supposed benevolent dictator, the fact that he is not doing any benevolent dictatoring is explained by the fact that he has many other things that are more important. That prevents us from learning anything about the general effectiveness of benevolent dictators, and we have to rely on the prior belief that it works quite well.
There are alternatives to monarchy, and an example of a disappointing monarch should suggest that alternatives might be worth considering, or at the very least that appointing a monarch isn’t invariably the answer. That was my only point.
I don’t think a CEO level monarch is necessary though I don’t know what job title a community “gardener” would map to. Do you think a female web developer who obviously cares a lot about LW and can implement solutions would be a good choice?
This doesn’t look like it’s very likely to happen though, considering that they’re changing focus:
For 12 years we’ve largely focused on movement-building through the Singularity Summit, Less Wrong, and other programs… But in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research.
Then again maybe CFAR will want to do something.
I think you meant to use a different hyperlink?
It has been fixed. Thanks, Curiouskid!
In general, the kinds of people that (strongly) hint that they should have power should...not...ever....have...power.
Female doesn’t matter, web development is good for being able to actually write what needs to be written. Caring is really good. The most important factor though is willingness to be audacious, grab power, and make things happen for the better.
Whether or not we need someone with CEO-power is uninteresting. I think such a person having more power is good.
If you’re talking about yourself, go for it. Get a foot in the code, make the front page better, be audacious. Make this place awesome.
I’ve said before in the generic, but in this case we can be specific: If you declare yourself king, I’ll kneel.
(good luck)
I’m opposed to appointing her as any sort of actual-power-having-person. Epiphany is a relative newcomer who makes a lot of missteps.
I agree that appointing her would be a bad idea.
I see no problem with encouraging people (in this case, her) to become the kind of person we should appoint.
The personal antipathy there has been distinctly evident to any onlookers who are mildly curious about how status and power tends to influence human behavior and thought.
I think anyone with any noticeable antipathy between them and any regular user should not have unilateral policymaking power, except Eliezer if applicable because he was here first. (This rules me out too. I have mod power, but not mod initiative—I cannot make policy.)
I agree and note that it is even more important that people with personal conflicts don’t have the power (or, preferably, voluntarily waive the power) to actively take specific actions against their personal enemies.
(Mind you, the parent also seems somewhat out of place in the context and very nearly comical given the actual history of power abuses on this site.)
Well I do have the audacity.
I would love to do that, but I’ve just gotten a volunteer offer for a much larger project I had an idea for. I had been hoping to do a few smaller projects on LW in the meantime, while I was putting some things together to launch my larger projects, and the timing seems to have worked out such that I will be doing the small projects while doing the big projects. In other words, my free time is projected to become super scarce.
However, if a job offer were presented to me from LessWrong / CFAR I would seriously consider it.
I don’t believe in this. I am with Eliezer on sentiments like the following:
In Two More Things to Unlearn from School he warns his readers that “It may be dangerous to present people with a giant mass of authoritative knowledge, especially if it is actually true. It may damage their skepticism.”
In Cached Thoughts he tells you to question what HE says. “Now that you’ve read this blog post, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you’ll think, “Cached thoughts.” My belief is now there in your mind, waiting to complete the pattern. But is it true? Don’t let your mind complete the pattern! Think!”
But thank you. (:
grumble grumble. Like I said, everyone who could is doing something else. Me too.
I don’t think they’ll take the initiative on this. Maybe you approach them?
I don’t see how those relate.
Thank you for giving a shit about LW, and trying to do something good. I see that you’re actively engaging in the discussions in this thread and that’s good. So thanks.
Yeah. Well maybe a few of us will throw a few things at it and that’ll keep it going...
I mentioned a couple times that I’m dying to have online rationality training materials and that I want them badly enough I am half ready to run off and make them myself. I said something like “I’d consider doing this for free or giving you a good deal on freelance depending on project size”. Nobody responded.
Simply put: I’m not the type that wants obedience. I’m the type that wants people to think for themselves.
Aww. I think that’s the first time I’ve felt appreciated for addressing endless September. (: feels warm and fuzzy
Please allow me to change your mind. I am not the type who likes obedience either. I agree that thinking for ourselves is good, and that we should encourage as much of it as possible. However, this does not negate the usefulness of authority:
Argument 1:
Life is big. Bigger than the human mind can reasonably handle. I only have so much attention to distribute around. Say I’m a meetup participant. I could devote some attention to monitoring LW, the mailing list, etc. until a meetup was posted, then overcome the activation energy to actually go. Or, the meetup organizer could mail me and say “Hi Nyan, come to Xday’s meetup”, and then I just have to go. I don’t have to spend as much attention in the second case, so I have more to spend on thinking-for-myself that matters, like figuring out whether the mainstream assumptions about glass are correct.
So in that way, having someone to tell me what to think and do reduces the effort I have to spend on those things, and makes me more effective at the stuff I really care about. So I actually prefer it.
Argument 2:
Even if I had infinite capacity for thinking for myself and going my own way, sometimes it just isn’t the right tool for the job. Thinking for myself doesn’t let me coordinate with other people, or fit into larger projects, or affect how LW works, or many other things. If I instead listen to some central coordinator, those things become easy.
So even if I’m a big fan of self-sufficiency and skepticism, I appreciate authority where available. Does this make sense?
Perhaps we should continue this conversation somewhere more private… /sleaze
PM me if you want to continue this thread.
Well that is interesting and unexpected.
This seems to be more of a matter of notification strategies—one where you have to check a “calendar” and one where the “calendar” comes to you. I am pattern-matching the concept “reminder” here. It seems to me that reminders, although important and possibly completely necessary for running a functional group, would be more along the lines of a behavioral detail as opposed to a fundamental leadership quality. I don’t know why you’re likening this to obedience.
We do not have infinite capacity for critical thinking. True. I don’t call trusting other people’s opinions obedience. I call it trust. That is rare for me. Very rare for anything important. Next door to trust is what I do when I’m short on time or don’t have the energy: I half-ass it. I grab someone’s opinion, go “Meh, 70% chance they’re right?” and slap it in.
I don’t call that obedience, either.
I call it being overwhelmingly busy.
Organizing trivial details is something I call organizing. I don’t call it obedience.
When I think of obedience I think of that damned nuisance demand that punishes me for being right. This is not because I am constantly right—I’m wrong often enough. I have observed, though, that some people are more interested in having power than in wielding it meaningfully. They don’t listen, and they use power as a way to avoid updating (leading them to be wrong frequently). They demand this thing “obedience”, and that seems to be a warning that they are about to act as if might makes right.
My idea of leadership looks like this:
If you want something new to happen, do it first. When everyone else sees that you haven’t been reduced to a pile of human rubble by the new experience, they’ll decide the “guinea pig” has tested it well enough that they’re willing to try it, too.
If you really want something to get done, do it your damn self. Don’t wait around for someone else to do it, nag others, etc.
If you want others to behave, behave well first. After you have shown a good intent toward them, invite them to behave well, too. Respect them and they will usually respect you.
If there’s a difficulty, figure out how to solve it.
Give people something they want repeatedly and they come back for it.
If people are grateful for your work, they reciprocate by volunteering to help or donating to keep it going.
To me, that’s the correct way of going about it. Using force (which I associate with obedience) or expecting people not to have thoughts of their own is not only completely unnecessary but pales in comparison effectiveness-wise.
Maybe my ideas about obedience are completely orthogonal to yours. If you still think obedience has some value I am unaware of, I’m curious about it.
Thank you for your interest. It feels good.
I have a romantic interest right now; although we have not officially deemed our status a “relationship”, we are considering one another as potential serious partners.
This came to both of us as a surprise. I had burned out on dating and deleted my dating profile. I was like:
insane amount of dating alienation * ice cube’s chance of finding compatible partner > benefits of romance
(Narratives by LW Women thread if you want more)
And so now we’re like … wow this amount of compatibility is special. We should not waste the momentum by getting distracted by other people. So we decided that in order to let the opportunity unfold naturally, we would avoid pursuing other serious romantic interests for now.
So although I am technically available, my expected behavior, considering how busy I am, would probably be best classified as “dance card full”.
We seem to have different connotations on “obedience”, and might be talking about slightly different concepts. Your observations about how most people use power, and the bad kind of obedience, are spot-on.
The topic came up because of the “I’d kneel to anyone who declared themselves king” thing. I don’t think such a behaviour pattern has to turn into bad, power-abusing obedience and submission. I think it’s just a really strategically useful thing to support someone who is going to act as the group-agency. You seem to agree on the important stuff and we’re just using different words. Case closed?
lol what? One of us has utterly misunderstood something, because I’m utterly confused. I made a mock-sleazy joke about the goddam troll toll, and suggested that we wouldn’t have to pay it but could still discuss if we PMed instead. And then suddenly this romantic thing. OhgodwhathaveIdone.
That’s good. :)
Yeah I think the main difference may be that I am very wary of power abuse, so I avoid using terms like “obedience” and “kneeling” and “king” and choose other terms that imply a situation where power is balanced.
Sorry, I think I must have misread that. I’ve been having problems sleeping lately. If you want to talk in PM to avoid the troll toll go ahead.
Well not anymore. laughs at self