What do you mean the leadership is shared? That seems much less true now Effective Ventures have started spinning off their orgs. It seems like the funding is still largely shared, but that’s a different claim.
Arepo
[Question] Who created the Less Wrong Gather Town?
I would also be interested to see this. Also, could you clarify:
I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
Are you talking here about ‘the extended EA-Alignment ecosystem’, or do you mean you’ve aimed at getting the global poverty/animal welfare/other non-AI-related EA community to shut down or disappear?
Two tools for rethinking existential risk
Another effect I’m very concerned about is the unseen effect on the funding landscape. For all that EVF organisations are said to be financially independent, none of them seems to have had any issue getting funding, primarily from Open Phil (generally offering market-rate or better salaries and in some cases spending millions of dollars on marketing alone), while many other EA orgs—and, contra the OP, there are many more* outside the EVF/RP net than within—have struggled to get enough money to pay a couple of staff a living wage.
* That list excludes regional EA subgroups, of which there are dozens, and of which there would no doubt be more if a small amount of funding were available.
It doesn’t matter whether you’d have been hypothetically willing to do something for them. As I said on the Facebook thread, you did not consult with them. You merely informed them they were in a game, which, given the social criticism Chris has received, had real world consequences if they misplayed. In other words, you put them in harm’s way without their consent. That is not a good way to build trust.
The downvotes on this comment seem ridiculous to me. If I email 270 people to tell them I’ve carefully selected them for some process, I cannot seriously presume they will give up any of their time to take part in it.
Any such sacrifice they make is a bonus, so if they do give up some time, it’s absurd to ask that they give up even more time to research the issue.
Any negative consequences are on the person who set up the game. Adding the justification that ‘I trust you’ does not suddenly make the recipient more obligated to the spammer.
My impression is that many similar projects are share houses or other flat hierarchies. IMO a big advantage of the model here is a top-down approach, where the trustees/manager view it as a major part of our job to limit and mitigate interpersonal conflicts, zero sum status games etc.
Whatever you call it, they’ve got to identify some alternative, even if only tacitly by following some approximation of it in their daily life.
I would like to write an essay about that eventually, but I figured persuading PUs of the merits of HU was lower hanging fruit.
For what it’s worth, I have a lot of sympathy with your scepticism—I would rather (and believe it possible to) build a system resembling ethics up without reference to normativity, ‘oughts’, or any of their associated baggage. I think the trick will be to properly understand the overlap of ethics and epistemology, both of which are subject to similar questions (how do we non question-beggingly ‘ground’ ‘factual’ questions?), but the former of whose questions people disproportionately emphasise.
[ETA] It’s also hard to pin down what the null hypothesis would be. Calling it ‘nihilism’ of any kind is just defining the problem away. For eg, if you just decide you want to do something nice for your friend—in the sense of something beneficial for her, rather than just picking an act that will give you warm fuzzies—then your presumption of what category of things would be ‘nice for her’ implicitly judges how to group states of the world. If you also feel like some things you might do would be nicer for her than others, then you’re judging how to order states of the world.
This already has the makings of a ‘moral system’, even though there’s not a ‘thou shalt’ in sight. If you further think that how she’ll react to whatever you do for her can corroborate/refute your judgement of what things are nice(r than others) for her, your system seems to have, if not a ‘realist’ element, at least a non purely antirealist/subjectivist one. It’s not utilitarianism (yet), but it seems to be heading in that sort of direction.
How do we know EY isn’t doing the same?
‘A charity that very efficiently promoted beauty and justice’ would still be a utilitarian charity (where the form of util defined utility as beauty and justice), so if that’s not EA, then EA does not = utilitarianism, QED.
Also, as Ben Todd and others have frequently pointed out, many non-utilitarian ethics subsume the value of happiness. A deontologist might want more happiness and less suffering, but feel that he also had a personal injunction against violating certain moral rules. So long as he didn’t violate those codes, he might well want to maximise efficient use of welfare.
I’d guess these effects are largely not causation, but correlation caused by conscientiousness/ambition causing both double majors and higher earnings.
Unless you’re certain of this or have some reason to suspect a factor pulling in the other direction, this still seems to suggest higher expectation from doing a double major.
Written a full response to your comments on Felicifia (I’m not going to discuss this in three different venues), but...
your opponent’s true rejection seems to be “cryonics does not work”
This sort of groundless speculation about my beliefs (and its subsequent upvoting success), a) in a thread where I’ve said nothing about them, b) where I’ve made no arguments to whose soundness the eventual success/failure of cryo would be at all relevant, and c) where the speculator has made remarks that demonstrate he hasn’t even read the arguments he’s dismissing (among other things a reductio ad absurdum to an ‘absurd’ conclusion which I’ve already shown I hold), does not make me more confident that the atmosphere on this site supports proper scepticism.
Ie you’re projecting.
Assuming you accept the reasoning, 90% seems quite generous to me. What percentage of complex computer programmes when run for the first time exhibit behaviour the programmers hadn’t anticipated? I don’t have much of an idea, but my guess would be close to 100. If so, the question is how likely unexpected behaviour is to be fatal. For any programme that will eventually gain access to the world at large and quickly become AI++, that seems (again, no data to back this up—just an intuitive guess) pretty likely, perhaps almost certain.
For any parameter of human comfort (eg 293 kelvin, 60% water, 40-hour working weeks), a misplaced decimal point seems like it would destroy the economy at best and life on earth at worst.
If Holden’s criticism is appropriate, the best response might be to look for other options rather than making a doomed effort to make FAI – for example trying to prevent the development of AI anywhere on earth, at least until we can self-improve enough to keep up with it. That might have a low probability of success, but if FAI has sufficiently low probability, it would still seem like a better bet.
Seems like a decent reply overall, but I found the fourth point very unconvincing. Holden has said ‘what he knows now’ - to wit, that the world’s best experts would normally test a complicated programme by running it, isolating what (inevitably) went wrong by examining the results it produced, rewriting it, then doing it again.
Almost no programmes are glitch free, so this is at best an optimization process and one which—as Holden pointed out—you can’t do with this type of AI. If (/when) it goes wrong the first time, you don’t get a second chance. Eliezer’s reply doesn’t seem to address this stark difference between what experts have been achieving and what SIAI is asking them to achieve.
Hm. Interesting piece. I’m partially sold, but not on this: ‘Further, I see little difference between how a Muslim “chooses” to get upset at disrespect to Mohammed, and how a Westerner might “choose” to get upset if you called eir mother a whore.’
I’m pretty content to call that a sort of choice, especially if you make it a fair comparison, ie a general remark not victimising one person that all mothers are whores. After all, there’s still a pretty big difference between that (or even the rather more inflammatory ‘all Western mothers are whores’), and (a sincerely offensive) ‘your mother is a whore’. One is basically bullying someone, assuming they’re not in a position to hurt you back equally; the other is the sort of casual prejudice that (cough) some of us discourage but don’t actually seek to ban.
On top of that, there’s a significant difference between drawing a picture of someone and drawing a picture of someone in a way calculated to piss people who like them off. In the Muhammad cartoons furore, it initially seemed to be Muslims who were trying to elide the difference – specifically by positioning the latter as very bad and the first as (almost) equally bad. If drawing the former is a political action against such a sentiment (or just an aesthetic statement, standing against those who’d repress a portrayal of something they thought was beautiful), then I hardly think it’s a reprehensible one. Here I think actual ‘whores’ - or rather porn stars—give a better analogy. Their portrayals offend a lot of people, but few sensible people think there’s a good argument for banning them a) because overturning our anti-censorship sentiments should require a pretty strong burden of evidence and b) because a lot of people very much like them, and why should they be deprived? After all, the naysayers choose not to look at something that exists, but the fans can’t do the reverse.
Lastly, (and leastly), there’s the question of accuracy of the original criticism. If your mother does sell herself for money, then, while victimising you for it is still pretty unpleasant, we would be more inclined to tolerate borderline cases of people pointing it out in a potentially offensive way than if it weren’t true. But most of the times when someone’s mum is aggressively called a whore, she probably isn’t. On the other hand, by most accounts Muhammad was a brutal sex pest, who most likely would have ordered suicide bombings had the technology existed for him to do so.
I don’t know how relevant improv is to Less Wrongers, but I find it helpful for everyday social interactions, so:
Primary recommendation: Salinsky & Frances-White’s The Improv Handbook.
Reason: It’s one of the only improv books which actually suggests physical strategies for you to try out that might improve your ability, rather than referring to concepts for which the author has a pet phrase that they use as a substitute for explaining what they mean. Not all of the suggestions worked for me, and they’re based primarily on anecdotal evidence (plus the selection effect of the authors having run a reasonably successful improv group in the hostile London climate and only then written a book), but I know of no other book that has as constructive an approach. It also has a number of interview sections and similar, which are eminently skippable – only half the book is really worth reading for performance advice, but fortunately the table of contents makes it pretty clear which half that is.
I’m recommending it over Keith Johnstone’s ‘Impro’ and ‘Impro for Storytellers’, whose ideas it incorporates, breaks down and structures far better; over Chris Johnston’s ‘The Improvisation Game’, which is an awful mishmash of interviews and turgid academic writing; over Charna Halpern’s ‘Truth in Comedy’, which has quite a different set of ideas but spends more time boasting about how good they are than explaining them; over Jimmy Carrane and Liz Allen’s ‘Improvising Better’, which has a few nice tips and is mercifully short, but doesn’t have anything close to a coherent set of principles; over ‘The Improvisation Book’, which I haven’t read in depth but seems to be little more than a list of games; and over Dan Patterson and Mark Leveson’s ‘Whose Line Is It Anyway?’, which unsurprisingly is very heavily focused on emulating the restrictive format of the show of the same name.
Secondary recommendation: Mick Napier’s Improvise, which comes from a different school of thought to TIH’s – the same one as ‘Truth in Comedy’.
Reason: It’s the only one of those I’ve mentioned (TIH included) to explicitly suggest scientific reasoning in developing and assessing improv methods. That said, after the author’s initial proclamation to that effect, he doesn’t really communicate how he’s tried to do so, and his advice seems to assume you’re already quite comfortable being in an unspecified scene with no preset rules (one of the hardest situations for an improviser to find himself in, IME), so I wouldn’t recommend it as a beginner’s guide.
How does one create an open thread? The only options I had available were this and comments. Is it something you need minimum karma for?
Did the ringing go away over time, or was it permanent?