A number of people seem to have departed OpenAI at around the same time as you. Is there a particular reason for that which you can share? Do you still think that people interested in alignment research should apply to work at OpenAI?
In the case of reducing mutational load to near zero, you might be doing targeted changes to huge numbers of genes. There is presumably some point at which it’s easier to create a genome from scratch.
I agree it’s an open question though!
An alternative to editing many genes individually is to synthesise the whole genome from scratch, which is plausibly cheaper and more accurate.
I would find this more useful if you spelled out a bit more about your scoring method. You say:
"They must be loyal, intelligent, and hardworking, they must have a sense of dignity, they must like humans, and above all they must be healthy."
Which of these do you think are the most important? Why do these traits matter? (for example, hardworking dogs are not really necessary in the modern world)
And why these traits and not others? (for example: size, cleanliness, appearance, getting along with other animals)
"a dog which is as close to being a wolf as one can get without sacrificing any of those essential characteristics which define a dog as such"
Why do you think a dog that is close to a wolf is objectively better than dogs which are further away?
"OpenPhil gave Carl Shulman $5m to re-grant"
I didn’t realise this was happening. Is there somewhere we can read about grants from this fund when/if they occur?
Would this approach have any advantages over brain uploading? I would assume brain uploading is much easier than running a realistic evolution simulation, and that we would have to worry less about alignment.
I filled in the survey! Like many people I didn’t have a ruler to use for the digit ratio question.
Also, I'm torn about how to interpret Snape's last question. My first thought was that he was verifying the truth of a story he had been told ("Your master tortured her, now join the light side already!" being the most likely), but on rereading I wonder if he was worried that she had been used as Horcrux fuel.
Or he could have been verifying a deal he made with Voldemort, though that might not make as much sense given Snape's character.
Slightly off topic, but I'm very interested in the "policy impact" that FHI has had. I had heard nothing about it before and assumed it wasn't having much. Do you have more information on that? If the impact were significant, it would increase the odds that giving to FHI is a great option.
Possible consideration: meta-charities like GWWC and 80k direct donations to causes that one might not think are particularly important. For example, I think x-risk research is the highest-value intervention, but most of the money moved by GWWC and 80k goes to global poverty or animal welfare interventions. So if the proportion of money moved to the causes I care about were small enough, or the meta-charity didn't multiply my money much anyway, then I should give directly (or start a new meta-charity in the area I care about).
A bigger possible problem arises if I take considerations like the poor meat-eater problem to be true. In that case, donating to, say, 80k would cause a lot of harm even though it moves a lot of money to animal welfare charities, because it also sends so much to poverty relief, which I would then consider a bad thing. There are probably a few other situations like this around.
Do you have figures on the return to donations (or volunteer time) for 80,000 Hours? That is, is it similar to GWWC's $138 of donations moved per $1 of time invested? It would be helpful to know, so that I could calculate how much I would expect to go to each cause.
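As a rough sketch of the kind of calculation I have in mind (the multiplier and cause-share numbers below are placeholders made up for illustration, not GWWC's or 80k's actual figures):

```python
# Back-of-the-envelope sketch: how much money a donation to a meta-charity
# moves to each cause. All numbers are hypothetical placeholders.

donation = 1_000                  # dollars given to the meta-charity
multiplier = 138                  # dollars moved per dollar donated (illustrative)
cause_shares = {                  # fraction of moved money per cause (illustrative)
    "global poverty": 0.70,
    "animal welfare": 0.25,
    "x-risk": 0.05,
}

moved = donation * multiplier
for cause, share in cause_shares.items():
    print(f"{cause}: ${moved * share:,.0f}")

# Giving directly to a cause beats the meta-charity (for that cause alone)
# roughly when multiplier * share < 1.
print("Direct x-risk giving better?", multiplier * cause_shares["x-risk"] < 1)
```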
Something on singletons: desirability, plausibility, paths to various kinds (strongly relates to stable attractors)
“Hell Futures—When is it better to be extinct?” (not entirely serious)
Recommendations are up!
Maybe some kinds of ems could tell us how likely Oracle/AI-in-a-box scenarios are to succeed? We could see whether ems of very intelligent people, run at very high speeds, could convince a dedicated gatekeeper to let them out of the box. That would at least give us some mild evidence for or against boxed AIs being feasible.
And maybe we could use certain ems as gatekeepers: the AI would no longer have a speed advantage, and we could try altering the em to make it less likely to let the AI out.
Minor bad incidents involving ems might make people more cautious about full-blown AGI (unlikely, but I might as well mention it).
I was the one who asked that question!
I was slightly disappointed by his answer—surely there can only be one optimal charity to give to? The only donation strategy he recommended was giving to whichever one was about to go under.
I guess what I’m really thinking is that it’s pretty unlikely that the two charities are equally optimal.
Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks to SL0s in such a short space, maybe without mentioning exotic technologies? And would they change their charitable behavior?
I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).
I thought this article was aimed at SL0 people; that would give it the widest possible audience, which I assumed was the point?
If it's aimed at SL0s, then we'd want to go for an SL1 image.
Whilst I really, really like the last picture, it seems a little odd to include it in the article.
Isn't this meant to read as a hard-nosed introduction for people outside the transhumanist/sci-fi crowd? And doesn't the picture work against that by being slightly sci-fi and weird?
"Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about."
I view this as one of the strongest arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks, aside from a few comments by Carl Shulman on Katja's blog.
I suspect the answer has something to do with anthropics, but I'm not really certain what it is.
I found it really helpful to have a list of places where Eliezer and Paul agree. It's interesting to see how much they agree on big-picture issues, like AI being extremely dangerous.