New Q&A by Nick Bostrom
Underground Q&A session with Nick Bostrom (http://www.nickbostrom.com) on existential risks and artificial intelligence with the Oxford Transhumanists (recorded 10 October 2011).
Transcribing.
Will more people please vote this up? I think we should be strongly reinforcing this kind of behavior.
Thanks for the karma, everyone. :)
I’m just about done, and I expect it to be up tonight or tomorrow (depending on whether I work on it more tonight, or watch some Terra Nova and Glee instead, lol).
It’s pretty big, so I’ll be posting it as a new discussion post, rather than as a comment under this one.
Oh good. Another chance to give you more karma for this.
I’m struck by the irony of a human manually transcribing a talk that focuses extensively on the dangers of AI.
Not ironic. More… appropriate.
Great work.
Oh, excellent. The video is hard to understand.
(For reference: the transcription of the video is posted here.)
Wonderful; the video was especially hard to understand at the start.
Pleased to see that when asked about the relationship of FHI and SIAI, Nick gives the same answer I did.
I was the one who asked that question!
I was slightly disappointed by his answer—surely there can only be one optimal charity to give to? The only donation strategy he recommended was giving to whichever one was about to go under.
I guess what I’m really thinking is that it’s pretty unlikely that the two charities are equally optimal.
It seems that argument applies primarily to well-defined goals. Do you necessarily have to view the SI and FHI as two charities? The SI is currently pursuing a wide range of sub-goals, e.g. rationality camps. I perceive the FHI to be mainly about researching existential risks in general. Clearly you should do your own research, decide which x-risk is the most urgent one, and then support its mitigation. Yet you should also reassess your decision from time to time. And here I think it might be justified to contribute part of your money to the FHI. By doing so you can externalize the review of existential risks: you concentrate most of your effort on the risk that the FHI deems most urgent, until it revises its opinion.
In other words, view the SI and FHI as one charity with different departments and your ability to contribute separately as a way to weight different sub-goals aimed at the same overall big problem, saving humanity.
Isn’t it a bit of a conflict of interest to have a charity with two departments, one of which is responsible for deciding if the other one is the best charity to donate money to?
Four!
“I’d rather live with a good question than a bad answer.” -- Aryeh Frimer
I am not sure how to interpret your comment:
1. I gave a bad answer to a good question.
2. You’d rather support the FHI exclusively, as they are asking the right questions, whereas the SI might give a bad answer.
I’ll comment on the first interpretation that I deem most likely.
To fix complex problems you have to solve many other problems at the same time — problems that are either directly relevant to the bigger problem or are prerequisites for solving it.
That the Singularity Institute might be best equipped to solve the friendly AI problem does not mean that they are the best choice to research general questions about existential risks. That risks from AI are the most urgent existential risk does not mean that it would be wise to abandon existential risk research until friendly AI is solved.
By contributing to the Singularity Institute you are supporting various activities that you might not equally value. If you thought that they knew better than you how to distribute your money among those activities, you wouldn’t mind. But that they are good at doing one thing does not mean that they are good at doing another.
Now you might argue that even less of your money would be spent on the activity you value most if you were to distribute it among different charities. But that’s not relevant here. Existential risk research is something you have to do anyway, something you have to invest a certain amount of resources into while pursuing your main objective, just like eating and drinking. If the Singularity Institute isn’t doing that for you, then you have to do it yourself, or, in the case of existential risk research, pay others who are better at it to do it for you.
The comment links to a quote numbered four; wedrifid was referring to that quote, not to the number four itself.
Aha! I didn’t even read the other quotes and just went straight to quote number four.
I don’t think that suggesting new definitions for words is problematic if it helps. In the case of calling a tail a leg, it would deprive the word leg of most of its meaning. But the case of calling two charities departments of a single charity highlights a problem with Steven Landsburg’s advice for charitable giving, namely that you should concentrate all your donations on the single most effective charity.
This disregards the fact that problems like cancer, heart disease or hunger consist of a huge amount of sub-problems, many of which need to be tackled at the same time to make the main objective technically feasible.
What if you were able to assign weight to the various problems that need to be solved in order to reach the charity’s overall goal? You would do so if you didn’t believe that the charity itself was efficiently distributing its money among its various sub-goals.
Take for example the case of the Singularity Institute. If people could assign weights to the SI’s various projects by specifying how their money should be used, some people wouldn’t support the idea of rationality camps.
And here it is useful to view the SI and FHI as two departments of the same charity. They both pursue goals that either support each other or need to be pursued at the same time.
If you were to follow Landsburg’s argument, then, if you were interested in defeating hunger, you might just contribute to a project that researches certain genetic modifications of useful plants. Or why not contribute to the company that tries to engineer better DNA sequencers?
My point is that the concept of a charity is an artificially created black box labeled “No User Serviceable Parts Inside,” and Landsburg’s argument makes it sound like we should draw the line at that point and not try to give even more efficiently. I don’t see it that way; I am saying that in certain cases you can just as well view one charity as many, and two charities as one.
Just that calling charities departments doesn’t make them a single charity. They are two damn charities! Nothing more than that.
What’s the time for that in the video?
2m38s. My rapidly typed transcript:
QUESTIONER:
NICK:
If anyone’s interested in coming to future talks (Stuart has modestly omitted to mention that he is giving one on the 1st of December on AI boxes), PM me to join the mailing list.