Bugmaster asked “what does the SIAI actually do?” and “what is it that you are actually working on, other than growing the SIAI itself?”
Paragraphs 2, 4, 5, 6, and 7 are lists of things that SIAI has been doing.
As for progress on FAI subproblems: that's precisely the part we mostly haven't written up yet, apart from material forthcoming in the publications I mentioned. I see that gap as a big problem, and I'm working to solve it.
Also, I don’t think it’s the case that “most” of the research must be kept secret.
I am satisfied with the level of detail you provided for SI’s other projects. But you haven’t given even the roughest outline of SI’s progress on the thing that matters most, actual FAI research. Are these problems so complicated that you can’t even summarize them in a few sentences or paragraphs? Frankly, I don’t understand why you can’t (or won’t) say something like, “We’ve made progress on this, this, and this. Details in forthcoming publications.” Even if you were only willing to say something as detailed as, “We fixed some of the problems with timeless decision theory” or “We worked on the AI reflection problem,” that would be much more informative than what you’ve given us. Saying that you’ve done “a ton of work” isn’t really communicating anything.
Fair enough. I’ll share a few examples of progress, though these won’t be surprising to people who are on every mailing list, read every LW post, or are in the Bay Area and have regular conversations with us.
- much progress on the strategic landscape, e.g. differential technological development analyses, which you'll see in the forthcoming Anna/Luke chapter and in Nick's forthcoming monograph, and which you've already seen in several papers and talks over the past couple of years (most of them involving Carl).
- progress on decision theory, in particular on UDT, largely via the decision-theory workshop mailing list (see the toy sketch after this list).
- progress in outlining the sub-problems of singularity research, which I've started to write up here.
- progress on the value-loading problem, explained here and in a forthcoming paper by Dewey.
- progress on the reflectivity problem, in the sense of identifying lots of potential solutions that probably won't work. :)
- progress on the preference extraction problem, by incorporating the latest findings from decision neuroscience.
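To give a flavor of the decision-theory item: UDT evaluates whole policies against a world model that may contain copies of the agent (e.g., inside a predictor), rather than updating on observations first and then choosing. Here is a minimal toy sketch on Newcomb's problem, assuming a perfect predictor; the names and structure are illustrative only, not SI's actual formalism.

```python
# Toy sketch of the UDT idea on Newcomb's problem (illustrative only).

def newcomb_payoff(policy):
    """World model: the predictor fills the opaque box with $1M iff its
    simulation of `policy` one-boxes; the clear box always holds $1K."""
    opaque = 1_000_000 if policy == "one-box" else 0  # prediction = policy
    clear = 1_000
    return opaque if policy == "one-box" else opaque + clear

def udt_choose(world_payoff, policies):
    """Pick the policy whose adoption, everywhere it appears in the world
    model (the predictor's simulation included), yields the most utility.
    No updating on observations happens before the choice."""
    return max(policies, key=world_payoff)

best = udt_choose(newcomb_payoff, ["one-box", "two-box"])
print(best, newcomb_payoff(best))  # -> one-box 1000000
```

Because the policy itself determines the predictor's behavior inside the world model, the policy comparison favors one-boxing, which is the standard motivation for evaluating policies rather than updated actions.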
Still, I'd say more of our work has been focused on movement-building than on cutting-edge research, because we think the most immediate need is not cutting-edge research itself but a larger community of supporters, funders, and researchers to work on these problems. Three researchers can have more impact by building a platform through which 20 researchers can work on the problem than by doing the research alone.
Thank you, this is exactly the kind of answer I was hoping for.
Is the value-loading or value-learning problem the same thing as the problem of moral uncertainty? If not, what am I missing; if so, why are the official solution candidates different?
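To make the question concrete, here is a toy sketch of how I understand the two framings; the theories, numbers, and update rule are all invented for illustration and are not taken from Dewey's paper or the moral-uncertainty literature.

```python
# Toy contrast: moral uncertainty vs. value learning (illustrative only).

# Two fully specified "moral theories", each a utility function over acts.
theories = {
    "total_utilitarian": {"act_a": 10, "act_b": 4},
    "deontological":     {"act_a": -5, "act_b": 3},
}
credence = {"total_utilitarian": 0.6, "deontological": 0.4}

# Moral uncertainty: credences over theories are fixed; the agent just
# maximizes expected choiceworthiness under them.
def expected_choiceworthiness(act):
    return sum(credence[t] * theories[t][act] for t in theories)

best_act = max(["act_a", "act_b"], key=expected_choiceworthiness)

# Value learning (value loading): the same credences are treated as a
# prior and *updated* from evidence about what humans choose, so the
# agent's effective utility function is learned, not given.
def update(observed_choice):
    """Crude invented likelihood: a theory fits the evidence if the
    observed choice is that theory's top-ranked act."""
    likelihood = {t: 0.9 if theories[t][observed_choice] == max(theories[t].values())
                  else 0.1 for t in theories}
    z = sum(credence[t] * likelihood[t] for t in theories)
    return {t: credence[t] * likelihood[t] / z for t in theories}

posterior = update("act_b")  # humans were observed choosing act_b
print(best_act)   # act_a under the fixed credences
print(posterior)  # weight shifts toward the theory that prefers act_b
```

On this reading the structure is similar but the source of the distribution differs: moral uncertainty starts from credences you already hold, while value learning must infer them from observations, which is presumably why the candidate solutions look different.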
Thanks, this is quite informative, especially your closing paragraph:

Still, I'd say more of our work has been focused on movement-building than on cutting-edge research, because we think the most immediate need is not cutting-edge research itself but a larger community of supporters, funders, and researchers to work on these problems.

This makes sense to me; have you considered incorporating this paragraph into your core mission statement? Also, what are your thresholds for deciding when to transition from (primarily) community-building to (primarily) doing research?

Also, you mentioned (in your main post) that the SIAI has quite a few papers in the works, awaiting publication; and apparently there are even a few books waiting for publishers. Would it not be more efficient to post the articles and books in question on Less Wrong, or upload them to Pirate Bay, or something to that effect, at least while you wait for the meat-space publishers to get their act together? Sorry if this is a naive question; I know very little about the publishing world.
what are your thresholds for deciding when to transition from (primarily) community-building to (primarily) doing research?

We're not precisely sure. It's also a matter of funding: researchers who can publish "platform research" for academic outreach, problem-space clarification, and community building are less expensive than researchers who can make progress on decision theory, safe AI architectures, and so on.
Would it not be more efficient to post the articles and books in question on Less Wrong, or upload them to Pirate Bay, or something to that effect, at least while you wait for the meat-space publishers to get their act together?

Like many academics, we generally do publish early drafts of forthcoming articles long before the final version is written and published. Examples: 1, 2, 3, 4.
I’d love to hear more about what areas you’re looking into within decision neuroscience.
For those who are also interested and somehow missed these:
Crash Course in Neuroscience of Motivation
and these two neuroeconomics book reviews.
An example: the subject matter of the second chapter of this book (the three competing systems of motivation) looks to have some implications for the preference extraction problem. This is the kind of information about how our preferences work that I imagine we'll use to extrapolate our preferences, or that an AI would use to do the extrapolation for us.
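To make that concrete, here is a toy illustration (my own construction, not from the book) of why competing motivation systems matter for preference extraction: observed choices mix several valuation signals, so an extractor that reads a single utility function straight off behavior recovers the blend, not the person's considered preferences. The system names follow labels commonly used in decision neuroscience; the numbers and weights are invented.

```python
# Toy illustration of preference extraction under competing motivation
# systems (invented numbers; illustrative only).

# Hypothetical valuations of two options by three systems often
# distinguished in decision neuroscience: deliberative (model-based),
# habitual (model-free), and Pavlovian.
valuations = {
    "deliberative": {"salad": 8, "cake": 3},
    "habitual":     {"salad": 2, "cake": 7},
    "pavlovian":    {"salad": 1, "cake": 9},
}

def observed_value(option, weights):
    """Behavior reflects a weighted blend of the three systems."""
    return sum(weights[s] * valuations[s][option] for s in valuations)

# Naive extraction: treat raw behavior as the person's utility function.
behavior_weights = {"deliberative": 0.3, "habitual": 0.4, "pavlovian": 0.3}
naive = max(["salad", "cake"],
            key=lambda o: observed_value(o, behavior_weights))

# An extrapolation step might instead upweight the deliberative system,
# modeling the other signals and factoring them out.
idealized_weights = {"deliberative": 1.0, "habitual": 0.0, "pavlovian": 0.0}
extrapolated = max(["salad", "cake"],
                   key=lambda o: observed_value(o, idealized_weights))

print(naive, extrapolated)  # -> cake salad
```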
But you haven't given even the roughest outline of SI's progress on the thing that matters most, actual FAI research.

From what I understand, they can't do that yet. They don't have enough people to make real progress on the important problems, and they don't have enough money to hire more people. So they are concentrating on raising awareness of the issue and persuading people either to work on it or to contribute money to SI.
The real problem I see is the lack of formalized problems. I think it is very important to formalize some actual problems: doing so would help raise money and would allow others to work on the problems. To be more specific, I don't think that writing a book on rationality is worth the time it takes when the author is one of the few people who might be capable of formalizing some important problems, especially since there are already many books on rationality. Even if Eliezer Yudkowsky is able to put everything the world knows about rationality together in a concise manner, that is not something that will impress the important academics enough to actually believe him on AI issues. He would have done better to write a book on decision theory, where he seems to have some genuine ideas.
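To illustrate the kind of crisp statement I have in mind: the reflectivity problem mentioned above already admits one. The theorem below is just Löb's theorem, a standard result; framing it as the core obstacle here is my gloss, not SI's official formalization.

```latex
Let $T$ be a consistent theory extending PA, with provability
predicate $\Box_T$.

L\"ob's theorem: if $T \vdash \Box_T\varphi \to \varphi$,
then $T \vdash \varphi$.

Taking $\varphi = \bot$: a consistent $T$ cannot prove its own
soundness schema $\Box_T\varphi \to \varphi$ for all $\varphi$.
Hence an agent that only accepts actions (or successors) it can
prove safe in $T$ cannot, within $T$, certify a successor that
reasons by the very same standard. Making trust in one's own
proof system precise, without inconsistency, is the problem.
```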
There was a list of problems posted recently:
I don't think that writing a book on rationality is worth the time it takes when the author is one of the few people who might be capable of formalizing some important problems.

Rationality is probably a moderately important factor in planetary collective intelligence. Pinker claims that rational thinking and game theory have also contributed to recent positive moral shifts. Though there are some existing books on the topic, this could well be an area where a relatively small effort produces a big positive result.
However, I’m not entirely convinced that hpmor.com is the best way to go about it...
It turns out that HPMOR has been great for SI recruiting and networking. Winners of the International Mathematical Olympiad apparently read HPMOR, and so do an absurd number of Googlers.