It’s 1939. No one knows if it is possible to split an atom, no one knows how to split an atom in a controlled fashion, and no one knows how to use the splitting of an atom as a weapon capable of exterminating hundreds of thousands of civilians. There are no clear approaches to these problems.
Six years later, hundreds of thousands of civilians have been exterminated by the weaponized splitting of atoms.
A project that currently appears to be intractable may not remain so. If trustworthy upload technology were available to us today, then it would probably be a good idea to use it to develop FAI. But we don’t have it yet, so if there is even a small chance that useful work will be accomplished by meat-brains we might as well take it.
We shouldn’t neglect the development of upload technology and its precursors, but those fields are already receiving three or four orders of magnitude more funding and attention than FAI. It’s clear where the marginal benefits are.
I’m saddened to see that your bottom line hasn’t changed as a result of your series of posts on SIAI’s PR.
I don’t understand your comment. Surely your estimate of the usefulness of FAI work should depend on your estimate of its chances of success—not just its importance—because otherwise praying to Zeus would look even more attractive than developing FAI. Multi was pointing out that we don’t seem to have any viable way of attacking the problem right now.
Was the state of nuclear science really so primitive in 1939 that there wasn’t a discernible research program?
Is it really the case that upload precursors have 3-4 orders of magnitude more funding than FAI research?
In my view, whether or not the marginal benefit is in FAI depends on whether people have potentially fruitful ideas on the subject. My post inquires about whether people have such ideas.
I’m still open to changing my mind.
My position on SIAI has changed since August: I have a more favorable impression now; the question is just what the optimal strategy is for the organization to pursue.
There likely was. The SIAI also seems to have a research program outlined.
Yup. There’s a Blue Gene supercomputer that is being used to (among other things) simulate increasingly large portions of the brain at a neuronal level. That’s $100m right there, and then we can throw in the funding for pretty much all neuroanatomy research as well. I’d guesstimate the global annual budget for FAI research at $1-2m. I may be defining upload precursors more loosely than you are, so I understand your skepticism.
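To make the orders-of-magnitude claim concrete, here is a rough back-of-envelope sketch; the neuroanatomy figure is my own loose assumption about global annual spending, not a sourced number, and the other inputs are just the figures quoted above.

```python
# Back-of-envelope comparison of annual funding: upload precursors vs. FAI research.
# The neuroanatomy figure below is an assumed order of magnitude, not a sourced number.
import math

upload_precursor_funding = {
    "blue_gene_brain_simulation": 100e6,  # the ~$100M supercomputer mentioned above
    "neuroanatomy_research": 5e9,         # assumption: very rough global annual spending
}
fai_funding = 1.5e6  # midpoint of the $1-2M/year guesstimate above

ratio = sum(upload_precursor_funding.values()) / fai_funding
print(f"ratio ~ {ratio:,.0f}x, i.e. about 10^{math.log10(ratio):.1f}")
```

On those assumptions the gap does come out around three to four orders of magnitude, though the result is quite sensitive to how loosely “upload precursors” is defined.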
The majority of your post focuses on the difficulty of taskifying FAI, which makes it sound as though you’re arguing for a predetermined conclusion.
Great! :)
Considering that the SIAI is currently highly specialized to focus on FAI research, retooling the organization to do something else entirely seems like a waste of money. Read from that perspective, your post seemed hostile, though I realize that wasn’t intended.
Bad argument. If in fact FAI research shouldn’t be pursued, then they shouldn’t pursue it, sunk costs notwithstanding.
Agreed. I should have stated this as an implicit premise in my reasoning; if FAI research shouldn’t be pursued, then the SIAI should probably be dissolved and its resources directed to more useful approaches. This is why I read multi as hostile: if FAI research is the wrong approach as he argues, then the SIAI should shut down. Which (in my head) compresses to “multi wants to shut down the SIAI.”
Probably not a good assumption; they’ve changed approaches before (in their earliest days, the idea of FAI hadn’t been invented yet, and they were about getting to the Singularity, any Singularity, as quickly as possible). If, hypothetically, there arose some very convincing evidence that FAI is a suboptimal approach to existential risk reduction, then they could change again but retain their network of donors and smart people and so forth. That probably won’t need to happen, but still, shutting down SIAI wouldn’t be the only option (let alone the best option) if it turned out that FAI was a bad idea.
Bad only if it is taken for granted that the SIAI must continue to exist.
Yes, agreed.
Thanks for the feedback.
Regarding (1): I looked up historical information on the development of the atomic bomb. According to The Manhattan Project: Making the Atomic Bomb:

Not surprisingly, Ernest Rutherford, Albert Einstein, and Niels Bohr regarded particle bombardment as useful in furthering knowledge of nuclear physics but believed it unlikely to meet public expectations of harnessing the power of the atom for practical purposes anytime in the near future. In a 1933 interview Rutherford called such expectations “moonshine.” Einstein compared particle bombardment with shooting in the dark at scarce birds, while Bohr, the Danish Nobel laureate, agreed that the chances of taming atomic energy were remote.
This information has caused me to update my beliefs about how heavily to weight expert opinions about the likelihood of the advancement of a given hypothetical technology. I plan on reading more about the history of the atomic bomb and will think about this matter some more. Note however:
(a) The point that I make in the above article about there being a selection effect where people notice those scientific speculations that actually pan out disproportionately.
(b) XiXiDu’s response below in which he (correctly) infers my view that SIAI’s research agenda looks to be too broad and general to tackle the Friendly AI problem effectively.
Of these two points, the first doesn’t seem so significant to me (small probability, high expected return); point (b) seems much more significant, since in the absence of a compelling research agenda, Friendly AI research seems to me to have a vanishingly small probability of success. Now, I don’t think that Friendly AI research by humans will inevitably have a vanishingly small probability of success; I could imagine developments suddenly making the problem look tractable, as happened for the atomic bomb. I’m pointing out that the problem seems to require much finer taskification than the SIAI research program sets out in order to be tractable.
Regarding (5): Supposing that the SIAI staff and/or donors decide that Friendly AI research has low utilitarian expected value, I could easily imagine SIAI restructuring or rebranding to work toward activities with higher utilitarian expected value.
Eliezer has done a lot to spread rationality through his upcoming rationality book, his popular Harry Potter fanfiction, and the creation of Less Wrong. I’ve heard that the SIAI visiting fellows program has done a good job of building a community of people of high intellectual caliber devoted to existential risk reduction.
My understanding is that many of the recent papers (whether published or in progress) by Carl Shulman, Anna Salamon, Steve Rayhawk and Peter de Blanc as well as the SIAI Uncertain Future software application fall under the heading of advocacy/forecasting rather than Friendly AI research.
If I make a top-level post about this subject that’s critical of Friendly AI research, I’ll be sure to point out the many positive contributions of SIAI staff that fall outside of the domain of Friendly AI research.
The space colonization analog of the SIAI research program might read like this:
Creating effective propulsion techniques to reach distant stars.
Making cryonics revival safe and effective (or else solve uploading).
Building space elevators.
Inventing the necessary technology to terraform prospective planets.
As far as I understand multifoliaterose’s criticism, such an agenda is too broad and general to tackle effectively (at least, so it appears to some outsiders).
It’s true that in 1939 they didn’t know how to split an atom. They also didn’t know how to teleport, or travel backward in time, or do many other dangerous things. Should they have worried about those, too? What percentage of futuristic technologies ever gets developed? What percentage gets developed soon? It might be rational to worry about an unknown future, but it’s irrational to worry about one specific scenario of doom unless you have lots of evidence that it will in fact happen.