There likely was. The SIAI also seems to have a research program outlined.
Yup. There’s a Blue Gene supercomputer that is being used to (among other things) simulate increasingly large portions of the brain at a neuronal level. That’s $100m right there, and then we can throw in the funding for pretty much all neuroanatomy research as well. I’d guesstimate the global annual budget for FAI research at $1-2m. I may be defining upload precursors more loosely than you are, so I understand your skepticism.
The majority of your post focuses on the difficulty of taskifying FAI, which makes it sound as though you’re arguing for a predetermined conclusion.
Great! :)
Considering that the SIAI is currently highly specialized to focus on FAI research, retooling the organization to do something else entirely seems like a waste of money. Read from that perspective, your post seemed hostile, though I realize that wasn’t intended.
Bad argument. If in fact FAI research shouldn’t be pursued, then they shouldn’t pursue it, regardless of sunk costs.
Agreed. I should have made this premise explicit in my reasoning: if FAI research shouldn’t be pursued, then the SIAI should probably be dissolved and its resources directed toward more useful approaches. This is why I read multi as hostile: if FAI research is the wrong approach, as he argues, then the SIAI should shut down. Which (in my head) compresses to “multi wants to shut down the SIAI.”
Probably not a good assumption; they’ve changed approaches before (in their earliest days, the idea of FAI hadn’t been invented yet, and they were about getting to the Singularity, any Singularity, as quickly as possible). If, hypothetically, some very convincing evidence arose that FAI is a suboptimal approach to existential risk reduction, they could change again while retaining their network of donors and smart people and so forth. This probably won’t need to happen, but still, shutting down the SIAI wouldn’t be the only option (let alone the best option) if it turned out that FAI was a bad idea.
Bad only if it is taken for granted that the SIAI must continue to exist.
Yes, agreed.
Thanks for the feedback.
Regarding (1): I looked up historical information on the development of the atomic bomb. According to The Manhattan Project: Making the Atomic Bomb:
Not surprisingly, Ernest Rutherford, Albert Einstein, and Niels Bohr regarded particle bombardment as useful in furthering knowledge of nuclear physics but believed it unlikely to meet public expectations of harnessing the power of the atom for practical purposes anytime in the near future. In a 1933 interview Rutherford called such expectations “moonshine.” Einstein compared particle bombardment with shooting in the dark at scarce birds, while Bohr, the Danish Nobel laureate, agreed that the chances of taming atomic energy were remote.
This information has caused me to update my beliefs about how heavily to weight expert opinion on whether a given hypothetical technology will be developed. I plan on reading more about the history of the atomic bomb and will think about this matter some more. Note, however:
(a) The point that I make in the above article about a selection effect whereby people disproportionately notice those scientific speculations that actually pan out.
(b) XiXiDu’s response below in which he (correctly) infers my view that SIAI’s research agenda looks to be too broad and general to tackle the Friendly AI problem effectively.
Of these two points, the first doesn’t seem so significant to me (small probability; high expected return); point (b) seems much more significant, since in the absence of a compelling research agenda, Friendly AI research seems to me to have a vanishingly small probability of success. I don’t think that Friendly AI research by humans will inevitably have a vanishingly small probability of success; I can imagine the problem suddenly coming to look tractable, as happened with the atomic bomb. My point is that the problem seems to require much finer taskification than the SIAI research program sets out in order to be tractable.
Regarding (5): Supposing that the SIAI staff and/or donors decide that Friendly AI research has low utilitarian expected value, I could easily imagine SIAI restructuring or rebranding to work toward activities with higher utilitarian expected value.
Eliezer has done a lot to spread rationality through his upcoming rationality book, his popular Harry Potter fanfiction, and his creation of Less Wrong. I’ve heard that the SIAI visiting fellows program has done a good job of building a community of people of high intellectual caliber devoted to existential risk reduction.
My understanding is that many of the recent papers (whether published or in progress) by Carl Shulman, Anna Salamon, Steve Rayhawk, and Peter de Blanc, as well as the SIAI Uncertain Future software application, fall under the heading of advocacy/forecasting rather than Friendly AI research.
If I make a top-level post about this subject that’s critical of Friendly AI research, I’ll be sure to point out the many positive contributions of SIAI staff that fall outside the domain of Friendly AI research.
The space colonization analog of the SIAI research program might read like this:
Creating effective propulsion techniques to reach distant stars.
Making cryonics revival safe and effective (or else solving uploading).
Building space elevators.
Inventing the necessary technology to terraform prospective planets.
As far as I understand what multifoliaterose is scrutinizing, such an agenda is too broad and general to tackle effectively (at least so it appears to some outsiders).