SIAI vs. FHI achievements, 2008-2010
After reading the FHI achievement report for 2008-2010, I thought it might be useful to compare their achievements to those of SIAI during the same time period. Since SIAI does not have an equivalent report, I’ve mostly pulled the data on their achievements from the SIAI blog.
My intention here is to help figure out which organization makes better use of my donations. For that purpose, I’m only looking at actual concrete outputs, and ignoring achievements such as successful fundraising drives or the hiring of extra staff.
For citation counts, I’m using Google Scholar data as-is. Note that this will include both self-cites and some cites from pages that really shouldn’t be counted, since Google Scholar seems to be a bit liberal about what it includes in its database. I’m also unsure whether the citation counts are very meaningful, since there hasn’t been much time for anyone to cite papers published in, say, 2010. But I’m including them anyway.
Future of Humanity Institute
Publications. The Achievement Report highlights three books and 22 journal articles. In addition, FHI staff has written 34 book chapters for academic volumes, including Companion to Philosophy of Technology; New Waves in Philosophy of Technology; Philosophy: Theoretical and Empirical Explorations; and Oxford Handbook of Neuroethics.
The three books are the hardcover and paperback editions of Human Enhancement, as well as a paperback edition of Anthropic Bias: Observation Selection Effects in Science and Philosophy. Human Enhancement has been cited 22 times. Anthropic Bias was originally published in 2002, so I’m not including its citation count.
The 22 highlighted journal articles have been cited 59 times in total. The overwhelmingly most cited article was Cognitive Enhancement: Methods, Ethics, Regulatory Challenges in Science and Engineering Ethics, with 39 cites. The runner-up was Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes, with 5 cites. The remaining articles had 0-3 cites each. But while Cognitive Enhancement is listed as a 2009 paper, it’s worth noting that the first draft version of it was posted on Nick Bostrom’s website back in 2006, so it has had time to accumulate cites since then. If we exclude it, FHI’s 2008-2010 papers have been cited 20 times.
It’s not listed in the Achievement Report, but I also want to include the 2008 Whole Brain Emulation Roadmap, which has been cited 15 times, bringing the total count (excluding Cognitive Enhancement) to 35.
Presentations. FHI members have given a total of 95 invited lectures and conference presentations.
Media appearances. Some 100 media appearances, including print, radio, and television appearances, since January 2009. These include BBC television, New Scientist, National Geographic, The Guardian, ITV, Bloomberg News, Discovery Channel, ABC, Radio Slovenia, Wired Magazine, BBC World Service, Volkskrant (Dutch newspaper), Utbildningsradion (Swedish national radio), Mehr News Agency (Iranian), Mladina Weekly (Slovenian magazine), Jyllands-Posten and Weekendavisen (Danish newspapers), Bayerischer Rundfunk (German radio), The History Channel, O Estado de São Paulo (Brazilian newspaper), Euronews, Kvällsposten (Swedish newspaper), City Helsinki (Finnish radio), Focus, Dutch Film and Television Academy, The Smart Manager (Indian magazine), Il Sole 24 Ore (Italian monthly), The Bulletin of the Atomic Scientists, Time Magazine, Astronomy Now, and Radio Bar-Kulan (Kenya).
Visitors. “The Institute receives many requests from students and scholars who wish to visit the Institute, only a few of which are accepted because of capacity limitations. The FHI has hosted a number of distinguished academic visitors over the past two years within its various areas of activity, such as Profs. David Chalmers, Michael Oppenheimer, and Thomas Homer-Dixon.”
Policy advice. The Achievement Report highlights 23 groups or events which have received policy advice from either Nick Bostrom or Anders Sandberg. These include the World Economic Forum, the Public Services Offices of the Prime Minister’s Office of Singapore, the UK Home Office, If (Stockholm insurance company), Jane Street Capital, IARPA (Intelligence Advanced Research Projects Activity) for the US government, and the Swedish Institute for Infectious Disease Control. FHI staff also helped set up a research network, “A differential view of enhancement”, within the Volkswagen Foundation.
Organized events. Three organized events: (1) a Cognitive Enhancement Workshop; (2) a symposium on cognitive enhancement and related ethical and policy issues; and (3) “Uncertainty, Lags and Nonlinearity: Challenges to governance in a turbulent world.”
Singularity Institute
Publications. The SIAI publications page has 15 papers from the 2008-2010 period, of which 11 are listed under “recent publications”, 1 under “software”, and 3 under “talks and working papers”. Of these, Superintelligence does not imply benevolence has been cited once. The rest all have no citations.
The Sequences were written during this time period. They consist of about a million words, and might very well have a bigger impact than all the other FHI and SIAI articles together—though that’s very hard to quantify.
Presentations and Media Appearances. The SIAI blog mentions a number of media appearances and presentations at various venues, but I don’t have the energy to go through them all and count. From a quick eyeballing of the blog, though, SIAI has nowhere near as many presentations and media appearances as FHI.
Visitors. The Visiting Fellows page has a list of 27 Visiting Fellows from around the world, who attend or hold degrees from universities including Harvard, Stanford, Yale, Cambridge, Carnegie Mellon, Auckland University, the Moscow Institute of Physics and Technology, and the University of California-Santa Barbara.
Online communities and tools. Less Wrong was founded in 2009, and Google Analytics says that by the end of 2010, it had had over a million unique visitors.
Note that LW is an interesting case: as an FHI/SIAI collaboration, both organizations claim credit for it. However, since LW is to such a huge extent Eliezer’s creation, and I’m not sure of what exactly the FHI contribution to LW is, I’m counting it as an SIAI and not a joint achievement.
SIAI also created the Uncertain Future, a web tool for modeling probability estimates of when advanced AI might be developed.
Organized events. SIAI held Singularity Summits in all three years. The first Singularity Summit Australia was held in 2010. In 2008, SIAI co-sponsored the Converge unconference.
Ben Goertzel, acting as the SIAI Director of Research at the time, organized the 2008 and 2009 conferences on Artificial General Intelligence. He also co-organized a 2009 workshop on machine consciousness.
Artificial Intelligence projects. SIAI provided initial funding for the OpenCog project, as well as sponsoring Google Summer of Code events relating to the project in 2008 and 2009.
Overall
Based on this data, which organization is more deserving of my money? Hard to say, especially since SIAI has been changing a lot. The general AGI research, for instance, isn’t really something that’s being pursued anymore, and Ben Goertzel is no longer with the organization. Eliezer is no longer writing the sequences, which were possibly the biggest SIAI achievement of the whole 2008-2010 period.
Still, FHI’s accomplishments seem a lot more impressive overall, suggesting that they might be a better target for the money. On the other hand, they are not as tightly focused on AI as SIAI is.
One important question is also the amount of funding the two organizations have had: accomplishing a lot is easier if you have more money. All else equal, an organization with three times as much money should be expected to achieve roughly three times as much. SIAI’s revenue was $426,000 in 2008 and $628,000 in 2009. FHI’s funding was around $711,000 for 10/2008 − 10/2009. I don’t know the 2010 figure for either organization. The FHI report also says the following:
To appreciate the significance of what has been accomplished, it should be kept in mind that the FHI has been understaffed for much of this period. One of our James Martin Research Fellows, Dr Rebecca Roache, has been on maternity leave for the past year. Our newest James Martin Research Fellow, Dr Eric Mandelbaum, who was recruited from an extremely strong field of over 170 applicants, has been in post for only two months. Thus, for half of the two-year period, FHI’s research staff has consisted of two persons, Professor Nick Bostrom and Dr Anders Sandberg.
SIAI—An Examination notes that in both 2008 and 2009, SIAI paid salaries to three people, so for a while at least, the number of full-time staff at the two organizations was roughly comparable.
I see Singularity Institute and Future of Humanity Institute as quite nicely complementary:
FHI is part of Oxford, and thus can add great credibility and funding to existential risk reduction. Resulting output: lots of peer-reviewed papers, books from OUP like Global Catastrophic Risks, conferences, media appearances, etc.
SI is independent and is less constrained by conservatism or the university system. Resulting output: foundational works on Friendly AI (e.g., CFAI was just so much more advanced than anything else in machine ethics in 2001 that it’s not even funny), and the ability to do weird things that are nevertheless quite effective at creating tons of new people interested in rationality and existential risk reduction: (1) The Sequences, the best tool I know for creating rational transhumanists; (2) Harry Potter and the Methods of Rationality, now the most-reviewed fan fiction work of all time on FanFiction.net; (3) Less Wrong in general, a growing online and meatspace community of people who care about rationality and x-risk reduction; (4) the Singularity Summit, a mainstream-aimed conference that nevertheless brings in people who end up making significant academic contributions, like David Chalmers, along with a broader support base for transhumanism in general; and (5) lots of behind-the-scenes work with Giving What We Can and 80,000 Hours that has encouraged the optimal philanthropy community to take existential risk reduction seriously.
FHI is in Britain, SI is in the USA.
FHI and SI have different but overlapping goals. FHI investigates a much broader range of transhumanist topics, whereas SI is more focused on AI risks and the rationality skills needed to think correctly about them.
I’m currently glad I work for the org with greater flexibility (SI), but I am quite happy with FHI’s rather incredible productivity. I don’t know any other philosophy research institute that has published so much original and important work in such a short time, ever.
edit 12/01/2011: Added point (5) in the list above, about optimal philanthropy.
Small correction: HP:MoR is the most reviewed Harry Potter fanfiction of all time. I did a quick search for Twilight fanfiction and found a story with 14,822 reviews, compared with 14,710 reviews for HP:MoR. I wouldn’t be surprised if that were the number to beat (Twilight is a very popular fanfiction topic as far as I know), but due to the disorganization of FanFiction.net, it’s hard to say.
Unexpected Circumstances—Reviews: 23,004.
Parachute—Reviews: 16,633.
Fridays at Noon—Reviews 15,162.
I found these using this search.
So MoR is now past both Parachutes and Fridays at Noon. Still needs to pass Unexpected Circumstances (now at 24,163). When will it pass? Well:
MoR: 18,911 reviews; published 02-28-10, 992 days ago. 18911 / 992 = 19.1 reviews per day. (Status: In Progress.)
UC: 24,163 reviews; published 11-22-10, 725 days ago. 24163 / 725 = 33.3 reviews per day. (Status: Complete.)
Obviously if UC doesn’t see its review rate decline, it’ll never be passed by MoR; so let’s assume its rate goes to zero (not too unrealistic, since UC is finished and MoR is not). How many days will it take MoR to catch up?
(24163 − 18911) / (18911 / 992) = 275.5
So barring a major acceleration in MoR reviews, we can’t expect MoR to pass UC within much less than a year, but it’s not too unlikely to pass in a full year (November 2013), and in two years I’d consider it likely to highly likely to pass.
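For anyone who wants to vary the assumptions, here’s a minimal Python sketch of the same back-of-the-envelope model (the review counts and dates are the ones quoted above; the zero-future-reviews assumption for UC is the same simplification noted there):

```python
# Review-race model: days until MoR catches UC, assuming MoR's review
# rate stays constant and UC (being complete) collects no further reviews.
mor_reviews, mor_days = 18911, 992  # MoR totals as of the comment date
uc_reviews, uc_days = 24163, 725    # UC totals as of the comment date

mor_rate = mor_reviews / mor_days   # ~19.1 reviews/day
uc_rate = uc_reviews / uc_days      # ~33.3 reviews/day

days_to_catch_up = (uc_reviews - mor_reviews) / mor_rate
print(f"MoR: {mor_rate:.1f}/day; UC: {uc_rate:.1f}/day")
print(f"Days for MoR to catch up: {days_to_catch_up:.1f}")  # ~275.5
```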
One measure of a multi-chapter fanfic’s quality is reviews per chapter, and I don’t think that HPMOR is likely to catch up to UC by that metric in the near future.
Well, one can argue that reviews per chapter isn’t necessarily the best metric, but you’re right: since UC is currently less than half the length of MoR and still has more reviews, it will be a long time before MoR catches up on that metric, if ever.
After substantial additional work (http://www.gwern.net/hpmor#the-review-race-unexpected-circumstances-versus-mor), I’ve concluded that assuming away UC’s future reviews is a drastic simplification; when one takes into account its slow continued accretion of reviews, MoR is more likely to catch up in <5 years than in <2 years.
I’m sure Eliezer is writing Twilight fanfiction as we speak. It’s an untapped market and would help with the demographics!
Any suggestions for the title?
...how about Luminosity?
FHI is part of a prestigious university. This has signaling value in many areas: it adds credibility to the message and aids recruitment and fundraising.
FHI also has raised a lot more money.
You might also compare what they have accomplished given their level of resources—where you get more bang for the buck.
FHI and SIAI have similar but different goals/subgoals. Depending on how one prioritizes these, direct comparisons are not always relevant.
It is also possible, though I am not quite sure of this, that FHI and SIAI are complementary. It may be that FHI uses “establishment” resources like the university affiliation, but is forced to follow a more conservative path; while SIAI is more financially constrained and lacks the prestige, but in exchange is permitted to take radical new directions.
Do you know where to find the figures for how much money FHI has?
In the annual report for 1 October 2008 − 30 September 2009, they write (on page 23):
“At the present time, approximately half of our budget comes from the University’s James Martin 21st Century School, and approximately half comes from a few visionary philanthropists. The following donations were received during the last academic year:
Philanthropist #1: £161,308
Philanthropist #2: £9,392
Philanthropist #3: £13,435
Bright Horizons Foundation: £46,072
Other donations: £780”
So they got around £231,000 from donations. With the budget from the James Martin School, this should sum to approximately £462,000, which equals around $711,000 for 10/2008 − 10/2009 (at the implied exchange rate of roughly $1.54 to the pound).
I don’t know the budget for the year 2010. (As a side note, the budget of FHI for 11/2005 − 11/2006 was £203,665, and for 11/2006 − 11/2007 it was £263,113 (page 77). Maybe we can extrapolate from that data?)
Comparing two organizations with the same budget isn’t necessarily fair. If two organizations both had a budget of $500,000 and achieved the same amount of visible progress, but one of them had to spend $250,000 on fundraising, then the latter produced the same output from half the program spending. So (assuming linear returns to money) we should expect marginal donations to the latter organization to be twice as effective, right?
Great, thanks!
One slight problem I see is that several FHI staff don’t seem to be focused on existential risk reduction. The overall number of citations could therefore be misleading.
FHI consists of 3 regular research staff members: Nick Bostrom, Anders Sandberg and Stuart Armstrong. Bostrom and Sandberg are of course very cool. I don’t know much about Armstrong.
Furthermore, FHI employs 5 research associates, who apparently aren’t that interested in existential risk reduction:
Former research associate Eric Mandelbaum is mainly interested in philosophy of mind and writes papers like: Locke’s Answer to Molyneux’s Thought Experiment. Other publications of Mandelbaum are also rather ivory-tower.
Similar things can be said for former research associate Rebecca Roache.
The work of Milan Ćirković seems mainly focused on theoretical physics.
But I don’t know how much money research associates get and how much the regular research staff receives, so maybe donating to FHI is the more effective x-risk-reduction strategy after all. (E.g. the budget for Rebecca Roache was almost as high as that for Nick Bostrom back in 2005-2007 (page 77).)
See also this subthread.
Thanks for the pointer! For those too lazy to click on it: Nick Bostrom himself comments in that thread.
There are also several informative comments from Carl Shulman, who mentions a cost of about $200k per 2-year postdoc, and estimates FHI getting something like 1 Sandberg or Ord equivalent per 2-3 hires.
Page 4 of the 2008-2009 annual report says that research associates are unsalaried.
Thanks!
But then how did FHI spend £460,000 in 2008-2009? (See this comment.) The salary for a James Martin research fellow is around £45,000, and for Director Nick Bostrom around £50,000, according to page 77 of this document. For James Martin project officers it’s around £20,000. Thus the overall salary budget is approximately £180,000, leaving around £280,000 unaccounted for.
Is it possible that FHI just doesn’t spend its whole budget? E.g. in 2006-2007 their budget was £263,113, but their actual expenditure was only £135,815! And who gets the surplus? Can FHI effectively use that much more money?
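For concreteness, here’s a minimal sketch of the salary arithmetic above; the headcounts (two research fellows and two project officers, plus the director) are my assumption, chosen to match the ~£180,000 figure:

```python
# Rough reconstruction of FHI's implied 2008-2009 salary budget (GBP).
# Headcounts are assumptions: 2 research fellows and 2 project officers.
fellow_salary, n_fellows = 45000, 2    # James Martin research fellows
director_salary = 50000                # Nick Bostrom
officer_salary, n_officers = 20000, 2  # James Martin project officers

salaries = n_fellows * fellow_salary + director_salary + n_officers * officer_salary
budget = 460000
print(f"Salaries: £{salaries:,}")                   # £180,000
print(f"Unaccounted for: £{budget - salaries:,}")   # £280,000
```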
FHI must pay for:
“taxes” to the university and the department for use of facilities, perhaps including high rent for the office space in the philosophy building (they also get a cut of many grants)
substantial costs for conferences and workshops
travel costs for staff and perhaps visitors
non-salary compensation (pension contributions, perhaps employer payroll taxes, etc.) for staff
Odd.
FHI not spending all its budget seems unlikely, since the comments in the subthread steven0461 linked say that FHI would hire more staff if only it had the money.
It’s really odd, maybe I’m misreading the budget tables.
The research associate positions are not paid, as far as I know (e.g. Robin draws his salary from GMU, Toby from an Oxford college where he taught, etc). In some cases, in fact, the financial flow goes the other way.
However, Eric Mandelbaum was a paid postdoc at FHI before he left for a regular philosophy job.
Another thing to consider is what each organization plans to do with more money. SIAI’s list of things they want more money for (last two pages of this) looked pretty good. No idea whether FHI has published something similar.
They want to hire more research staff. Last I heard, $200,000 buys a 2-year appointment for a postdoc, including all salary and benefits, office space and support services, travel costs, etc. You can get a sense of the marginal hire by looking at past folk like Rafaela, Rebecca, Stuart, and Eric. From an existential risk point of view, some hits and some misses.
This is more of an input than an output (achievement). I mean, you would have just as much of an achievement if all the people there twiddled their thumbs all day.
This analysis seems accurate. One thing to note regarding organized events is that the Singularity Summits seem to be a lot more successful than the FHI’s comparatively small conferences.
They are completely different things. The former are for spreading a message widely to a popular audience, and the latter are academic conferences.
That’s a good point. They aren’t necessarily comparable.
Surely HPMOR counts, too?
The advantage of academic papers over HPMoR and the sequences is that other people can cite academic papers and be taken seriously.
EDIT: Again, the influence of HPMoR and the sequences is horribly hard to quantify. Perhaps there’s some availability bias going on here, in that citations are easier to count than converts.
No MoR than the Sequences, I’d guess.
It’s interesting: FHI appears to have started with academic respect and moved towards existential risk problems, whereas SIAI seems to have started with existential risk problems and is moving towards academic respect. Time will tell which method is more effective overall, but one idea that comes out of this analysis is that when SIAI spins off the rationality training, perhaps it should also spin off the existential risk work to the FHI.
Fanciful, but I could see a Singularity Institute Rationality College feeding bright researchers into a SI+FHI collaboration on the field of existential risk and similar, and feeding bright mathematics types into the SIAI’s work on friendly AI theory. The work on FAI gets done, more of the students of the rationality dojo have success (improving the status of the rationality dojo), and more papers and citations come SI’s way (improving the academic status of the Institute). This also allows the Singularity Institute’s AI-focused researchers to contribute to existential risk papers and get their names in journals without taking large amounts of their time.
Primarily endorsement, a link, and Robin Hanson’s writing on Overcoming Bias (and he is an unpaid Research Associate).
Does it say anything about funding in here? If one has significantly more funding than the other this isn’t really helpful.
I just added a bit about the finances at the end. I’m not sure of where to find information about FHI’s funding.
I’m not sure where to get it either. England doesn’t seem to have the same non-profit disclosure requirements as America, so I couldn’t find any Form 990s or equivalents thereof back when I looked.
If you’re trying to choose which charity to donate to, and you don’t donate to other charities, I suggest not even making this decision, and dividing your donations between them, to avoid the problem described in a recent post.
I’m confused: what do you think stops the standard argument against multiple charities from working here?
If the cognitive dissonance caused by having to decide between two charities is too big, you might end up not donating at all, or much later. In that case it is better to split your money, so that you donate at least something until you are able to resolve which charity is best.
I thought I might send some to FHI, but found out that the only direct method of doing so seemed to be limited to people in Great Britain, and that in the US, I would have to donate to a separate organization and do a “write-in” to have it assigned, which did not leave me confident my funds would reach them. At the time, I called it a trivial inconvenience allowing me to easily leave my current donation plans in place.
So much for “what currently appears to be the world’s leading transhumanist organization.”
I really would like people closer to the SIAI to think before they make claims like that.
Of course I’m aware of FHI. When I made that statement I was thinking of independent organizations, not university branches. In any case, I’m happy to clarify the wording in the original post.
Of course I’m aware you’re aware of the FHI.
I’m grateful that this didn’t devolve into a drama festival. A portion of my faith in LW has been restored.
This analysis would seem to indicate that SIAI is at least in a tie for the world’s leading transhumanist organization. What’s your objection to the claim?
Even without contesting your framing, it doesn’t support the claim that SingInst “appears to be the leading organization”. Notice when you are primarily defending a position without paying sufficient attention to details of the argument.
“At least in a tie” by what metric? Which metrics matter? That’s a tarpit I’m not interested in pursuing, because it doesn’t sound productive to engage in it.
My objection is the same fork that we fell into before during the “rationality mini-camp was a success” fiasco: if there exists evidence, give it; otherwise, if only weak evidence exists, one ought to favor precision over self-aggrandizement.
Well, there are two major (EDIT: x-risk related; see discussion below) transhumanist organizations. This article provides a bunch of possible metrics for comparing them (cited papers, audience, publicity, achievements per dollar...).
It seems that you routinely post declaring that you are skeptical of SI’s claims. That’s good—we need loud contrarians sometimes. But I haven’t seen you specify what evidence you’re looking for that would resolve your skepticism.
Declaring yourself to be skeptical, without explaining what evidence you are looking for, just doesn’t seem to contribute to the debate all that much.
Or, in other words: I can’t figure out how much to update on your skepticism, because I can’t figure out what you’re skeptical about! You should consider writing a post like a few of XiXiDu’s explaining what you’re looking for.
There are two major x-risk-related transhumanist organizations. If you’re counting major transhumanist organizations in general, you definitely need to include the SENS Foundation and the Methuselah Foundation as well.
Upvoted, and edited to clarify.
For me personally, it seems fairly clear that x-risk oriented organizations are the most important ones in the field of philanthropy. That’s why I take the FHI/SI debate so seriously, and that’s why I asked why paper_machine felt that the claim was obviously off base. Are the other organizations worth looking into seriously as optimal philanthropy?
Yes, and it seems from Kaj’s data that by most of the obvious metrics, FHI is beating SI by a lot. One might think that SI is doing well for publicity, but that’s primarily through the Summits. The media coverage is clearly much larger for the FHI than the SI. Moreover, the media coverage for the Summits frequently focused on Kurzweil-style singularities and similar things.
The last Summit I was at, the previous NYC one, had an audience of close to a thousand that was relatively savvy and influential (registration records and other info show a lot of VCs, scientists, talented students, entrepreneurs and wealthy individuals) who got to see more substantive talks, even if those were not picked up as much by the media.
Also, one should apply the off-topic correction evenly: a fair amount of FHI media attention is likewise grabbing a quote on some fairly peripheral issue.
Interesting. That seems to be a strong argument to update more towards the SI.
I feel like it’s to some extent an apples-to-oranges comparison. FHI is obviously doing better in terms of academic credibility, as measured by citations and their smaller academic conferences; SI seems to be doing much better in mass-audience publicity, as measured by 1 million unique visitors to LessWrong (!) and the Singularity Summit, which is growing each year and has a dramatically larger and wider audience than any FHI conferences. The Visiting Fellows program also stood out as something SI was doing much better.
I don’t really see why that’s a “but”. Is it because the media focuses on Kurzweilian Singularities?
I would love to see a financial analysis of FHI along the lines of this one, to evaluate the achievements/dollars metric.
But based on the metrics we have, the only one which seems decisively in FHI’s favor is citations.
This article caused me to update in favor of “FHI is the best transhumanist charity”, but only marginally so. If your interpretation was stronger than that, I’d be interested in hearing why.
I’m not sure I’m thinking in terms of “best transhumanist charity”, which seems very hard to define. I’d say more that I’m thinking in terms of something like “which of these two charities is the most efficient use of my resources, especially in regards to reducing existential risk and encouraging the general improvement of humanity?”
If I were attempting to think about this in terms of a very broad set of “transhumanist” goals, then I’d point to the citations and the large number of media appearances as areas where the FHI seems to be much more productive than the SI. Citations are an obviously important metric: most serious existential risk issues require academics to pay attention, or what you do won’t matter much. Media attention is more important in getting people to realize that a) things like death really are as bad as they seem, and b) we might be able to really do something about them.
The primary reason I haven’t updated that much in the direction of the FHI is that I don’t know how much money the FHI is using to get these results, nor how much of this is work that the academics involved would have done anyway. (When academics become affiliated with an institution, they often do the work they would already be doing and just tweak it to fit the institution’s goals a bit more.)
The evidence I am looking for won’t be available until it is too late; that’s the problem. I have a hard time swallowing that pill. I also don’t trust my rationality enough yet to completely overpower my intuition on the subject. Further, I feel that my background knowledge and math skills are not yet sufficient for me to actually donate larger amounts of money to the Singularity Institute. I am trying to change that right now: I am almost at calculus over at Khan Academy (after Khan Academy, I am going to delve into Bayesian probability).
I’m curious why you think you need calculus to evaluate which charities to donate to. (Though I wholeheartedly approve of learning it).
Surely there’s some evidence that would cause you to update in favor of “SI knows what they’re talking about”, even if we won’t know many things until after a Singularity occurs or fails to occur. For example, I would update pretty dramatically in the direction of “they know what they’re doing” if Timeless Decision Theory went mainstream, since that seems to be an important accomplishment which I am not qualified to independently evaluate.
I don’t really know what exactly I will need beforehand, so I decided to just acquire a general math education. Regarding calculus in particular, in a recent comment someone wrote that you need it to handle a probability distribution.
What evidence would cause me to update in favor of “Otto Rössler knows what he’s talking about regarding risks associated with particle collision experiments”? I have no idea, I don’t even know enough about high energy physics to tell what evidence could convince me one way or the other, let alone judge any evidence. And besides, the math that would be necessary to read papers about high energy physics is ridiculously far above my head. And the same is true for artificial general intelligence, just that it seems orders of magnitude more difficult and that basically nobody knows anything about it.
That says little about their claims regarding risks from AI in my opinion.
I would imagine that the validity of SI’s claims in one area of research is correlated with the validity of their claims in other, related areas (like decision theory and recursively self-improving AI).
Depending on what you mean by “major”, I suppose. My first Google hit for “transhumanism” is Humanity+, and though I know nothing about them they at least seem to be in the same category. There’s also the much maligned Lifeboat Foundation, which could hypothetically count as a transhumanist organization. So there’s two more after five minutes of Googling.
That’s certainly not my MO for participating in LW. I’m a mathematician, not a loud contrarian. I’m currently working on summarizing Pearl’s work on causality so that people can make more sense out of the sequences, and therefore help more when Luke’s polyethics project gets off the ground.
Unfortunately, I’m also studying for my prelims, and so progress on Pearl has been a bit slow.
As I explained earlier, any hypothetically available evidence would have to be judged relative to some metric, and it’s not worthwhile to sit around and discuss which metrics are optimal, particularly when nobody is in an impartial position to do so.
You’re right. I guess I’m using the same metric as JoshuaZ (who’s the best marginal use of my dollars?) and I’m fairly convinced that’s existential risk, so I was discounting several non x-risk focused transhumanist charities, perhaps unfairly.
And apologies for implying you were solely a “loud contrarian”. I didn’t mean to imply that was your primary purpose for posting on LessWrong, just that I’d noticed many comments by you recently along those lines, and I was having difficulty interpreting your skepticism.
1) If it’s hard for you to figure out which organization is making better use of your money, what encourages you to continue donating?
2) What might cause you to halt your donations to these organizations in the future?
3) Are you confident that each penny of your donations is getting converted into fruitful actions that will bring substantial positive returns in the future?
4) How much are you currently donating to the two organizations, and to which one would you like to increase your donations?