All the other people and organizations, who are no less capable of identifying the preventable risks (if any exist) and addressing them, have to be assumed unable to prevent the destruction of mankind without SI. Just as in Pascal’s original wager, Thor and the other deities are ignored by omission.
As for how SI does not look good: well, it does not look good to Holden Karnofsky, or to me for that matter. His point about resistance to feedback loops is an extremely strong one.
On the rationality movement, here’s a quote from Holden:
Apparent poorly grounded belief in SI’s superior general rationality. Many of the things that SI and its supporters and advocates say imply a belief that they have special insights into the nature of general rationality, and/or have superior general rationality, relative to the rest of the population. (Examples here, here and here). My understanding is that SI is in the process of spinning off a group dedicated to training people on how to have higher general rationality.
Yet I’m not aware of any of what I consider compelling evidence that SI staff/supporters/advocates have any special insight into the nature of general rationality or that they have especially high general rationality.
Could you give me some examples of other people and organizations trying to prevent the risk of an Unfriendly AI? For me, it’s not so much that I believe SI has a great chance to develop the theory and prevent the danger, but rather that they are the only people who even care about this specific risk (which I believe to be real).
As soon as the message becomes widely known and smart people and organizations start rationally discussing the dangers of Unfriendly AI, and how to make a Friendly AI (avoiding some obvious errors, such as “a smart AI simply must develop a human-compatible morality, because it would be too horrible to think otherwise”), there is a pretty good chance that some of those organizations will be more capable than SI of reaching that goal: more smart people, better funding, etc. But at the moment, SI seems to be the only one paying attention to this topic.
It’s a crooked game, but it’s the only game in town?
None of that is evidence that SI would be more effective if it had more money. Assign odds to hostile AI becoming extant given low funding for SI, and compare the odds of hostile AI becoming extant given high funding for SI. The difference between those two is proportional to the value of SI (with regards to preventing hostile AI).
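To make that comparison explicit (a minimal sketch; the notation is mine, not anything from the thread):

\[
\Delta p = P(\text{hostile AI} \mid \text{SI poorly funded}) - P(\text{hostile AI} \mid \text{SI well funded}), \qquad \text{value of marginal funding to SI} \propto \Delta p .
\]

If \(\Delta p\) is indistinguishable from zero, additional money to SI buys essentially no reduction in risk, regardless of how large the absolute risk is.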
SI being the only one working on this ought to lower your probability that the whole enterprise is worthwhile in any way.
With regards to the ‘message’, I think you grossly overestimate the value of a rather easy insight that anyone who has watched Terminator could have. With regards to “rationally discussing”, what I have seen so far here is pure rationalization and very little, if any, rationality. What SI has on its track record is, once again, a lot of rationalization and not enough rationality to even have had an accountant through its first ten years and its first two-million-plus dollars of other people’s money.
Note that the second paragraph of that quote is one of Holden Karnofsky’s objections to SIAI: a high opinion of its own rationality that is, so far, not substantiable from the outside view.
Yes. I am sure Holden is being very polite, which is generally good, but I’ve been getting the impression that the point he was making did not fully carry across the same barrier that produced the above-mentioned high opinion of its own rationality, despite a complete lack of results for which rationality would be a better explanation than irrationality (and the presence of results which set a rather low ceiling on that rationality). The ‘resistance to feedback’ is an even stronger point, suggesting that the belief in one’s own rationality is, at least to some extent, combined with an expectation that it won’t pass the test, and with subsequent avoidance (rather than seeking) of tests; much as psychics who believe in their powers still avoid any reliable test.