The whole SIAI project is not, as far as I’ve heard, publicly affiliated with other, more mainstream institutions with relevant expertise: universities, government agencies, corporations. We don’t have guest posts from Dr. X or Think Tank Fellow Y.
According to the about page, LW is brought to you by the Future of Humanity Institute at Oxford University. Does this count? Many Dr. Xes have spoken at the Singularity Summits.
At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat!
It’s not clear how one would use past data to give evidence for or against a UFAI threat in any straightforward way. There are various kinds of indirect evidence that could be presented, and SIAI has indeed been doing more in the last year or two to publish articles and give conference talks presenting such evidence.
The points that SIAI would do better if it had better PR, more transparency, and more publications in the scientific literature are all well taken, but those things consume limited resources, which makes it sound strange to me to use them as arguments for directing funding elsewhere.
My post was by way of explaining why some people (including myself) doubt the claims of SIAI. People doubt claims when they’re not justified as rigorously as other claims, or haven’t met certain public standards. Why do I agree with the main post that Eliezer isn’t justified in his opinion of his own importance (and SIAI’s importance)? Because there isn’t (yet) a lot beyond speculation here.
I understand about limited resources. If I were trying to run a foundation like SIAI, I might do exactly what it’s doing at first, and then try to get the academic credentials. But as an outside person, I’m trying to determine: is this worth my time? Is it worth further study? Is this a field I could work in? Is it worth giving away part of my (currently puny) income in donations? I’m likely to hold off until I see something stronger.
And I’m likely to be turned off by statements whose tone assumes that anyone sufficiently rational should already be on board. Well, no! It’s not an obvious, open-and-shut deal.
What if there were an organization composed of idealistic, speculative types who, without realizing it, had come to believe something completely false on the basis of sketchy philosophical arguments? They might look a lot like SIAI. Could an outside observer distinguish fruitful non-mainstream speculation from pointless non-mainstream speculation?
According to the about page, LW is brought to you by the Future of Humanity Institute at Oxford University. Does this count?
I contacted Nick Bostrom about this and he said that there’s no formal relationship between FHI and SIAI.
I think they are working on their “academic credentials”:
http://singinst.org/grants/challenge
...lists some 13 academic papers in various stages of development.
Thanks for that last link. The paper on “Changing the frame of AI futurism” is extremely relevant to this series of posts.
I contacted Nick Bostrom about this and he said that there’s no formal relationship between FHI and SIAI.
See my comments here, here and here.