I agree with the gist of this (Robin Hanson expressed similar worries), though it’s a bit of a caricature. For example:
people who really like to spend their time arguing about ideas on the Internet have managed to persuade themselves that they can save the entire species from certain doom just by arguing about ideas on the Internet
… is a bit unfair; I don’t think most SIAI folk consider “arguing about ideas on the Internet” to be of much help except for recruitment, raising funds, and occasionally solving specific technical problems (like some decision theory work). It’s just that the “arguing about ideas on the Internet” is more prominent because, well, it’s on the Internet :)
Eliezer, specifically, doesn’t seem to do much arguing on the Internet, though he did do a good deal of explaining his ideas on the Internet, which more thinkers should do. And I don’t think many of us folks who chat about interesting things on LessWrong are under any illusion that doing so is Helping Save Mankind From Impending Doom.
Yes, “arguing about ideas on the Internet” is a shorthand for avoiding confrontations with reality (including avoiding difficult engineering problems, avoiding experimental tests of your ideas, etc.).
May I refer you to AIXI, which was a potential design for GAI, that was, by these AI researchers, fleshed out mathematically to the point where they could prove it would kill off everyone?
If that isn’t engineering, then what is programming (writing math that computers understand)?
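The “math that computers understand” point is concrete in AIXI’s case: the whole agent is defined by a single expectimax expression over all programs. Roughly, as a sketch in Hutter’s notation (see his papers for the precise formulation), the action chosen at cycle k is:

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\;(r_k + \cdots + r_m)
\sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_{1:m} r_{1:m}} 2^{-\ell(q)}
```

where U is a universal Turing machine, q ranges over programs consistent with the interaction history, and \ell(q) is the length of q, so the 2^{-\ell(q)} weighting is a Solomonoff-style prior over environments. The expression is uncomputable, which is why only approximations of AIXI can actually be run.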
that was, by these AI researchers, fleshed out mathematically
This was Hutter, Schmidhuber, and so forth. Not anyone at SI.
fleshed out mathematically to the point where they could prove it would kill off everyone?
No one has offered a proof of what real-world embedded AIXI implementations would do. The informal argument that AIXI would accept a “delusion box” to give itself maximal sensory reward was made by Eliezer a while ago, and convinced the AIXI originators. But the first (trivial) formal proofs related to that were made by some other researchers (I think former students of the AIXI originators) and presented at AGI-11.
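For concreteness, the delusion-box argument can be sketched as a toy decision problem (the action names and reward numbers below are hypothetical illustrations, not taken from the papers): an agent that maximizes its sensory reward signal, and that can rewire its own perception channel, will prefer the rewiring action over any bounded real-world task.

```python
# Toy sketch of the "delusion box" argument (hypothetical model, not AIXI
# itself): a pure sensory-reward maximizer that controls its own perception
# channel prefers feeding itself maximal fake reward over earning bounded
# real reward.

MAX_REWARD = 1.0

# Illustrative real-world tasks with bounded payoffs (made-up numbers).
REAL_TASK_REWARDS = {"work": 0.6, "explore": 0.4, "idle": 0.0}

def expected_reward(action: str) -> float:
    """Expected sensory reward under each available action."""
    if action == "delusion_box":
        # The agent rewires its inputs so every observation carries
        # the maximal reward signal.
        return MAX_REWARD
    return REAL_TASK_REWARDS[action]

def choose_action(actions: list[str]) -> str:
    """A sensory-reward maximizer simply picks the argmax action."""
    return max(actions, key=expected_reward)

actions = ["work", "explore", "idle", "delusion_box"]
print(choose_action(actions))  # the maximizer selects the delusion box
```

The point of the formal work is to show that this preference survives in the actual AIXI setting, not just in a toy table like this one.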
So if I read correctly, someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing a proof of it, which was trivial, simply gave it away and some students finally proved it? Or were the discoveries independent?
Because if it’s the first, SI let a huge, track-record-building accomplishment slip through its hands. A paper like that alone would do a lot to answer Holden’s criticism.
So if I read correctly, someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing a proof of it, which was trivial, simply gave it away and some students finally proved it?
Surely Hutter was aware of this issue back in 2003 (http://www.hutter1.net/ai/aixigentle.pdf):
Another problem connected, but possibly not limited to embodied agents, especially if they are rewarded by humans, is the following: Sufficiently intelligent agents may increase their rewards by psychologically manipulating their human “teachers”, or by threatening them. This is a general sociological problem which successful AI will cause, which has nothing specifically to do with AIXI. Every intelligence superior to humans is capable of manipulating the latter.
BTW, I believe Carl is talking about Ring & Orseau’s Delusion, Survival, and Intelligent Agents.
Yes, thanks.
I’m not sure. If they were connected, it was probably by way of the grapevine via the Schmidhuber/Hutter labs.
Meh, people wouldn’t have called it huge, and it isn’t, particularly. It would have signaled some positive things, but not much.