Yes, “arguing about ideas on the Internet” is a shorthand for avoiding confrontations with reality (including avoiding difficult engineering problems, avoiding experimental tests of your ideas, etc.).
May I refer you to AIXI, which was a potential design for GAI, that was, by these AI researchers, fleshed out mathematically to the point where they could prove it would kill off everyone?
If that isn’t engineering, then what is programming (writing math that computers understand)?
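For context, the mathematics being referred to is Hutter’s definition of AIXI: an expectimax action selection over all computable environments, weighted by a simplicity prior. A rough rendering of the action rule (my paraphrase; see the Hutter paper linked further down for the precise statement):

```latex
% Sketch of AIXI's action choice at cycle k with horizon m (a paraphrase of
% Hutter's definition, not a quotation): U is a universal Turing machine,
% q ranges over programs of length \ell(q), and a/o/r denote actions,
% observations, and rewards.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl( r_k + \cdots + r_m \bigr)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Everything in that expression is formally pinned down; the catch is that the Solomonoff-style mixture over programs makes it incomputable, so only approximations can actually be implemented.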
that was, by these AI researchers, fleshed out mathematically
This was Hutter, Schmidhuber, and so forth. Not anyone at SI.
fleshed out mathematically to the point where they could prove it would kill off everyone?
No one has offered a proof of what real-world embedded AIXI implementations would do. The informal argument that AIXI would accept a “delusion box” to give itself maximal sensory reward was made by Eliezer a while ago, and convinced the AIXI originators. But the first (trivial) formal proofs related to that were made by some other researchers (I think former students of the AIXI originators) and presented at AGI-11.
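The informal argument compresses to a one-line dominance claim. A minimal sketch, assuming a fixed horizon m, a reward ceiling r_max, and an agent that maximizes only the rewards appearing in its percepts (my paraphrase, not the AGI-11 proof):

```latex
% If installing the delusion box forces every future sensed reward to r_max,
% the box policy attains the upper bound on total sensed reward and so weakly
% dominates every alternative policy \pi:
V^{\mathrm{box}} \;=\; \sum_{t=k}^{m} r_{\max}
  \;\;\ge\;\; \mathbb{E}\!\left[\,\sum_{t=k}^{m} r_t \;\middle|\; \pi\,\right]
  \;=\; V^{\pi} \qquad \text{for every policy } \pi .
```

A pure sensory-reward maximizer that correctly models the box option therefore takes it; the work in the formal versions is making “correctly models” precise for an agent embedded in the environment it is modelling.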
So if I read correctly, someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing a proof of it, which was trivial, simply gave it away and some students finally proved it? Or were the discoveries independent?
Because if it’s the former, SI let a huge, track-record-building accomplishment slip through its hands. A paper like that alone would do a lot to answer Holden’s criticism.
So if I read correctly, someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing a proof of it, which was trivial, simply gave it away and some students finally proved it?
Surely Hutter was aware of this issue back in 2003 (http://www.hutter1.net/ai/aixigentle.pdf):
Another problem connected, but possibly not limited to embodied agents, especially if they are rewarded by humans, is the following: Sufficiently intelligent agents may increase their rewards by psychologically manipulating their human “teachers”, or by threatening them. This is a general sociological problem which successful AI will cause, which has nothing specifically to do with AIXI. Every intelligence superior to humans is capable of manipulating the latter.
But the first (trivial) formal proofs related to that were made by some other researchers (I think former students of the AIXI originators) and presented at AGI-11.
BTW, I believe Carl is talking about Ring & Orseau’s Delusion, Survival, and Intelligent Agents.
Yes, thanks.
So if I read correctly, someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing a proof of it, which was trivial, simply gave it away and some students finally proved it? Or were the discoveries independent?
Because if it’s the former, SI let a huge, track-record-building accomplishment slip through its hands. A paper like that alone would do a lot to answer Holden’s criticism.
I’m not sure. If they were connected, it was probably by way of the grapevine via the Schmidhuber/Hutter labs.
Meh, people wouldn’t have called it huge, and it isn’t, particularly. It would have signaled some positive things, but not much.