“I don’t trust my ability to set limits on the abilities of Bayesian superintelligences.”
Limits? I can think up a few on the spot already.
Environment: CPU power, RAM capacity, etc. I don’t think even you guys claim something as blatant as “an AI can break the laws of physics when convenient”.
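For a down-to-earth illustration of those environmental ceilings, here is a minimal sketch (Unix-only, using Python’s standard resource module):

    import resource

    # Whatever runs on this machine sits under OS-enforced ceilings.
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)  # address-space cap
    print(f"address-space limit: soft={soft}, hard={hard}")

    # A process may lower its own ceiling, but raising the hard limit
    # takes privileges it cannot grant itself; cleverness does not
    # conjure extra RAM.
    new_soft = 256 * 1024 * 1024
    if hard == resource.RLIM_INFINITY or new_soft <= hard:
        resource.setrlimit(resource.RLIMIT_AS, (new_soft, hard))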
Feats:
Win this kind of situation in chess. Sure, the AI would not allow that situation to occur in the first place during a game, but that’s not my point (see the first sketch after this list).
Make a human understand the AI. Note: uplifting does not count, since the human then ceases to be human. For practice, try teaching your cat Kant’s philosophy.
Make the AI understand itself fully and correctly. This one actually applies at every level. Can YOU understand yourself? Are you even theoretically capable of that? Hint: no. (The second sketch after this list makes the obstacle formal.)
Related: survive actual self-modification, especially without any external help. The transhumanist fantasy says AIs will do it all the time. The reality is that any self-preserving AI will be as eager to perform self-modification as you would be to get a randomized, extreme form of lobotomy (the transhumanist version of Russian roulette, except with bullets in every chamber of every gun but one in a gazillion).
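To make the chess feat concrete, here is a minimal sketch using the python-chess library and a hypothetical position of my own (an assumption, since I can’t reproduce the linked diagram here): White to move is simply lost, and exhaustive enumeration proves it regardless of how clever the mover is.

    import chess

    # Hypothetical stand-in position: White Ka1 (to move), Black Kc1 and Qb4.
    board = chess.Board("8/8/8/8/1q6/8/8/K1k5 w - - 0 1")

    for white_move in list(board.legal_moves):
        san = board.san(white_move)
        board.push(white_move)
        # Collect every Black reply that is immediate checkmate.
        mates = []
        for black_move in list(board.legal_moves):
            board.push(black_move)
            if board.is_checkmate():
                mates.append(black_move)
            board.pop()
        print(f"{san} is met by: {[board.san(m) for m in mates]}")
        board.pop()

White’s only legal move is Ka2, and the script reports the mating replies (Qa4# and Qb2#). No amount of intelligence changes that printout; the game tree itself is the limit.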
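And the self-understanding feat has a formal core, the halting problem. Here is a runnable Python sketch of the diagonal argument (the predictor below is a deliberately dumb stand-in; the construction defeats any predictor you substitute for it):

    def candidate_halts(program, arg):
        # Pretend this is a perfect self-understanding oracle.
        return True

    def diagonal(program):
        # Do the opposite of whatever the predictor claims about us.
        if candidate_halts(program, program):
            while True:  # predicted to halt, so loop forever
                pass
        # predicted to loop, so halt immediately

    prediction = candidate_halts(diagonal, diagonal)
    print(f"Predictor claims diagonal(diagonal) halts: {prediction}")
    # By construction, diagonal(diagonal) does the opposite of the claim,
    # so the predictor is wrong on this input -- and the same trick works
    # against every possible predictor, not just this toy one.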
I guess some people are so used to thinking about AIs as magic, omnipotent technogods that they don’t even notice it. Sad.
As far as environment goes, the context says exactly the opposite of what you suggest it does.
Among your bullet points, only the first seems well-defined. I could try to discuss them anyway, but I suggest you just read up on the subject and come back. Eliezer’s organization has a great deal of research on self-understanding and theoretical limits; it’s the middle icon at the top right of the page.