“I don’t trust my ability to set limits on the abilities of Bayesian superintelligences.”
Limits? I can think up a few on the spot.
Environment: CPU power, RAM capacity, etc. I don’t think even you guys claim something as blatant as “an AI can break the laws of physics when convenient”.
Feats:
Win this kind of position in chess. Sure, the AI would never allow such a position to arise in the first place during a game, but that’s not my point.
Make a human understand the AI. Note: uplifting does not count, since the human then ceases to be human. For practice, try teaching your cat Kant’s philosophy.
Make the AI understand itself fully and correctly. This one actually works at every level. Can YOU understand yourself? Are you even theoretically capable of that? Hint: no (see the sketch after this list).
Related: survive actual self-modification, especially without any external help. Transhumanist fantasy says AIs will do it all the time. In reality, any self-preserving AI will be about as eager to perform self-modification as you would be to get a randomized, extreme form of lobotomy (the transhumanist version of Russian roulette, except every gun is fully loaded apart from one in a gazillion).
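To spell out the self-understanding point above, here is a minimal sketch (in Python, with purely illustrative names of my own) of the standard diagonalization argument: any supposedly complete, always-correct self-model can be defeated by a program that asks the model about itself and then does the opposite.

    def predict(program, arg):
        """Hypothetical perfect self-model: returns whatever program(arg)
        would return, without actually running it. Assumed to exist."""
        raise NotImplementedError("no total, always-correct predictor exists")

    def contrarian(arg):
        # Ask the supposed perfect predictor what we are about to return...
        guess = predict(contrarian, arg)
        # ...and then return the opposite, making the prediction wrong
        # by construction.
        return not guess

    # If predict() were total and always correct, predict(contrarian, x) would
    # have to equal contrarian(x), yet contrarian(x) is defined to differ from
    # it. The same diagonal move blocks any system from containing a complete,
    # always-correct model of its own behavior.

That is why “fully and correctly” is the load-bearing phrase in that feat.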
I guess some people are so used to thinking of AIs as magical, omnipotent technogods that they don’t even notice it. Sad.
“You can update by posting a header to all of your blog posts saying, “I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret.””
Wow, just wow. The cult leader demands a Stalin-style self-critique on every page (no sane person would consider that reasonable) and the censoring of all posts related to Less Wrong, after a campaign of harassment.