Thanks for the links! Net bounty: $30. Sorry! Nearly all of them fail my admittedly-extremely-subjective “I subsequently think ‘yeah, that seemed well-reasoned’” criterion.
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest / as a costly signal of having engaged, I’ll publicly post my reasoning on each. (Not posting in order to argue, but if you do convince me that I unfairly dismissed any of them, such that I should have originally awarded a bounty, I’ll pay triple.)
(Re-reading this, I notice that my “reasons things didn’t seem well-reasoned” tend to look like counterarguments, which isn’t always the core of it—it is sometimes, sadly, vibes-based. And, of course, I don’t think that if I have a counterargument then something isn’t well-reasoned—the counterarguments I list just feel so obvious that their omission feels glaring. Admittedly, it’s hard to tell what was obvious to me before I got into the AI-risk scene. But so it goes.)
In the order I read them:
No bounty! (Reasoning: (a) I read this as either disproving humans or dismissing their intelligence, since no system can build anything super-itself; and (b) though it’s probably technically correct that no AI can do anything I couldn’t do given enough time, time is really important, as your next link points out!)
No bounty! (Reasoning: I perceive several of the confidently-stated core points as very wrong. Examples: “‘smarter than humans’ is a meaningless concept”—so is ‘smarter than a smallpox virus,’ but look what happened there; “Dimensions of intelligence are not infinite … Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us?”—compare me to John von Neumann! I am not near the maximum.)
No bounty! (Reasoning: the core argument seems to be on page 4: paraphrasing, “here are four ways an AI could become smarter; here’s why each of those is hard.” But two of those arguments are about “in the limit” with no argument that we’re anywhere near that limit, and one argument is just “we would need to model the environment,” which isn’t actually a proof of difficulty. The ensuing claim that the cost of getting better at prediction is “prohibitively high” seems deeply unjustified to me.)
No bounty! (Reasoning: the core argument seems to be (a) that there will be problems too hard for any AI to solve (e.g. traveling-salesman), plus (b) a rebuttal to a specific Moore’s-Law-focused argument. But the existence of arbitrarily hard problems doesn’t distinguish between plankton, lizards, humans, or superintelligent FOOMy AIs; so, unless more work is done to make it distinguish, it can’t rule out any of those possibilities without ruling out all of them. The quick sketch below makes the agent-independence of that hardness concrete.)
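To make that last point concrete, here’s a minimal brute-force TSP sketch of my own (the function name and distance-matrix format are illustrative assumptions, not anything from the article): the search cost grows as (n-1)! whether the searcher is a lizard, a human, or a FOOMy AI, which is exactly why this kind of hardness can’t single out any one of them.

```python
import itertools
import math

# A minimal sketch (my illustration, not from the article): exhaustively
# solving traveling-salesman costs (n-1)! route evaluations, and that
# cost is a property of the problem, not of whoever is searching.
def brute_force_tsp(dist):
    """dist: n x n matrix of pairwise distances; returns (best_cost, best_route)."""
    n = len(dist)
    best_cost, best_route = math.inf, None
    # Fix city 0 as the start so rotations of the same tour aren't recounted.
    for perm in itertools.permutations(range(1, n)):  # (n-1)! candidates
        route = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if cost < best_cost:
            best_cost, best_route = cost, route
    return best_cost, best_route

# The blowup hits anything that searches exhaustively, plankton or AI:
print(math.factorial(19))  # 20 cities -> 121645100408832000 routes (~1.2e17)
```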
(It’s costly for me to identify my problems with these articles and to write clear, concise summaries of those issues. Given that we’re 0 for 4 at this point, I’m going to skim the remainder more casually, on the prior that what tickles your sense of well-reasoned-ness doesn’t tickle mine.)
No bounty! (Reasoning: “Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.” Again, compare me to von Neumann! Compare von Neumann to a von Neumann who can copy himself, save/load snapshots, and tinker with his own mental architecture! “Complex minds are likely to have complex motivations”—but instrumental convergence: step 1 of any plan is to take over the world if you think you can. I know I would.)
No bounty! (Reasoning: has an alien-to-me model where AI safety is about hardcoding ethics into AIs.)
No bounty! (Reasoning: “Even if we did invent superhumanly intelligent robots, why would they want to enslave their masters or take over the world?” As above, step 1 is to take over the world. It also makes the “intelligence is multidimensional” / “intelligence can’t be infinite” points; I describe above why those feel so unsatisfying to me.)
No bounty! Too short, and I can’t dig up the primary source.
Bounty! I haven’t read it all yet, but I’m willing to pay out based on what I’ve read, and on my favorable priors around Katja Grace’s stuff.
Thanks! I knew I was outmatched in terms of specialist knowledge, so I just used Metaphor to pull as many somewhat-reasonable-sounding matching articles as possible before anyone else did. Kinda ironic that the bounty was awarded for the one article I actually went and found by hand. My median EV was $0, so this was a pleasant surprise.