It’s not something you can ever come close to competing with by a philosophy invented from scratch.
I don’t understand what you mean by this. Are you saying something like: if a society were ever taken over by a Friendly AI, it would fail to compete against one ruled by law, in either a military or an economic sense? Or do you mean “compete” in the sense of providing the most social good? Or something else?
I stand by my comment that “AGI” and “friendliness” are hopelessly anthropomorphic, infeasible, and/or vague.
I disagree with “hopelessly”, “anthropomorphic”, and “vague”, but “infeasible” I may very well agree with, if you mean something like: it’s highly unlikely that a human team would succeed in creating a Friendly AGI before it’s too late to make a difference and without creating unacceptable risk, which is why I advocate more indirect methods of achieving it.
Computer “goals” are only usefully studied against actual algorithms, or clearly defined mathematical classes of algorithms, not vague and imaginary concepts.
People are trying to design such algorithms, things like practical approximations to AIXI, or better alternatives to AIXI. Are you saying they should refrain from using the word “goals” until they have actually come up with concrete designs, or what? (Again I don’t advocate people trying to directly build AGIs, Friendly or otherwise, but your objection doesn’t seem to make sense.)
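To make “goals” concrete in the way your objection seems to ask for, here is a minimal, purely illustrative sketch in Python. It is not AIXI, not an approximation to AIXI, and not anyone’s proposed AGI design; every name in it (the utility function, the plan routine, the toy world) is hypothetical and chosen only for clarity. It shows one sense in which a goal can be stated as an explicit utility function evaluated by an explicit search algorithm:

```python
# Toy illustration only: a "goal" made concrete as a utility function
# evaluated by an explicit (brute-force) planning algorithm.

from itertools import product

# A tiny deterministic world: the agent moves on a 1-D line by -1, 0, or +1
# each step, and is rewarded for being at position +3.
ACTIONS = (-1, 0, +1)

def utility(state):
    """The 'goal', stated as an explicit function of world states."""
    return 1.0 if state == 3 else 0.0

def plan(state, horizon):
    """Exhaustively search for the action sequence maximizing total utility.

    Any claim about this agent's 'goals' is a claim about the utility
    function plus this search procedure, not about anything anthropomorphic.
    """
    best_value, best_actions = float("-inf"), None
    for actions in product(ACTIONS, repeat=horizon):
        s, total = state, 0.0
        for a in actions:
            s += a
            total += utility(s)
        if total > best_value:
            best_value, best_actions = total, actions
    return best_actions, best_value

if __name__ == "__main__":
    actions, value = plan(state=0, horizon=4)
    print("chosen actions:", actions, "achieved utility:", value)
```

In a toy like this, studying the system’s “goals” just means studying the utility function and the search procedure that uses it, which is the sense in which goals can be analyzed against actual algorithms (or well-defined classes of them) rather than against vague concepts.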
It’s not something you can ever come close to competing with by a philosophy invented from scratch.
I don’t understand what you mean by this.
A sufficient cause for Nick to claim this would be that he believed that no human-conceivable AI design would be able to incorporate by any means, including by reasoning from first principles or even by reference, anything functionally equivalent to the results of all the various dynamics of updating that have (for instance) made present legal systems as (relatively) robust (against currently engineerable methods of exploitation) as they are.
This seems somewhat strange to you, because you believe humans can conceive of AI designs that could reason some things from first principles (given observations of the world that the reasoning needed to be relevant to, plus reasonably anticipatable advantages of computing power over single humans) or incorporate results by reference.
One possible reason he might believe this would be that he believed that, whenever a human reasons about history or evolved institutions, there are something like two distinct levels of a computational complexity hierarchy at work, and that the powers of the greater level (history and the evolution of institutions) are completely inaccessible to the powers of the lesser level (the human). (The machines representing the two levels in this case might be “the mental states accessible to a single armchair philosophy community”, or, alternatively, “fledgling AI which, per a priori economic intuition, has no advantage over a few philosophers”, versus “the physical states accessible in human history”.)
This belief of his might be charged with a sort of independent half-intuitive aversion to making the sorts of (frequently catastrophic) mistakes that are routinely made by people who think they can metaphorically breach this complexity barrier. One effect of such an aversion would be that he would intuitively anticipate that he would always be, at least in expected value, wrong to agree with such people, no matter what arguments they could turn out to have. That is, it wouldn’t increase his expected rightness to check to see if they were right about some proposed procedure to get around the complexity barrier, because, intuitively, the prior probability that they were wrong, the conditional probability that they would still be wrong despite being persuasive by any conventional threshold, and the wrongness of the cost that had empirically been inflicted on the world by mistakes of that sort, would all be so high. (I took his reference to Hayek’s Fatal Conceit, and the general indirect and implicitly argued emotional dynamic of this interaction, to be confirmation of this intuitive aversion.) By describing this effect explicitly, I don’t mean to completely psychologize here, or make a status move by objectification. Intuitions like the one I’m attributing can (and very much should!), of course, be raised to the level of verbally presented propositions, and argued for explicitly.
(For what it’s worth, the most direct counter to the complexity argument expressed this way is: “with enough effort it is almost certainly possible, even from this side of the barrier, to formalize how to set into motion entities that would be on the other side of the barrier”. To cover the pragmatics of the argument, one would also need to add: “and agreeing that this amount of effort is possible can even be safe, so long as everyone who heard of your agreement was sufficiently strongly motivated not to attempt shortcuts”.)
Another, possibly overlapping reason would have to do with the meta level that people around here normally imagine approaching AI safety problems from—that being, “don’t even bother trying to invent all the required philosophy yourself; instead do your best to try to formalize how to mechanically refer to the process that generated, and could continue to generate, something equivalent to the necessary philosophy, so as to make that process happen better or at least to maximally stay out of its way” (“even if this formalization turns out to be very hard to do, as the alternatives are even worse”). That meta level might be one that he doesn’t really think of as even being possible. One possible reason for this would be that he wasn’t aware that anyone actually ever meant to refer to a meta level that high, so that he never developed a separate concept for it. Perhaps when he first encountered e.g. Eliezer’s account of the AI safety philosophy/engineering problem, the concept he came away with was based on a filled-in assumption about the default mistake that Eliezer must have made and the consequent meta level at which Eliezer meant to propose that the problem should be attacked, and that meta level was far too low for success to be conceivable, and he didn’t afterwards ever spontaneously find any reason to suppose you or Eliezer might not have made that mistake. Another possible reason would be that he disbelieved, on the above-mentioned a priori grounds, that the proposed meta level was possible at all. (Or, at least, that it could ever be safe to believe that it was possible, given the horrors perpetrated and threatened by other people who were comparably confident in their reasons for believing similar things.)