A human is a counterexample. A human emulation would count as an AI, so human behavior is one possible AI behavior. Richard’s argument is that humans don’t respond to orders or requests in anything like the way the brittle, GOFAI-type systems invoked by the phrase “formal systems” do. You’re not considering that possibility. You’re still thinking in terms of formal systems.
(Unpacking the significant differences between how humans operate, and the default assumptions that the LW community makes about AI, would take… well, five years, maybe ten.)
> A human emulation would count as an AI, so human behavior is one possible AI behavior.
Uhh, no. Look, humans respond to orders and requests in the way that we do because we tend to care what the person giving the request actually wants. Not because we’re some kind of “informal system”. Any computer program is a formal system, but there are simply more and less complex ones. All you are suggesting is building a very complex (“informal”) system and hoping that because it’s complex (like humans!) it will behave in a humanish way.
Your response avoids the basic logic here. A human emulation would count as an AI, therefore human behavior is one possible AI behavior. There is nothing controversial in the statement; the conclusion is drawn from the premise. If you don’t think a human emulation would count as AI, or isn’t possible, or something else, fine, but… why wouldn’t a human emulation count as an AI? How, for example, can we even think about advanced intelligence, much less attempt to model it, without considering human intelligence?
> …humans respond to orders and requests in the way that we do because we tend to care what the person giving the request actually wants.
I don’t think this is generally an accurate (or complex) description of human behavior, but it does sound to me like an “informal system”—i.e. we tend to care. My reading of (at least this part of) PhilGoetz’s position is that it makes more sense to imagine something we would call an advanced or super AI responding to requests and commands with a certain nuance of understanding (as humans do) than with the inflexible (“brittle”) formality of, say, your average BASIC program.
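To make the “brittle formality” being contrasted here concrete, a toy sketch (hypothetical, and in Python rather than BASIC) of the kind of literal command handler in question: it recognizes only an exact, pre-specified string, so any rephrasing of the same intent fails outright.

```python
# Toy illustration of a "brittle" formal command handler: only the
# literal, pre-registered string is recognized; the requester's intent
# plays no role at all.
def brittle_handler(command: str) -> str:
    responses = {"FETCH WATER": "fetching water"}
    # Anything not an exact key lookup fails, however close in meaning.
    return responses.get(command, "SYNTAX ERROR")

print(brittle_handler("FETCH WATER"))     # exact string: handled
print(brittle_handler("get some water"))  # same intent, different words: rejected
```

The point under debate is whether an advanced AI must behave like this lookup table, or whether (as with humans) it could respond to what the requester actually wants.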
The thing is, humans do that by… well, not being formal systems, which pretty much requires you to keep a good fraction of the foibles and flaws of a non-formal, non-rigorously-rational system.
You’d be more likely to get FAI, but the FAI itself would be devalued, since it would now be possible for it to make rationality errors.
More likely, really?
You’re essentially proposing giving a human Ultimate Power. I doubt that will go well.
Iunno. Humans are probably less likely to go horrifically insane with power than the base chance of FAI.
Your chances aren’t good, just better.