An AI that is as computationally expensive as a human will almost certainly be much better at the things people are phenomenally bad at.
I’m sorry, this is just plain not valid. I’ve already explained why. An AI that is “as computationally expensive as a human” is no more likely to be “much better at the things people are phenomenally bad at” than is a human. All of the computation that goes on in a human would quite likely need to be replicated by that AGI. And there is simply no guarantee that it would be any better than a human when it comes to how it accesses narrow AI mechanisms (storage methods, calculators, etc.).
I really do wish I knew why you folks always seem to assume this is an inerrant truth of the world. But based on what I have seen—it’s just not very likely at all.
I’m not sure exactly what part of my statement you disagree with.
1. People are phenomenally bad at some things.
A pocket calculator is far better than a human when it comes to performing basic operations on numbers. Unless you believe that a calculator is amazingly good at arithmetic, it stands to reason that humans are phenomenally bad at it.
2. An AGI would be better than people in the areas where humans suck.
I am aware of the many virtues of fuzzy, massively parallel processes for arriving at answers to complex questions. However, some tasks are better handled by serial, logical processes. I don’t see why an AGI wouldn’t pick this low-hanging fruit. My reasoning is as follows; please tell me which part is wrong.
I. An emulation (not even talking about nonhuman AGIs at this point) would be able to perform as well as a human with access to a computer with, say, Python.
II. The way humans currently interact with computers is horribly inefficient. We translate our thoughts into a programming language, which we then translate into a series of motor impulses corresponding to keystrokes. We then run the program, which displays the feedback in the form of pixels of different brightness, which are translated by our visual cortex into shapes, which we then process for meaning.
III. There exist more efficient methods that, at a minimum, could bypass the barriers of typing speed and visual processing speed. (I suspect this is the part you disagree with; a toy timing sketch follows below.)
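To put rough numbers on II and III, here is a toy Python sketch. Every rate in it (program size, typing and reading speeds, and the hypothetical direct-transfer rate) is an assumption chosen for illustration, not a measured figure:

```python
# Toy model of the pipeline in II versus the bypass in III.
# Every rate below is an illustrative assumption.

PROGRAM_CHARS = 2_000    # size of a small program, in characters
OUTPUT_CHARS = 10_000    # size of its textual output

TYPING_CPS = 5.0         # characters/second, roughly a 60 wpm typist
READING_CPS = 25.0       # characters/second, roughly 300 wpm reading
DIRECT_CPS = 1e9         # assumed near-instant digital transfer

def round_trip_seconds(write_cps: float, read_cps: float) -> float:
    """Seconds to enter the program and absorb its output."""
    return PROGRAM_CHARS / write_cps + OUTPUT_CHARS / read_cps

human = round_trip_seconds(TYPING_CPS, READING_CPS)
direct = round_trip_seconds(DIRECT_CPS, DIRECT_CPS)
print(f"keyboard and screen: {human:.0f} s")
print(f"direct interface:    {direct:.6f} s")
```

Under these assumptions the keyboard-and-screen loop takes hundreds of seconds while the direct path is effectively instantaneous; the exact figures matter far less than the orders of magnitude between them.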
What have you seen that makes you think AGIs with some superior skills to humans won’t exist?
What have you seen that makes you think AGIs with some superior skills to humans won’t exist?
Human-equivalent AGIs. That’s a vital element, here. There’s no reason to expect that the AGIs in question would be better able to achieve output in most—if not all—areas. There is this ingrained assumption in people that AGIs would be able to interface with devices more directly—but that just isn’t exactly likely. Even if they do possess such interfaces, at the very least the early examples of such devices are quite likely to be only barely adequate to the task of being called “human-equivalent”. Karl Childers rather than Sherlock Holmes.
There’s no reason to expect that the AGIs in question would be better able to achieve output in most—if not all—areas.
I said some, not most or all. I expect there to be relatively few of these areas, but large superiority in some particular minor skills can allow for drastically different results. It doesn’t take general superiority.
There is this ingrained assumption in people that AGIs would be able to interface with devices more directly—but that just isn’t exactly likely.
There is a reason we have this assumption. Do you think that translating our thoughts into motor nerve impulses that operate a keyboard and processing the output of the system through our visual cortex before assigning meaning is the most efficient system?
Do you think that translating our thoughts into motor nerve impulses that operate a keyboard and processing the output of the system through our visual cortex before assigning meaning is the most efficient system?
Why is a superior interface unlikely?
Humans can improve their interfacing with computers too... though we will likely interact more awkwardly than AGIs will be able to. From The Onion, my favorite prediction of man-machine interface.
Is that “Humans can also improve their interfacing with computers” or “Humans can improve their interfacing with computers as well as AGI could”?
Edited.
Because it will also require translation from one vehicle to another. The output of the original program will require translation into something other than logging output. Language, and the processes that formulate it, do not happen much more quickly than the act of speaking does. And we have plenty of programs out there that translate speech into text. Shorthand typists are able to keep up with multiple conversations in real time, no less.
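To put ballpark figures on those channels, here is a quick sketch; the words-per-minute values are commonly cited round numbers, assumed purely for illustration:

```python
# Ballpark throughput of the channels mentioned above. The
# words-per-minute figures are commonly cited round numbers,
# assumed here purely for illustration.

channels_wpm = {
    "casual typing": 40,
    "fast typing": 80,
    "conversational speech": 150,
    "professional stenography": 225,
}

for channel, wpm in channels_wpm.items():
    # WPM figures conventionally assume about 5 characters per word.
    chars_per_second = wpm * 5 / 60
    print(f"{channel:>25}: {wpm:>3} wpm (~{chars_per_second:.1f} chars/s)")
```

On these figures, stenography already outruns ordinary speech, which is the sense in which existing channels operate at or near conversational speed.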
And, as I have also said: early AGIs are likely to be idiots, not geniuses. (If for no other reason than the fact that Whole Brain Emulations are likely to require far more time per neuronal event than a real human does. I have justification for this belief; that’s how neuron simulations currently operate.)
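A back-of-envelope version of that slowdown claim, with every quantity an assumed placeholder rather than a measured value:

```python
# Back-of-envelope sketch of why a Whole Brain Emulation might run
# slower than real time. Every number is an assumed placeholder.

NEURONS = 8.6e10                 # rough human neuron count
SYNAPSES_PER_NEURON = 1e4        # assumed average
MEAN_FIRING_HZ = 1.0             # assumed average firing rate
FLOPS_PER_SYNAPTIC_EVENT = 10.0  # assumed modeling cost per event

# Work the emulation must do for each second of simulated brain time.
flops_per_sim_second = (NEURONS * SYNAPSES_PER_NEURON
                        * MEAN_FIRING_HZ * FLOPS_PER_SYNAPTIC_EVENT)

HOST_FLOPS = 1e15                # assumed sustained host throughput

slowdown = flops_per_sim_second / HOST_FLOPS
print(f"~{slowdown:.0f}x slower than real time under these assumptions")
```

Different assumptions move the factor around by orders of magnitude; the point is only that plausible inputs readily yield an emulation that runs slower than real time.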
Because it will also require translation from one vehicle to another.
Even if this is unavoidable, I find it highly unlikely that we are at or near maximum transmission speed for that information, particularly on the typing/speaking side of things.
And, as I have also said: early AGIs are likely to be idiots, not geniuses.
Yes. Early AGIs may well be fairly useless, even with the processing power of a chimpanzee brain. Around the time it is considered “human equivalent”, however, a given AGI is quite likely to be far more formidable than an average human.
I strongly disagree, and I have given reasons why this is so.
Basically what you are saying is that any AGI will be functionally identical to a human. I strongly disagree, and find your given reasons fall far short of convincing me.
Basically what you are saying is that any AGI will be functionally identical to a human.
No. What I have said is that “human-equivalent AGI is not especially likely to be better at any given function than a human is.” This is nearly tautological. I have explained that the various tasks you’ve mentioned already have methodologies which allow for the function to be performed at nearly or equal to real-time speeds.
There is this deep myth that AGIs will automatically—necessarily—be “hooked into” databases or have their thoughts recorded into terminals which can be directly integrated with programs, and so on.
That is a myth. Could those things be done? Certainly. But is it guaranteed?
By no means. As the example of Fritz shows—there is just no justification for this belief that merely because it’s in a computer it will automatically have access to all of these resources we traditionally ascribe to computers. That’s like saying that because a word-processor is on a computer it should be able to beat video games. It just doesn’t follow.
So whether you’re convinced or not, I really don’t especially care at this point. I have given reasons—plural—for my position, and you have not justified yours at all. So far as I can tell, you have allowed a myth to get itself cached into your thoughts and are simply refusing to dislodge it.
No. What I have said is that “human-equivalent AGI is not especially likely to be better at any given function than a human is.” This is nearly tautological.
This is nowhere near tautological, unless you define “human-level AGI” as “AGI that has roughly equivalent ability to humans in all domains”, in which case the distinction is useless, as it basically specifies humans and possibly whole brain emulations, and the tiny, tiny fraction of nonhuman AGIs that are effectively human.
There is this deep myth that AGIs will automatically—necessarily—be “hooked into” databases or have their thoughts recorded into terminals which can be directly integrated with programs, and so on.
Integration is not a binary state of direct or indirect. A pocket calculator is a more direct interface than a system where you mail in a query and receive the result in 4-6 weeks, despite the overall result being the same.
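A toy sketch of that point, treating directness as latency; the interfaces and latency figures here are arbitrary illustrative assumptions:

```python
# Directness as a latency spectrum rather than a binary. The same
# query yields the same answer; only the round-trip time differs.
# All latencies are arbitrary illustrative assumptions.

interface_latency_seconds = {
    "direct programmatic call": 0.001,
    "pocket calculator": 5.0,
    "mail-in query (about 5 weeks)": 5 * 7 * 24 * 3600,
}

for interface, latency in interface_latency_seconds.items():
    queries_per_day = 86_400 / latency
    print(f"{interface:>30}: {queries_per_day:,.2f} queries/day")
```

Same answer in every case; only the achievable query rate, and with it the productivity, differs.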
As the example of Fritz shows—there is just no justification for this belief that merely because it’s in a computer it will automatically have access to all of these resources we traditionally ascribe to computers.
I don’t hold that belief, and if that’s what you were arguing against, you are correct to oppose it. I think humans have access to the same resources, but the access is less direct. A gain in speed can lead to a gain in productivity.