Because it will also require translation from one vehicle to another. The output of the original program will require translation into something other than logging output. Language, and the processes that formulate it, do not happen much more quickly than the act of speaking does. And we already have plenty of programs that translate speech into text. Shorthand typists are able to keep up with multiple conversations in real time, no less.
And, as I have also said: early AGIs are likely to be idiots, not geniuses. (If for no other reason than that Whole Brain Emulations are likely to require far more time per neuronal event than a real human brain does. I have justification for this belief; that is how neuron simulations currently operate.)
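To make the slowdown point concrete, here is a back-of-the-envelope sketch. Every figure below is an illustrative assumption (the neuron count is the commonly cited ~86 billion estimate; the per-event cost and available compute are placeholders, not measurements), and the point is only that plausible numbers put a naive emulation below real time:

```python
# Back-of-the-envelope sketch of whole-brain-emulation slowdown.
# All numbers are illustrative assumptions, not measurements.

NEURONS = 8.6e10             # commonly cited human neuron count (~86 billion)
EVENTS_PER_NEURON_S = 10     # assumed average spike rate, events per second
FLOPS_PER_EVENT = 1e4        # assumed compute cost to simulate one neuronal event
HARDWARE_FLOPS = 1e15        # assumed available compute (1 petaFLOP/s)

# Compute needed to keep pace with biology in real time:
required = NEURONS * EVENTS_PER_NEURON_S * FLOPS_PER_EVENT

# Slowdown factor relative to real time (>1 means slower than a real brain):
slowdown = required / HARDWARE_FLOPS

print(f"required: {required:.2e} FLOP/s, slowdown factor: {slowdown:.1f}x")
```

Under these assumptions the emulation runs several times slower than real time; change any constant and the factor moves proportionally, but it takes optimistic figures on every line to reach parity.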
Because it will also require translation from one vehicle to another.
Even if this is unavoidable, I find it highly unlikely that we are at or near maximum transmission speed for that information, particularly on the typing/speaking side of things.
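The headroom claim can be illustrated with some commonly cited ballpark rates for human output channels (all figures rough, for illustration only); even among ordinary interfaces there is several-fold spread above average typing:

```python
# Rough throughput of human communication channels, in words per minute.
# All values are commonly cited ballpark figures, used here only to show
# the spread between channels -- none are measurements from this discussion.
channels_wpm = {
    "handwriting": 20,
    "typing (average)": 40,
    "typing (professional)": 75,
    "speaking": 150,
    "silent reading": 250,  # input side, included for contrast
}

baseline = channels_wpm["typing (average)"]
for name, wpm in sorted(channels_wpm.items(), key=lambda kv: kv[1]):
    print(f"{name:>22}: {wpm:4d} wpm ({wpm / baseline:.1f}x average typing)")
```

If speech already runs several times faster than average typing, the typing/speaking side is clearly not at any hard ceiling.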
And, as I have also said: early AGIs are likely to be idiots, not geniuses.
Yes. Early AGIs may well be fairly useless, even with the processing power of a chimpanzee brain. Around the time it is considered “human equivalent”, however, a given AGI is quite likely to be far more formidable than an average human.
Basically what you are saying is that any AGI will be functionally identical to a human. I strongly disagree, and find your given reasons fall far short of convincing me.
Basically what you are saying is that any AGI will be functionally identical to a human.
No. What I have said is that “human-equivalent AGI is not especially likely to be better at any given function than a human is.” This is nearly tautological. I have explained that the various tasks you’ve mentioned already have methodologies which allow the function to be performed at or near real-time speeds.
There is this deep myth that AGIs will automatically—necessarily—be “hooked into” databases or have their thoughts recorded into terminals which will be able to be directly integrated with programs, and so on.
That is a myth. Could those things be done? Certainly. But is it guaranteed?
By no means. As the example of Fritz shows, there is just no justification for the belief that merely because something is in a computer it will automatically have access to all of the resources we traditionally ascribe to computers. That’s like saying that because a word processor is on a computer it should be able to beat video games. It just doesn’t follow.
So whether you’re convinced or not, I really don’t especially care at this point. I have given reasons—plural—for my position, and you have not justified yours at all. So far as I can tell, you have allowed a myth to get itself cached into your thoughts and are simply refusing to dislodge it.
No. What I have said is that “human-equivalent AGI is not especially likely to be better at any given function than a human is.” This is nearly tautological.
This is nowhere near tautological, unless you define “human-level AGI” as “AGI that has roughly equivalent ability to humans in all domains,” in which case the distinction is useless: it basically specifies humans, possibly whole brain emulations, and the tiny, tiny fraction of nonhuman AGIs that are effectively human.
There is this deep myth that AGIs will automatically—necessarily—be “hooked into” databases or have their thoughts recorded into terminals which will be able to be directly integrated with programs, and so on.
Integration is not a binary state of direct or indirect. A pocket calculator is a more direct interface than a system where you mail in a query and receive the result in 4-6 weeks, despite the overall result being the same.
As the example of Fritz shows, there is just no justification for the belief that merely because something is in a computer it will automatically have access to all of the resources we traditionally ascribe to computers.
I don’t hold that belief, and if that’s what you were arguing against, you are correct to oppose it. I think humans have access to the same resources, but the access is less direct. A gain in speed can lead to a gain in productivity.