But we can recursively self-optimize ourselves for understanding mechanical systems or programming computers. Not infinitely, of course, but with different hardware it seems extremely plausible to smash through whatever ceiling a human might have, with the brute force of many calculated iterations of whatever it is humans are using.
And this is before the computer uses its knowledge to reoptimize its own optimization process.
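To make the two levels in that claim concrete, here is a deliberately toy sketch (my own illustration, not anything proposed in the thread, and no claim about how far the recursion scales): an object-level search that minimizes a function, plus a meta-level rule that re-tunes a parameter of the search procedure itself based on how well that procedure has been doing.

```python
# Toy illustration only: an optimizer that also adapts a parameter of its
# own search procedure. The object level minimizes f; the meta level
# adjusts the step size from recent success rate (a crude 1/5th rule).

import random


def f(x):
    # Arbitrary objective to minimize.
    return (x - 3.0) ** 2


def self_adapting_search(steps=10_000, seed=0):
    rng = random.Random(seed)
    x, best = 0.0, f(0.0)
    step_size = 1.0          # parameter of the search procedure itself
    successes = 0

    for i in range(1, steps + 1):
        candidate = x + rng.gauss(0.0, step_size)
        value = f(candidate)
        if value < best:     # object-level improvement
            x, best = candidate, value
            successes += 1

        if i % 50 == 0:      # meta level: re-tune the optimizer itself
            rate = successes / 50
            step_size *= 1.5 if rate > 0.2 else 0.5
            successes = 0

    return x, best, step_size


if __name__ == "__main__":
    print(self_adapting_search())
```

This shows only one meta level over one hyperparameter; whether algorithm-space has enough structure for this kind of loop to keep paying off far beyond the human level is exactly the question raised below.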
I understand the concept of recursive self-optimization and I don’t consider it to be very implausible.
Yet I am very sceptical: is there any evidence that algorithm-space has enough structure to permit the kind of effective search such an optimization would require?
I’m also not convinced that the human mind is a good counterexample; e.g. I do not know how much I could improve on the source code of a simulation of my brain once the simulation itself runs effectively.
Yet I am very sceptical: is there any evidence that algorithm-space has enough structure to permit the kind of effective search such an optimization would require?
I count “algorithm-space is really really really big” as at least some form of evidence. ;)
Mind you, by “is there any evidence?” you really mean “does the evidence lead to a high assigned probability?” That being the case, “No Free Lunch” must also be considered. Even so, NFL in this case mostly suggests that a general intelligence algorithm will be systematically bad at being generally stupid.
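For reference, the usual Wolpert and Macready (1997) statement of No Free Lunch for search: averaged over all possible objective functions on a finite domain, any two black-box algorithms produce the same distribution over cost-value sequences, roughly

```latex
\sum_{f} P\left(d_m^{y} \mid f, m, a_1\right) = \sum_{f} P\left(d_m^{y} \mid f, m, a_2\right)
```

where d_m^y is the sequence of m cost values sampled so far and the sum runs over every objective function f on the finite domain. The theorem only has force when all such functions are weighted equally, i.e. when the problem distribution has no structure at all, which is why it ends up saying so little about algorithms aimed at the structured problems we actually care about.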
Considerations that lead me to believe that a general intelligence algorithm is likely include the observation that we can already see progressively more general problem-solving processes in evidence just by looking at mammals. I also take more evidence from humanity than you do. Not because I think humans are good at general intelligence. We suck at it; it’s something that has been tacked on to our brains relatively recently, and it is far less efficient than our more specific problem-solving faculties. But the point is that we can do general intelligence of a form, eventually, if we dedicate ourselves to the problem.