This is a brief summary of what I consider the software-based non-fooming scenario.
Terminology
Impact—How much the agent can make the world the way it wants (hit a small target in search space, etc.)
Knowledge = correct programming.
General Outlook/philosophy
Do not assume that an agent knows everything; assume that an agent has to start off with no program to run. Try to figure out where and how it gets the information for the program, and whether it can be misled by its sources of information.
General Suppositions
High impact requires that someone (either the creator or the agent) have extensive knowledge of the world in order for the system to act appropriately. And the right knowledge: knowing trillions of digits of pi is generally not as useful as knowing where the oil reserves are when trying to take over the world.
The usefulness of knowledge is not inherently obvious, and it can change over time. Knowing how to efficiently get blubber from a whale is less useful now that we have oil.
Knowledge can be acquired through social means, derived from prior knowledge and experience, or gained experimentally.
Moving beyond your current useful knowledge requires luck in picking the right things to analyse for statistical correlations (see the sketch below).
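A minimal sketch of this point in Python (my own toy construction, not anything from the scenario itself): the environment exposes many variables, only a few pairs of which really correlate, and an agent with a fixed experiment budget has to guess which pairs to test. All names and numbers are invented for illustration.

```python
# Toy model: blind search for useful correlations under a budget.
import random

random.seed(0)

N_VARS = 1000                                     # observable variables
SIGNAL_PAIRS = {(3, 97), (12, 440), (500, 501)}   # pairs with real signal
BUDGET = 50                                       # affordable experiments

def test_pair(pair):
    """Pretend statistical test: True iff the pair carries real signal."""
    return pair in SIGNAL_PAIRS

# The agent picks pairs blindly; most of the budget lands on noise.
tested = set()
while len(tested) < BUDGET:
    a, b = sorted(random.sample(range(N_VARS), 2))
    tested.add((a, b))

found = [p for p in tested if test_pair(p)]
print(f"Useful correlations found with budget {BUDGET}: {len(found)}")
# With ~500,000 candidate pairs and only 3 carrying signal, 50 blind
# tests almost always find nothing: progress is luck-limited.
```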
Knowledge can rely on other knowledge.
Historical Explanations
Evolution can gather knowledge.
Brains can gather knowledge. It is monumentally wasteful if the individual dies and the knowledge is not passed on, as the next generation has to reinvent the wheel each time.
Much of the increase in the impact of individuals over evolutionary history has been due to the passing of knowledge between individuals, not improvements in the base algorithms for deriving knowledge from sense data. This is especially true in the case of humans with language. High G/IQ may be based on the ability to get knowledge from other humans, and thus to have higher impact.
-- Acquiring knowledge from other humans might not always be good, as they lie and manipulate. There is lots of rubbish out there. It is hard to distinguish good from bad (that itself requires knowledge), but still easier than reinventing the wheel. Equivalents of malware detectors are needed for the bad stuff. Such detectors might falsely recognise good knowledge as bad programming; a toy sketch follows.
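As an illustration of that false-positive problem (again my own sketch, with invented names and thresholds): a filter that accepts claims only from sufficiently reputable sources blocks much of the rubbish, but also rejects a correct fringe idea.

```python
# Toy "knowledge malware detector": filter claims by source reputation.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source_reputation: float  # 0.0 (unknown crank) .. 1.0 (trusted)
    actually_correct: bool    # ground truth, hidden from the detector

REPUTATION_THRESHOLD = 0.6    # assumed cutoff; stricter = more false positives

def accept(claim: Claim) -> bool:
    return claim.source_reputation >= REPUTATION_THRESHOLD

claims = [
    Claim("well-replicated result", 0.9, True),
    Claim("confident-sounding nonsense", 0.7, False),  # gets through
    Claim("correct but fringe idea", 0.2, True),       # falsely rejected
]

for c in claims:
    verdict = "accept" if accept(c) else "reject"
    print(f"{verdict:>6}: {c.text} (correct={c.actually_correct})")
```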
Computational resources are only useful insofar as you have the correct knowledge to make use of them. If the only useful models you can find are simple, you don't need more resources (sketched below).
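A quick numerical sketch of that plateau (my construction; the linear world and the noise level are arbitrary assumptions): once a simple model reaches the noise floor, throwing vastly more data and compute at the problem gains essentially nothing.

```python
# Toy model: extra resources stop paying off once the simple model saturates.
import numpy as np

rng = np.random.default_rng(0)

def fit_and_test(n_train):
    x = rng.uniform(-1, 1, n_train)
    y = 2.0 * x + 0.3 + rng.normal(0, 0.1, n_train)   # simple, linear world
    slope, intercept = np.polyfit(x, y, 1)            # linear fit
    x_test = rng.uniform(-1, 1, 10_000)
    y_test = 2.0 * x_test + 0.3 + rng.normal(0, 0.1, 10_000)
    pred = slope * x_test + intercept
    return np.mean((pred - y_test) ** 2)

for n in (100, 10_000, 1_000_000):                    # ~10,000x more "resources"
    print(f"n={n:>9}: test MSE = {fit_and_test(n):.5f}")
# All three budgets land near the irreducible noise variance (0.01):
# more resources without better knowledge/models gain roughly nothing.
```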
Future
The fastest scenario is that an AI quickly expands its computational resources, enabling it to make use of all of human knowledge. After that, progress is “slow”.
—More likely, we are going to have problems with AIs not being able to inherently recognise things like post-modernist thinking as dead ends of human thought. An AI naturally lacks all the knowledge that evolution has given humans as a shared base for understanding and predicting each other, so we might expect it to have lots of problems in this regard.