If computational epistemology is not the full story, if the true epistemology for a conscious being is “something more”, then you are saying that it is so incomplete as to be invalid.
It’s a valid way to arrive at a state-machine model of something. It just won’t tell you what the states are like on the inside, or even whether they have an inside. The true ontology is richer than state-machine ontology, and the true epistemology is richer than computational epistemology.
I’m not sure you appreciate the distance still to go “just” with regard to a provable friendliness theory, let alone a workable foundation for strong AI, a large scientific field in its own right.
I do know that there’s lots of work to be done. But this is what Eliezer’s sequence will be about.
An agent’s intelligence is thus an emergent property of being able to manipulate the world in accordance with its morals, i.e. it is not an additional property.
I agree with the Legg-Hutter idea that quantifiable definitions of general intelligence for programs should exist, e.g. by ranking programs using some combination of stored mathematical knowledge and quality of general heuristics. You have to worry about no-free-lunch theorems and so forth (i.e. a program’s IQ depends on the domain being tested), but on a practical level there’s no question that the efficiency of the algorithms and the quality of the heuristics available to an AI are at least semi-independent of what the AI’s goals are. Otherwise all chess programs would be equally good.
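For concreteness, the Legg-Hutter universal intelligence measure scores a policy π by its complexity-weighted expected reward across all computable environments:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

where K(μ) is the Kolmogorov complexity of environment μ and V^π_μ is π’s expected cumulative reward in μ. The 2^{-K(μ)} weighting is exactly where the no-free-lunch worry bites: pick a different distribution over environments and you get a different ranking.

The “semi-independent of goals” point can be made with a toy experiment. Below is a minimal sketch (all names are hypothetical, not from any particular library): a local-search agent with a larger evaluation budget outscores a weaker one across several unrelated objectives, the way a deeper-searching chess engine wins whichever side it is handed.

```python
import random

def hill_climb(objective, budget, dim=40, steps=30, seed=0):
    """Greedy local search over bit-vectors. `budget` is the number of
    candidate flips evaluated per step: a crude stand-in for algorithmic
    quality, independent of which objective is being pursued."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(dim)]
    for _ in range(steps):
        candidates = []
        for _ in range(budget):
            y = list(x)
            y[rng.randrange(dim)] ^= 1  # flip one random bit
            candidates.append(y)
        best = max(candidates, key=objective)
        if objective(best) >= objective(x):
            x = best
    return objective(x)

# Three unrelated "goals" over the same search space.
goals = {
    "max ones":    lambda x: sum(x),
    "max zeros":   lambda x: len(x) - sum(x),
    "alternation": lambda x: sum(a != b for a, b in zip(x, x[1:])),
}

for name, g in goals.items():
    weak = hill_climb(g, budget=1, seed=1)
    strong = hill_climb(g, budget=10, seed=1)
    print(f"{name:11s}  weak agent: {weak:3d}   strong agent: {strong:3d}")
```

On these easy objectives the budget-10 agent should reliably finish with the higher score whichever goal it is given; its advantage comes from search quality, not from any affinity for a particular objective.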