In all seriousness, there’s a lot you’re saying that seems contradictory at first glance. A few snippets:
My message for friendly AI researchers is not that computational epistemology is invalid, or that it’s wrong to think about the mind as a state machine, just that all that isn’t the full story.
If computational epistemology is not the full story, if the true epistemology for a conscious being is “something more”, then you are saying that it is so incomplete as to be invalid. (Doesn’t Searle hold similar beliefs, along the lines of “consciousness is something that brain matter does”? No uploading for you two!)
I do not think that SI intends to just program its AI with an a priori belief in the Everett multiverse
I’m not sure you appreciate the distance that “just” has to cover with regard to a provable friendliness theory, let alone a workable foundation of strong AI, a large scientific field in its own right.
The question of which “a priori beliefs” are supposed to be programmed or not programmed into the AI is so far off as to be irrelevant.
Also note that if those beliefs turn out not to be invariant with respect to friendliness (and why should they be?), they are going to be updated until they converge towards more accurate beliefs anyway.
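(To make “updated until they converge” concrete, here is a toy sketch, with invented numbers and hypothesis labels, of how repeated Bayesian updating drags even a heavily biased prior towards whichever hypothesis actually predicts the observations better.)

```python
# Toy sketch (invented numbers): repeated Bayesian updating drives even a
# heavily biased prior towards the hypothesis that predicts the data better.
import random

random.seed(0)

# Two rival hypotheses about a coin-like observable process:
# H_a predicts heads with probability 0.5, H_b with probability 0.7.
likelihood = {"H_a": 0.5, "H_b": 0.7}
prior = {"H_a": 0.9, "H_b": 0.1}   # the AI starts out heavily biased towards H_a

true_p = 0.7                        # but the world actually behaves like H_b

for _ in range(1000):
    heads = random.random() < true_p
    # Bayes' rule: posterior is proportional to prior times likelihood.
    post = {h: prior[h] * (p if heads else 1 - p) for h, p in likelihood.items()}
    norm = sum(post.values())
    prior = {h: v / norm for h, v in post.items()}

print(prior)  # after ~1000 observations, almost all probability mass is on H_b
```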
For example, if your Friendly AI was to be an expected utility maximizer, it would need to model the world correctly (“true ontology”), value the world correctly (“true morality”), and it would need to outsmart its opponents (“win the intelligence race”).
“Ontology + morals” corresponds to “model of the current state of the world + actions to change it”, and the efficiency of those actions equals “intelligence”. An agent’s intelligence is thus an emergent property of being able to manipulate the world in accordance with your morals, i.e. it is not an additional property but is inherent in your so-called “true ontology”.
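(For concreteness, a minimal sketch, with purely illustrative names and numbers rather than anyone’s actual design, of where each of those notions sits in the textbook expected-utility picture: the world model plays the “ontology” role, the utility function the “morals” role, and the search over actions is the part usually labelled “intelligence”.)

```python
# Minimal expected-utility-maximizer sketch (names purely illustrative).
# world model : the "ontology", predicts P(outcome | state, action)
# utility     : the "morals", how much each outcome is worth
# choose      : the search step, the part usually labelled "intelligence"
from typing import Callable, Dict, Iterable

Model = Callable[[str, str], Dict[str, float]]
Utility = Callable[[str], float]

def expected_utility(action: str, state: str, model: Model, utility: Utility) -> float:
    """Average the utility of predicted outcomes, weighted by their probability."""
    return sum(p * utility(outcome) for outcome, p in model(state, action).items())

def choose(state: str, actions: Iterable[str], model: Model, utility: Utility) -> str:
    """Pick the action whose predicted consequences score highest."""
    return max(actions, key=lambda a: expected_utility(a, state, model, utility))

# Toy instantiation.
def toy_model(state: str, action: str) -> Dict[str, float]:
    return {"good": 0.6, "bad": 0.4} if action == "act" else {"good": 0.1, "bad": 0.9}

toy_utility = {"good": 1.0, "bad": -1.0}.get

print(choose("start", ["act", "wait"], toy_model, toy_utility))  # -> act
```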
If computational epistemology is not the full story, if the true epistemology for a conscious being is “something more”, then you are saying that it is so incomplete as to be invalid.
It’s a valid way to arrive at a state-machine model of something. It just won’t tell you what the states are like on the inside, or even whether they have an inside. The true ontology is richer than state-machine ontology, and the true epistemology is richer than computational epistemology.
I’m not sure you appreciate the distance that “just” has to cover with regard to a provable friendliness theory, let alone a workable foundation of strong AI, a large scientific field in its own right.
I do know that there’s lots of work to be done. But this is what Eliezer’s sequence will be about.
An agent’s intelligence is thus an emergent property of being able to manipulate the world in accordance with your morals, i.e. it is not an additional property
I agree with the Legg-Hutter idea that quantifiable definitions of general intelligence for programs should exist, e.g. by ranking them using some combination of stored mathematical knowledge and quality of general heuristics. You have to worry about no-free-lunch theorems and so forth (i.e. that a program’s IQ depends on the domain being tested), but on a practical level, there’s no question that the efficiency of the algorithms and the quality of the heuristics available to an AI are at least semi-independent of what the AI’s goals are. Otherwise all chess programs would be equally good.
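(A toy version of that kind of ranking, with invented environments, weights and scores rather than Legg and Hutter’s actual formalism, which as I recall weights environments by 2^-K(environment). The point it illustrates: changing the environment class can reorder the programs, the no-free-lunch caveat, but for any fixed class some programs simply score higher than others, whatever their goals are.)

```python
# Toy Legg-Hutter-style ranking (invented environments, weights and scores):
# score each program by its performance across a weighted class of environments.

performance = {                      # measured reward per environment, 0..1
    "prog_A": {"chess": 0.9, "go": 0.2, "gridworld": 0.7},
    "prog_B": {"chess": 0.6, "go": 0.6, "gridworld": 0.6},
}

weights = {"chess": 0.2, "go": 0.3, "gridworld": 0.5}   # stand-in for 2^-K(env)

def universal_score(prog: str) -> float:
    return sum(weights[env] * performance[prog][env] for env in weights)

for prog in performance:
    print(prog, round(universal_score(prog), 3))
# prog_A 0.59, prog_B 0.6; shift the weights towards chess and prog_A wins instead.
```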
OK.
Still upvoted.