But if FAI is based on CEV, then this will only happen if it is the extrapolated wish of everybody. Assuming there exist people who truly (even after extrapolation) value Nature in its original form, such computronium won’t be forcibly built.
gRR
But it’s the only relevant one, when we’re talking about CEV. CEV is only useful if FAI is created, so we can take it for granted.
there do seem to be people who care about that sort of thing.
Presumably, because their knowledge and intelligence are not extrapolated enough.
There are plenty of accounts of what’s going on with mathematics that don’t have mathematical terms referring to floaty mathematical entities
Could you list the one(s) that you find convincing? (even if this is somewhat off-topic in this thread...)
What he argues is that this means that the meaning of a term depends on stuff outside your head, which is a bit different
That is, IIUC, the “meaning” of a concept is not completely defined by its place within the mind’s conceptual structure. This seems correct, as the “meaning” is supposed to be about the correspondence between the map and the territory, and not about some topological property of the map.
the “meaning”, insofar as it involves “referring” to something, depends on stuff outside the mind of the speaker. That may seem obvious in retrospect, but it’s pretty tempting to think otherwise
The idea produces non-obvious results if you apply it to, for example, mathematical concepts. They certainly refer to something, which is therefore outside the mind. Conclusion: Hylaean Theoric World.
I think those people do have insufficient knowledge and intelligence. For example, the Skoptsy sect, who believed they were following God’s will, were, presumably, factually wrong. And people who want to end humanity for the sake of Nature want that instrumentally, because they believe that otherwise Nature will be destroyed. Assuming FAI is created, this belief is also probably wrong.
You’re right that there are people who would place “all non-intelligent life” before “all people”, if there were such a choice. But that does not mean they would choose “non-intelligent life” over “non-intelligent life + people”.
“Interfering the least” implies no massive overpopulation. Please don’t read “keeping everything alive” too literally. It doesn’t mean no creature ever dies.
I did not mean the comment that literally. Dropped too many steps for brevity, thought they were clear, I apologize.
It would be just as impossible (or even more so) to convince people that the total obliteration of people is a good thing. On the other hand, people don’t care much about bacteria, even whole species of them, and as long as a few specimens remain in laboratories, people will be fine with the rest being obliterated.
if only the FAI-building organization had enough human and financial resources, which unfortunately probably won’t be the case
Why do you think so? Do you expect an actual FAI-building organization to start working in the next few years? Because, assuming the cautionary position is actually the correct one, an FAI-building organization will surely get lots of people and resources in time?
They are probably lying, trolling, joking, or psychos (=do not have enough extrapolated intelligence and knowledge).
I think it would be impossible to convince people (assuming suitably extrapolated intelligence and knowledge) that the total obliteration of all life on Earth is a good thing, no matter the order of arguments. And this is a very good value for an FAI. If it optimizes this (saving life) and otherwise interferes the least, it will already have done excellently.
it’s pretty hard to do better than to just not interfere except to keep the ecosystem from collapsing
Isn’t this exactly what we wish FAI to do—interfere the least while keeping everything alive?
By (super-)intelligence I mean EY’s definition: a powerful general-purpose optimization process. It does not need to actually know about natural language or our universe to be AI-complete; a potential to learn them is sufficient. Abstract mathematics is arbitrarily complex, so a sufficiently powerful optimization process in this domain will have to be sufficiently general for everything.
Solving problems in abstract mathematics can be immensely useful even by itself, I think. Note: physics knowledge at low levels is indistinguishable from mathematics. But the main use of the system would be to safely study the behavior of a (super-)intelligence, in preparation for a true FAI.
Thoughts on problem 3:
def P1():
    sumU = 0
    for (#U = 1; #U < 3^^^3; #U++):
        if (#U encodes a well-defined boundedly-recursive parameterless function
                that calls an undefined single-parameter function "A" with #U as a parameter):
            sumU += eval(#U + #A)
    return sumU

def P2():
    sumU = 0
    for (#U = 1; #U < 3^^^3; #U++):
        if (#U encodes a well-defined boundedly-recursive parameterless function
                that calls an undefined single-parameter function "A" with #U as a parameter):
            code = A(#P2)
            sumU += eval(#U + code)
    return sumU

def A(#U):
    Enumerate proofs by length L = 1 ... INF:
        if found any proof of the form
                "A()==a implies eval(#U + #A)==u, and A()!=a implies eval(#U + #A)<=u":
            break
    Enumerate proofs by length up to L+1 (or more):
        if found a proof that A()!=x:
            return x
    return a
Although A(#P2) won’t return #A, I think eval(A(#P2)(#P2)) will return A(#P2), which will therefore be the answer to the reflection problem.
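In case the #-notation is unclear: here is a toy Python sketch (not the actual setup above, just an illustration under my reading of it) of what eval(#U + #A) is doing. #X stands for the source code of X as a string, and evaluating the concatenation means running a universe program with a particular agent definition appended; U_src, A_src, and run are illustrative names of my own.

    # Toy sketch: #U and #A are source strings, and "eval(#U + #A)" is modeled
    # as executing the two sources together in one namespace and then calling U().

    U_src = '''
    def U():
        # A toy "universe" that pays 10 if the agent outputs 1 when given U's own source.
        return 10 if A(U_SRC) == 1 else 0
    '''

    A_src = '''
    def A(universe_src):
        # A trivial stand-in agent: always output 1.
        return 1
    '''

    import textwrap

    def run(universe_src, agent_src):
        # Emulates eval(#U + #A): define U and A in one namespace, then run the universe.
        env = {"U_SRC": universe_src}
        exec(textwrap.dedent(universe_src + agent_src), env)
        return env["U"]()

    print(run(U_src, A_src))  # -> 10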
To tell the truth, I just wanted to write something, to generate some activity. The original post seems important and useful, in that it states several well-defined and interesting problems. Seeing it sit alone in the relative obscurity of an Open Thread even for a day was a little disheartening :)
2) Can we write a version of this program that would reject at least some spurious proofs?
It’s trivial to do at least some:
def A(P):
    if P is a valid proof that "A(P)==a implies U()==u, and A(P)!=a implies U()<=u",
            and P does not contain a proof step "A(P)==x" or "A(P)!=x" for any x:
        return a
    else:
        do whatever
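As a concrete toy version of that syntactic check (a sketch only: proof steps are modeled as plain strings, and has_A_value_step is an illustrative name, not anything from the original post):

    import re

    # A proof is modeled as a list of step strings. Reject a proof if any step
    # is itself a bare assertion of the form "A(P)==x" or "A(P)!=x".
    A_VALUE_STEP = re.compile(r"\s*A\(P\)\s*(==|!=)\s*\S+\s*")

    def has_A_value_step(proof_steps):
        return any(A_VALUE_STEP.fullmatch(step) for step in proof_steps)

    print(has_A_value_step(["A(P)==2", "hence U()==0"]))    # True: this proof would be rejected
    print(has_A_value_step(["A(P)==a implies U()==u"]))     # False: the conditional form is allowed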
Well, yes, it is a tool AI. But it does have a utility function, it can be built upon a decision theory, etc., and in this sense, it is an Agent.
For example, U = 1/T, where T is the time (measured in virtual computing environment cycles) until the correct output is produced.
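Just to make the shape of that explicit (utility and T_cycles are illustrative names of mine; the comment above only specifies U = 1/T):

    def utility(T_cycles):
        # U = 1/T: the fewer virtual-environment cycles until the correct output, the higher U.
        return 1.0 / T_cycles

    print(utility(1000))  # a run taking 1000 cycles scores 0.001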
Thanks for the answer! But I am still confused regarding the ontological status of “2” under many of the philosophical positions. Or, better yet, the ontological status of the real number field R. Formalism and platonism are easy: under formalism, R is a symbol that has no referent; under platonism, R exists in the HTW. If I understand your preferred position correctly, it says: “any system that satisfies the axioms of R also satisfies the various theorems about it”. But, assuming the universe is finite or discrete, there is no physical system that satisfies the axioms of R. Does that mean your position reduces to formalism?
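For concreteness, the kind of consequence of the axioms of R I have in mind, which no finite or discrete physical system can satisfy (assuming “discrete” means there is a smallest positive separation):

\[
\forall x, y \in \mathbb{R}:\quad x < y \;\Rightarrow\; x < \frac{x+y}{2} < y ,
\]

and this already rules out finite or discrete models, independently of the completeness (least-upper-bound) axiom.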