You’re demonstrating a whole bunch of misconceptions Eliezer has covered in the Sequences. In particular, you’re talking about the AI using fuzzy, high-level human concepts like “morals” and “philosophies” rather than treating them as algorithms and code.
I suggest you try to write code that “figures out a worthwhile moral goal” (without presupposing a goal). To me that sounds as absurd as writing a program that writes the entirety of its own code: you’re going to run into a bootstrapping problem. The result is not the best program ever; it’s no program at all.
This is totally possible; you just do something like this:
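Here’s a minimal sketch in Python (my own illustrative example; any language works):

```python
# The two lines below print an exact copy of themselves: the string holds
# a template of the program, and formatting it with its own repr()
# reproduces the source text.
s = 's = %r\nprint(s %% s)'
print(s % s)
```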
It’s called a Quine.
To clarify: I meant that I, as the programmer, would not be responsible for any of the code. Quines output themselves, but they don’t bring themselves into existence.
Good catch on that ambiguity, though.
That’s what I thought of at first too.
I think he means a program that is the designer of itself. A quine is something that you wrote; it just writes out a copy of itself.
Well, I don’t expect to need to write code that does that explicitly. A sufficiently powerful machine learning algorithm, given enough computational resources, should be able to:
1) Learn basic perceptions like vision and hearing.
2) Learn higher-level feature extraction to identify objects and create concepts of the world.
3) Learn increasingly higher-level concepts and how to reason with them.
4) Learn to reason about morals and philosophies.
Brains already do this, so it’s reasonable to assume it can be done. And yes, I am advocating a bottom-up approach to A.I. rather than the top-down approach Mr. Yudkowsky seems to prefer.
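To make “bottom-up” concrete, here’s a toy sketch (purely illustrative, with random data standing in for raw sensory input): each stage is a small autoencoder trained on the codes produced by the stage beneath it, so higher layers see increasingly abstract features.

```python
# Toy sketch of bottom-up (greedy layer-wise) feature learning with numpy.
# Each layer is a tied-weight autoencoder trained on the previous layer's
# output, so representations get more abstract as you go up the stack.
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(x, hidden, steps=200, lr=0.01):
    """Learn W so that sigmoid(x @ W) @ W.T roughly reconstructs x."""
    n, d = x.shape
    W = rng.normal(0.0, 0.1, size=(d, hidden))
    for _ in range(steps):
        h = 1.0 / (1.0 + np.exp(-x @ W))     # hidden code
        err = h @ W.T - x                    # reconstruction error
        # Gradient of the squared error w.r.t. W, through both the
        # decoder (h @ W.T) and the encoder (sigmoid(x @ W)) paths.
        grad = err.T @ h + x.T @ ((err @ W) * h * (1.0 - h))
        W -= lr * grad / n
    return W

def encode(x, W):
    return 1.0 / (1.0 + np.exp(-x @ W))

# Random stand-in for raw sensory input (e.g. pixels).
data = rng.normal(size=(500, 64))

# Train the stack bottom-up: each layer only ever sees the layer below it.
layer_sizes = [32, 16, 8]
x = data
for hidden in layer_sizes:
    W = train_autoencoder(x, hidden)
    x = encode(x, W)

print("top-level representation shape:", x.shape)   # (500, 8)
```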