There is no reason to believe a non-sentient program will ever escape its local maxima. We have not yet devised an optimization process that provably avoids getting stuck in a local maximum in bounded time. If you give this optimizer the MU puzzle (equivalently: find an n such that 2^n mod 3 = 0), it will never figure it out, even though most children will come to the right answer in minutes. That's what's so great about consciousness that we don't understand yet. Creating a program which can solve this class of problems is the creation of artificial consciousness, full stop.
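(As an aside, the 2^n mod 3 framing makes the point easy to check. A minimal Python sketch, mine and purely illustrative, of why blind search can never settle the question while a one-line invariant does:

```python
# 2**n mod 3 only ever cycles through 2, 1, 2, 1, ... so it never hits 0.
# A searcher that just tries successive n will loop forever; the invariant
# 2 = -1 (mod 3), hence 2**n = (-1)**n = +/-1 (mod 3), settles it instantly.
for n in range(1, 9):
    print(n, pow(2, n, 3))  # prints 2 1 2 1 2 1 2 1 -- never 0
```
)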
“Well, it self-improves, so it’ll improve to the point where it solves it.” How? And don’t say complexity or emergence. And how can you prove that it’s at all likely to self-improve into having artificial consciousness within, say, 10 billion years? Theoretically, a program that randomly put down characters into a text file and tried to compile it would eventually create an AI too. But there’s no reason to think it would do so before the heat death of the universe came knocking.
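To put a rough number on that last point, here is a back-of-envelope sketch; every figure in it is my own assumption, chosen generously, not a claim from the thread:

```python
from math import log10

# All figures below are assumptions for illustration only.
alphabet = 100             # printable symbols the random typist can emit
program_length = 1_000     # characters in a minimal AI-producing program
attempts_per_second = 1e12 # very generous compile-and-test rate
seconds_available = 1e100  # comfortably past heat-death estimates

log_search_space = program_length * log10(alphabet)                         # 2000
log_total_attempts = log10(attempts_per_second) + log10(seconds_available)  # 112
print(log_search_space - log_total_attempts)  # ~1888 orders of magnitude short
```

Even under these assumptions the search space exceeds the attempt budget by nearly 1,900 orders of magnitude.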
The words “paperclip maximizer” are not a specification, just like “friendly AI” is not a specification. Those are both suggestively named LISP tokens. An actual specification for friendly AI is a blueprint for it, the same way that human DNA is a specification for the human body. “Featherless biped with two arms, two legs, a head with two eyes, two ears, a nose, and the ability to think” is not a specification for humans; it’s a description. You could come up with any number of creatures from that description. The base sequence of our DNA, which will create a human and nothing but a human, is a specification. Until you have a set of directions that create a friendly AI and nothing but a friendly AI, you haven’t got specs for one. And by the time you have that, you can just build a friendly AI.
I hope jacobt doesn’t think something like this can be implemented easily; I see it as a proposal for safely growing a seed AI if we had the relevant GAI insights to make a suitable seed (with simple initial goals). I agree with you that we don’t currently have the conceptual background needed to write such a seed.
I think we disagree on what a specification is. By specification I mean a verifier: if you were handed something fitting the specification, you could check that it did. For example, we have a specification for “a proof that P != NP” because we have a formal system in which such a proof could be written and mechanically verified. Similarly, this system contains a specification for general optimization. You seem to be interpreting specification as knowing how to make the thing.
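A toy rendering of that distinction (the example and names below are mine, not jacobt's): a specification, in this sense, is something that can recognize an X when handed one, even when nobody knows how to construct an X.

```python
from typing import Callable

# "Specification = verifier": a checker that accepts a candidate and says
# whether it qualifies, without telling you how to build one.
Specification = Callable[[object], bool]

def divisor_spec(n: int) -> Specification:
    # Specification for "a nontrivial divisor of n": trivial to verify,
    # yet for large n we have no fast way to *produce* one.
    return lambda d: isinstance(d, int) and 1 < d < n and n % d == 0

check = divisor_spec(91)
print(check(7))  # True: 7 fits the spec (91 = 7 * 13)
print(check(5))  # False: recognizing is not the same as constructing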
If you give this optimizer the MU puzzle (equivalently: find an n such that 2^n mod 3 = 0), it will never figure it out, even though most children will come to the right answer in minutes.
If you define the problem as “find an n such that 2^n mod 3 = 0,” then everyone will fail the problem, human or machine. And I don’t see why the optimizer couldn’t have some code that monitors its own behavior; a sketch of what that could look like follows. Sure, it’s difficult to write, but the point of this system is to go from a seed AI to a superhuman AI safely. And such a function (“consciousness”) would help it solve many of the sample optimization problems without significantly increasing complexity.
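Here is a hedged sketch of that self-monitoring on exactly the 2^n mod 3 problem (my construction, not code from the post): a search loop that tracks the residues it has visited and reports the target unreachable once the reachable set closes, rather than looping forever.

```python
# An optimizer that watches its own search: for "find n with base**n % modulus
# == target" it records the residues it has seen; once a step adds nothing new,
# the reachable set is closed and the goal is provably unreachable.
def search_power_residue(base: int, modulus: int, target: int):
    seen = set()
    r = base % modulus
    n = 1
    while r not in seen:
        if r == target:
            return n             # found an n with base**n % modulus == target
        seen.add(r)
        r = (r * base) % modulus
        n += 1
    return None                  # residues cycled: target is unreachable

print(search_power_residue(2, 3, 0))  # None -- detected, not timed out
```

The point of the sketch is that "give up with a proof" is itself a mechanical behavior, not something that requires stepping outside the program.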