Orthogonality doesn’t say anything about a goal ‘selecting for’ general intelligence in some type of evolutionary algorithm. I do think it is an interesting question: for what tasks is general intelligence (GI) optimal, besides being an animal? Why do we have GI?
But the general assumption in the Orthogonality Thesis is that the programmer created a system with general intelligence and a certain goal (intentionally or otherwise), and that both the general intelligence and the goal may have been there from the first moment the program was running.
Also note that Orthogonality predates the recent popularity of predict-the-next-token AIs like GPT, which don’t resemble what people were expecting the next big thing in AI to be at all, since it’s not clear what their utility function is.
We can’t program AI, so stuff about programming is disconnected from reality.
By “selection”, I was referring to selection-like optimisation processes (e.g. stochastic gradient descent, Newton’s method, natural selection, etc.).
Gradient descent is what GPT-3 uses, I think, but humans wrote the equation by which the naive network’s output (the next-token prediction) gets ranked (in this case, for likelihood compared to the training data). That’s its utility function right there, and that’s where we program in its (arbitrarily simple) goal. It’s not JUST a neural network; every ANN has this other component.
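To make that concrete, here is a minimal toy sketch in PyTorch-style Python (not GPT-3’s actual code; the model, sizes, and data are all made up for illustration). The network only maps a context to scores for the next token; the human-written loss function further down is the separate component that ranks that output against the training data.

```python
# Toy sketch: a next-token predictor is a network PLUS a human-written objective.
import torch
import torch.nn as nn

vocab_size, embed_dim, context_len = 100, 32, 8   # arbitrary toy sizes

# The neural network part: embeddings -> recurrent layer -> scores for the next token.
class TinyNextTokenModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.head = nn.Linear(embed_dim, vocab_size)

    def forward(self, tokens):                      # tokens: (batch, context_len)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h[:, -1, :])               # scores for the next token

model = TinyNextTokenModel()

# The "other component": a human-written equation (cross-entropy here) that says
# how good the predicted next token is relative to the actual next token.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # plain gradient descent

# One training step on random toy data.
context = torch.randint(0, vocab_size, (4, context_len))
actual_next = torch.randint(0, vocab_size, (4,))
loss = loss_fn(model(context), actual_next)
loss.backward()
optimizer.step()
```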
Simple goals do not mean simple tasks.
I see what you mean, that you can’t ‘force it’ to become general with a simple goal, but I don’t think this is a problem.
For example: the goal of tricking humans out of as much of their money as possible is very simple indeed, but the task would pit the program against our collective general intelligence. A hill-climbing optimization process could, with enough compute, start with inept ‘you won a prize’ popups and eventually create something with superhuman general intelligence pursuing that goal.
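To illustrate what I mean by a hill-climbing optimization process, here is a toy sketch (everything in it is made up; in the scenario above the score would be something like money actually extracted, measured against real humans): propose a small random tweak to the current candidate, keep it only if the score improves, and repeat.

```python
# Toy hill climbing: accept a random tweak only when it improves the score.
import random

def score(candidate):
    # Stand-in objective for illustration; the real scenario would score
    # against outcomes in the world, not a fixed formula.
    return -sum((x - 3.0) ** 2 for x in candidate)

candidate = [0.0, 0.0, 0.0]
best = score(candidate)
for _ in range(10_000):
    tweaked = [x + random.gauss(0, 0.1) for x in candidate]
    s = score(tweaked)
    if s > best:                 # only ever accept improvements: "climb the hill"
        candidate, best = tweaked, s

print(candidate, best)           # ends up near [3, 3, 3], the maximum of the toy score
```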
It would have to be in perpetual training, rather than GPT-3’s train-then-use. Or was that GPT-2?
(Lots of people are trying to use computer programs for this right now, so I don’t need to explain that many scumbags would try to create something like this!)