They’re not explicitly trying to solve this problem because they don’t think it’s going to be a problem with their current approach of implementing goals.
They do not expect foom either.
Well, such an AGI isn't very useful.
You can still have formally defined goals: satisfy conditions on equations, et cetera, defined internally, without the problematic real-world component. Use this for, e.g., designing reliable cellular machinery ('cure cancer and senescence'). Seems very useful to me.
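A minimal sketch of what a purely internal, formally defined goal could look like, assuming a toy numerical setup; the names and the tolerance here are illustrative, not anything proposed in this exchange:

```python
# Toy illustration: a goal of the form "find x satisfying f(x) = 0",
# which the system can check entirely internally, with no reference
# to the state of the outside world.

def formally_defined_goal(f, eps=1e-9):
    """Return a predicate that verifies a candidate solution internally."""
    return lambda x: abs(f(x)) < eps

# Example: the 'world-free' goal "solve x**3 - 2 = 0".
goal_satisfied = formally_defined_goal(lambda x: x**3 - 2)

# Any search procedure can be pointed at this predicate; success is
# decided by the formal condition alone, not by effects on the world.
candidate = 2 ** (1 / 3)
print(goal_satisfied(candidate))  # True
```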
so wouldn’t they just keep trying until they stumble onto a motivational system that isn’t so prone to nihilism?
How long would it take you to 'stumble' upon some goal for UDT that translates to something actually real?
Similarly, if we let evolution of humans continue, wouldn’t humans pretty soon have a motivational system for reproduction that we won’t want to cleverly work around?
Evolution destructively tests designs against reality. Humans do already have various motivational systems in that area, such as religion, by the way.
I am not sure how you think a motivational system for reproduction could work such that we would not embrace a solution that does not actually result in reproduction (given sufficient intelligence).
You can still have formally defined goals: satisfy conditions on equations, et cetera.
As I mentioned, there are AGI researchers trying to implement real-world goals right now. If they build an AGI that turns nihilistic, do you think they will just give up and start working on equation solvers instead, or try to “fix” their AGI?
How long would it take you to 'stumble' upon some goal for UDT that translates to something actually real?
I guess probably not very long, if I had a working solution to “math intuition”, a sufficiently powerful computer to experiment with, and no concerns for safety...
They do not expect foom either.
Goertzel does, or at least thinks it’s possible. See http://lesswrong.com/lw/aw7/muehlhausergoertzel_dialogue_part_1/ where he says “GOLEM is a design for a strongly self-modifying superintelligent AI system”. Also http://novamente.net/AAAI04.pdf where he talks about Novamente potentially being “thoroughly self-modifying and self-improving general intelligence”.