This probably wouldn’t work, but has anyone tried to create strong AI by just running a really long evolution simulation? You could make it faster than our own evolution by increasing the evolutionary pressure for intelligence. Perhaps run this until you get something pretty smart, then stop the sim and try to use that ‘pretty smart’ thing’s code, together with a friendly utility function, to make FAI? The population you evolve could be a group of programs that take a utility function as /input/, then try to maximize it. The programs which suck at maximizing their utility functions are killed off.
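A toy sketch of the selection loop described above, just to make it concrete (everything here is hypothetical scaffolding: the "programs" are bit strings, `mutate` and the utility function are stand-ins, and none of this is a real path to AI):

```python
import random

def mutate(genome, rng, rate=0.05):
    """Flip each bit with a small probability."""
    return [(1 - g) if rng.random() < rate else g for g in genome]

def evolve(utility, pop_size=50, generations=100, genome_len=64, rng=None):
    """Toy evolutionary loop: each candidate is scored by the utility
    function it is handed; the worst half is killed off each generation
    and replaced by mutated copies of the survivors."""
    rng = rng or random.Random(0)
    # A 'program' here is just a bit string; real candidates would be code.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=utility, reverse=True)
        survivors = scored[: pop_size // 2]  # kill off the bottom half
        children = [mutate(rng.choice(survivors), rng)
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=utility)
```

With `utility=sum`, for instance, the loop should push genomes toward all-ones; "increasing the evolutionary pressure for intelligence" would correspond to choosing a utility function whose optima are hard to hit without something like general problem-solving, which is exactly the part nobody knows how to specify.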
How big do you reckon the dumbest AI capable of fooming would be? Has anyone tried just generating random 100k-character brainfuck programs?
Has anyone tried just generating random 100k-character brainfuck programs?
That’s an awfully large search space, with highly nonlinear dynamics, a small target, and might still not be enough to encode what we need to encode. I don’t see that approach as very likely to work.
It’s unlikely we’d ever generate something smart enough to be worth keeping yet dumb enough not to kill us. Also, where do you get your friendly utility function from?
Has anyone tried just generating random 100k-character brainfuck programs?
There’s no way that is going to work; think of how many possible 100k-character Brainfuck programs there are. Brainfuck does have the nice characteristic that each program is syntactically valid, but then you have the problem of running them, which is very resource-intensive (you would expect an AI to be slow, so you need very large time-outs, which means you can test only a handful of programs per unit of time). Speaking of Brainfuck: http://www.vetta.org/2011/11/aiq/
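To get a feel for the time-out problem, here is a minimal sketch: generate a random 100k-character program and run it under a hard step budget. (The step budget stands in for the "very large time-outs"; treating unmatched brackets as a failure is a simplifying assumption, since strictly speaking every character string is runnable Brainfuck if you define bracket semantics loosely.)

```python
import random

BF_OPS = "><+-.,[]"

def random_bf_program(length=100_000, rng=None):
    """Generate a uniformly random Brainfuck program."""
    rng = rng or random.Random(0)
    return "".join(rng.choice(BF_OPS) for _ in range(length))

def run_bf(code, max_steps=1_000_000):
    """Interpret Brainfuck under a step budget. Returns the output bytes,
    or None if the program has unmatched brackets or exceeds the budget."""
    # Precompute matching brackets; bail out on unmatched ones.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return None
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        return None
    tape, ptr, pc, out, steps = [0] * 30_000, 0, 0, [], 0
    while pc < len(code):
        steps += 1
        if steps > max_steps:
            return None  # time-out: the dominant cost in a random search
        c = code[pc]
        if c == ">":
            ptr = (ptr + 1) % len(tape)
        elif c == "<":
            ptr = (ptr - 1) % len(tape)
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(tape[ptr])
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1
    return bytes(out)
```

In practice almost every random candidate either has unmatched brackets or burns the whole step budget in a trivial loop, which is the resource-intensiveness complaint in miniature: the interpreter spends nearly all its time on programs that were never going to do anything.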