But if you make systematic mistakes in thinking, you will only be making them faster.
But you can get away with more mistakes if you can loop your test-and-improve cycle to fix them.
There was a demo that really brought this home to me: robotic fingers dribbling a ping-pong ball at blinding speed. Fast cameras, fast actuators, brute-force, stupid feedback calculations. Stupid can be good enough if you’re fast enough.
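A toy version of that kind of control loop, just to make the point concrete (the numbers, loop rate, and setup are invented for illustration; this is not the actual demo): the per-cycle “thinking” is one subtraction and one multiplication, and the high loop rate does the rest.

```python
import random

# A minimal sketch, not the actual demo: a "dumb" proportional controller
# run at a high loop rate. The per-cycle computation is one subtraction and
# one multiplication; the speed of the sense/act loop does the rest.

TARGET = 1.0    # desired ball height (arbitrary units, invented for illustration)
GAIN = 5.0      # hand-tuned proportional gain
DT = 0.001      # 1 kHz loop: fast camera, fast actuator

height = 0.2
for _ in range(5000):
    error = TARGET - height              # sense: where is the ball?
    correction = GAIN * error            # "stupid" feedback calculation
    height += correction * DT            # act: nudge toward the target
    height += random.gauss(0.0, 0.002)   # disturbance the fast loop keeps absorbing

print(f"height after 5 seconds of 1 kHz control: {height:.3f} (target {TARGET})")
```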
For more human creative processes, speeding up the design/test/evaluate loop will often beat more genius. Many things aren’t so much to be reasoned out as tested out.
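As a crude sketch of “tested out, not reasoned out” (purely illustrative; the target string and mutation rule are made up): a blind mutate-and-keep loop with zero insight into the problem, solving it by sheer number of cheap design/test/evaluate cycles.

```python
import random

# A crude sketch of "tested out, not reasoned out": blind hill climbing.
# No insight into the problem, just many cheap design/test/evaluate cycles.
# The target string and mutation rule are invented for illustration.

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "speed beats genius"

def score(candidate: str) -> int:
    """Evaluate: how many characters already match the target?"""
    return sum(a == b for a, b in zip(candidate, TARGET))

best = "".join(random.choice(ALPHABET) for _ in TARGET)
cycles = 0
while best != TARGET:
    cycles += 1
    # Design: tweak one random position. Test: keep the tweak only if it scores better.
    i = random.randrange(len(TARGET))
    trial = best[:i] + random.choice(ALPHABET) + best[i + 1:]
    if score(trial) > score(best):
        best = trial

print(f"reached {best!r} after {cycles} dumb cycles")
```

The point isn’t the algorithm (it’s the dumbest possible one); it’s that cheap, fast cycles substitute for cleverness.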
I have this intuition that higher intelligence “unlocks” some options, and then how many points you get from the unlocked options depends on speed. For example, a ping-pong-playing robot with insane speed could easily win any ping-pong tournament, but it still couldn’t conquer the world; its intelligence only unlocks the area of playing ping-pong. If the intelligence is not general, making it faster still doesn’t make it general.
For general intelligences, if we ignore time and resources, the greatest obstacle to a mind is the mind itself: its own biases. If the mind is prone to do really stupid things, giving it more power will allow it to do stupid things with greater impact. For example, if someone chooses to ignore feedback, then having more design/test/evaluate cycles available will not help.
Now let’s assume that we have an intelligence which is (a) general, and (b) willing to experiment and learn from feedback. On this level, are time and resources all that matters? Would any mind on this level, given unlimited time (immortality) and resources, sooner or later become a god? Or is the path full of dangerous destructive attractors? Would the mind be able to successfully navigate higher and higher levels of meta-thinking, or could a mistake at some level prevent it from ever getting higher? In other words, is “don’t ignore the feedback” the only issue to overcome, or is it just the first of many increasingly abstract issues that an increasingly powerful mind will have to deal with, where a failure to deal with any of them could “lock” the path to godhood even given unlimited time and resources? For example, imagine a mind that would be willing to consider feedback, but wouldn’t care about developing a good theory of maths and statistics. At some point it would start drawing incorrect conclusions from the feedback.
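A toy illustration of that last failure mode, with invented numbers: two strategies where A is genuinely better, and a mind that consults feedback but ignores sample size, judging after just three noisy trials each.

```python
import random

# A toy illustration, with invented numbers: strategy A really is better
# than B, but a mind that consults feedback while ignoring sample size
# judges them after only three noisy trials each, and often gets it wrong.

TRUE_MEAN = {"A": 0.55, "B": 0.45}   # A is genuinely better
NOISE = 0.5                          # payoff noise swamps the true difference
TRIALS_PER_JUDGMENT = 3              # feedback is consulted, but barely

def observed_mean(strategy: str) -> float:
    """Average payoff over a tiny, statistically meaningless sample."""
    samples = [random.gauss(TRUE_MEAN[strategy], NOISE)
               for _ in range(TRIALS_PER_JUDGMENT)]
    return sum(samples) / len(samples)

wrong = sum(observed_mean("B") > observed_mean("A") for _ in range(10_000))
print(f"the naive judgment picks the worse strategy {wrong / 10_000:.0%} of the time")
```

With these made-up numbers the wrong pick happens a large fraction of the time; more trials or a basic significance test would fix it, but that is exactly the piece this hypothetical mind doesn’t care to develop.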
I agree that for humans, lack of time and resources is a huge issue.