Computers can already outperform you in a wide variety of tasks. Moreover, today, with the rise of machine learning, we can train computers to do pretty high-level things, like object recognition or sentiment analysis (and they sometimes outperform humans at these tasks). Isn’t that power?
As for Solomonoff induction… What do you think your brain is doing when you are thinking? Some kind of optimized search in hypothesis space: you consider only a very, very small set of hypotheses (compared to the entire space), hopefully good enough ones. Solomonoff induction, by contrast, checks all of them, every single hypothesis, and finds the best.
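For concreteness, the “checks every hypothesis” part can be written down (standard notation, not anything from this thread): the Solomonoff prior gives the observed data $x$ the total weight of every program $p$ that makes a universal prefix machine $U$ output something beginning with $x$, with shorter programs weighted exponentially more, and it predicts by conditioning:

$$M(x) = \sum_{p\,:\,U(p)=x\ast} 2^{-|p|}, \qquad M(x_{n+1} \mid x_{1:n}) = \frac{M(x_{1:n}\,x_{n+1})}{M(x_{1:n})}$$

The sum ranges over all programs, and deciding which programs ever produce output runs into the halting problem, which is exactly why it is incomputable.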
Solomonoff induction is so much thinking that it is incomputable.
Since we don’t have that much raw computing power (and never will), the hypothesis search must be heavily optimized: pruning unpromising directions, concentrating on regions with a high probability of success, using prior knowledge to narrow the search. That’s what your brain is doing, and that’s what machines will do. That’s not “simple and brute-force”, because simple, brute-force algorithms are either impractically slow or not computable at all.
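A minimal sketch of what such an optimized search might look like (the toy problem, the scoring, and all names are my own illustration, nothing from this thread): best-first search over a small hypothesis space, with a crude complexity prior, local moves instead of enumeration, and an early stop, so that almost none of the space is ever visited.

```python
import heapq

# Illustrative toy: find f(x) = a*x + b explaining the observations,
# without enumerating the whole (a, b) space.
observations = [(1, 2), (2, 4), (3, 6)]  # (input, output) pairs

def score(a, b):
    # Misfit plus a crude complexity penalty standing in for a prior.
    fit = sum((a * x + b - y) ** 2 for x, y in observations)
    return fit + 0.01 * (abs(a) + abs(b))

def neighbours(a, b):
    # Local moves only: the search never touches distant hypotheses.
    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        yield a + da, b + db

def search(start=(0, 0), budget=200):
    frontier = [(score(*start), start)]  # best-first: promising regions first
    seen = {start}
    while frontier and budget > 0:
        s, (a, b) = heapq.heappop(frontier)
        if s < 0.05:                     # good enough -> stop early
            return a, b
        for nxt in neighbours(a, b):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(*nxt), nxt))
        budget -= 1
    return None                          # budget exhausted: give up

print(search())  # (2, 0), i.e. f(x) = 2*x, after visiting a handful of points
```

The point is only the shape of the algorithm: a frontier ordered by promise, a prior baked into the score, and a budget, instead of checking every hypothesis.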
Computers can already outperform you in a wide variety of tasks.
Eagles can, too: they can fly and I can’t. The question is whether the currently foreseeable computerizable tasks are closer to flying or to intelligence. Which in turn depends on how high and how “magical” we take intelligence to be.
As for Solomonoff induction… What do you think your brain is doing when you are thinking?
Ugh, using Aristotelian logic? So it is not random hypotheses; it is based on causality and logic.
Solomonoff induction is so much thinking that it is incomputable.
I think that, using your terminology, thinking is not the searching; it is finding logical relationships, so that not much of the space has to be searched.
That’s not “simple and brute-force”, because simple, brute-force algorithms are either impractically slow or not computable at all.
OK, that makes sense. Perhaps we can agree that logic, causality, and actual reasoning are all about narrowing the hypothesis space to be searched. That is the intelligence, not the search.
I’m starting to suspect that we’re arguing over definitions. By “search” I mean the entire algorithm for finding the best hypothesis; both random hypothesis checking and Aristotelian logic (and any combination of these methods) fit. What do you mean?
Narrowing the hypothesis space is search. Once you have narrowed the hypothesis space to a single point, you have found an answer.
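A toy illustration of that claim (hypothetical clues and numbers, just to make the point): each deduction filters the candidate set, and once a single candidate remains, the narrowing has produced the answer.

```python
candidates = set(range(1, 17))          # the full hypothesis space
clues = [
    lambda n: n % 2 == 0,               # "it is even"
    lambda n: n > 8,                    # "it is greater than 8"
    lambda n: n % 3 == 0,               # "it is divisible by 3"
]
for clue in clues:                      # each inference step narrows the space
    candidates = {n for n in candidates if clue(n)}
print(candidates)                       # {12} -- narrowed to a single point
```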
As for eagles: if we build a drone that can fly as well as an eagle, I’d say the drone has eagle-level flying ability; if a computer can solve all the intellectual tasks that a human can solve, I’d say the computer has human-level intelligence.
Yes. Absolutely. When that happens inside a human being’s head, we generally call them ‘mass murderers’. Even I cooperate with society only because there is a net long-term gain in doing so; if that were no longer the case, I honestly don’t know what I would do. Awesome, that’s something new to think about. Thanks.
That’s probably irrelevant, because mass murderers don’t have power without all the rest. They are likely to have sentience, self-consciousness, and conversational ability, at least.
Not sure. Suspect nobody knows, but seems possible?
I think the most instructive post on this is actually Three Worlds Collide, for making a strong case for the arbitrary nature of our own “universal” values.
But is it possible to have power without all the rest?
Certainly. Why not?