They chose a limited domain and then designed and used an algorithm that works in that domain – which constitutes domain knowledge. The paper’s claim is blatantly false; you are gullible and appealing to authority.
You sound less and less reasonable with every comment.
It doesn’t look like your conversion attempts are working well. Why do you think that is?
Unreason is accepting the claims of a paper at face value, appealing to its authority, and then, when this is pointed out to you, claiming the other party is unreasonable.
I was aware of AlphaGo Zero before I posted—check out my link. Note that it can’t even learn the rules of the game. Humans can. They can learn the rules of all kinds of games. They have a game-rule learning universality. That AlphaGo Zero can’t learn the rules of even one game is indicative of how much domain knowledge the developers actually put into it. They are fooling themselves if they think AlphaGo Zero has superhuman learning ability or that it constitutes progress towards AI.
Which particular claim made by the paper did I accept at face value, and which claim do you think is false? Be specific.
AlphaGo Zero and AlphaZero are different things—check out my link.
In any case, are you making the claim that if a neural net were able to figure out the rules of the game by examining a few million games, you would accept that it’s a universal knowledge creator?
If it could figure out the rules of any game, that would be remarkable. Logic like that would also really help to find bugs in programs or to beat the stock market.
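To make concrete what “figuring out the rules from example games” would involve, here is a minimal sketch. Everything in it is invented for illustration, not anyone’s actual system: it generates tic-tac-toe game records, conjectures candidate move-legality rules, and keeps only the conjectures that no observed move refutes.

```python
# A toy, purely illustrative sketch: "learning" tic-tac-toe's move-legality
# rule from game records by conjecture and refutation. All names and the
# candidate rules are hypothetical; this is not any published system.
import random

def random_game():
    """Play one random tic-tac-toe game; return (state, move) records.
    The toy plays until the board is full, ignoring wins, which is enough
    to illustrate the move-legality rule."""
    state, player, record = [None] * 9, 'X', []
    while None in state:
        move = random.choice([i for i, c in enumerate(state) if c is None])
        record.append((tuple(state), move))
        state[move] = player
        player = 'O' if player == 'X' else 'X'
    return record

# Conjectured rules: each predicate claims to decide whether a move is legal.
CANDIDATES = {
    "any cell": lambda s, m: True,
    "empty cell": lambda s, m: s[m] is None,
    "corner cell": lambda s, m: m in (0, 2, 6, 8),
    "cell right of an X": lambda s, m: m > 0 and s[m - 1] == 'X',
}

def surviving_rules(games):
    """Discard any conjecture refuted by an observed legal move."""
    alive = dict(CANDIDATES)
    for record in games:
        for state, move in record:
            for name in [n for n, rule in alive.items()
                         if not rule(state, move)]:
                del alive[name]
    return alive

games = [random_game() for _ in range(1000)]
print(sorted(surviving_rules(games)))  # ['any cell', 'empty cell'] survive
```

Note the catch the toy exposes: positive examples alone never refute an over-general conjecture like “any cell is legal”, which hints at why genuinely learning a game’s rules from records of play is harder than it sounds.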
If they want to convince anyone it isn’t using domain-specific knowledge created by the programmers, why don’t they demonstrate that in the straightforward way: show results in 3 separate domains? But they can’t.
If it really has nothing domain specific, why can’t it work with ANY domain?
Chess
Go
Shogi
You’re describing what’s known as General game playing.
You program an AI which will play a set of games without knowing in advance what the rules of the games will be: build an AI which can accept a set of rules for a game and then teach itself to play.
This is in fact a field in AI.
Also note the recent news that AlphaGo Zero has been converted into AlphaZero, which can handle other games and rapidly taught itself how to play chess, shogi, and Go (beating its ancestor AlphaGo Zero), hinting that they’re generalising it very successfully.
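The division of labour being described can be shown in a few lines. Below is a minimal sketch with invented names, not DeepMind’s actual code: the agent is written against an abstract rules interface and improves purely by self-play, with flat Monte Carlo rollouts standing in for AlphaZero’s search-plus-network.

```python
# A minimal sketch of the general-game-playing contract described above.
# Interface and names are hypothetical; this is not DeepMind's code.
# The agent sees only abstract rules and improves by self-play rollouts.
import random
from typing import Hashable, Protocol, Sequence

class Game(Protocol):
    def legal_moves(self, state) -> Sequence[Hashable]: ...
    def apply(self, state, move): ...              # next state after a move
    def is_terminal(self, state) -> bool: ...
    def score(self, state, player) -> float: ...   # +1 win, 0 draw, -1 loss
    def to_move(self, state): ...                  # whose turn it is

def rollout_value(game: Game, state, player, n_rollouts: int = 100) -> float:
    """Estimate a state's value for `player` by random self-play to the end."""
    total = 0.0
    for _ in range(n_rollouts):
        s = state
        while not game.is_terminal(s):
            s = game.apply(s, random.choice(list(game.legal_moves(s))))
        total += game.score(s, player)
    return total / n_rollouts

def choose_move(game: Game, state):
    """Pick the move whose successor state rolls out best for the mover."""
    player = game.to_move(state)
    return max(game.legal_moves(state),
               key=lambda m: rollout_value(game, game.apply(state, m), player))
```

AlphaZero replaces the random rollouts with a learned policy/value network guiding a tree search, but the contract has the same shape: rules in, self-taught play out. Nothing above mentions chess, shogi, or Go.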
Here are some examples of domains other than game playing: architecture, chemistry, cancer research, website design, cryonics research, astrophysics, poetry, painting, political campaign running, dog toy design, knitting.
The fact that the self-play method works well for chess but not poetry is domain knowledge the programmers had, not something AlphaZero figured out for itself.
This again feels like one of those standards that creeps the second anyone points you to examples.
If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research, and cryonics, you’d just declare that those weren’t different enough domains because they’re all science, and then demand that it also be able to control pianist robots, scuba dive, and run a nail salon.
Nothing to see here everyone.
This is just yet another boring iteration of the forever-shifting goalposts of AI.
Nothing to see here; just another boring iteration of the absurd idea of “shifting goalposts.”
There really is a difference between a general learning algorithm and specifically focused ones, and indeed, anything that can generate and test and run experiments will have the theoretical capability to control pianist robots and scuba dive and run a nail salon.
Adam and Eve AIs. The pair are designed such that they can automatically generate large numbers of hypotheses, design experiments that could falsify the maximum possible number of hypotheses, and then run those experiments in an automated lab.
Rather than being designed to do X with yeast, it’s basically told “go look at yeast”; it then develops hypotheses about yeast and yeast biology, and it successfully re-discovered a number of elements of cell biology. Later iterations were given access to databases of already-known genetic information and discovered new information about a number of genes.
http://www.dailygalaxy.com/my_weblog/2009/04/1st-artificially-intelligent-adam-and-eve-created.html
https://www.newscientist.com/article/dn16890-robot-scientist-makes-discoveries-without-human-help/
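The selection rule described above (“experiments that could falsify the maximum possible number of hypotheses”) can be sketched in a few lines. This toy is my own construction with made-up biology, not the robot scientist’s actual code: each hypothesis predicts an outcome per experiment, and we pick the experiment guaranteed to eliminate the most hypotheses whatever result comes back.

```python
# Toy sketch (invented data, not the robot scientist's code) of picking the
# experiment that falsifies the most hypotheses in the worst case.

# Each hypothesis predicts an observable outcome for every experiment.
hypotheses = {
    "gene A controls growth": {"knock out A": "no growth",
                               "knock out B": "growth",
                               "knock out both": "no growth"},
    "gene B controls growth": {"knock out A": "growth",
                               "knock out B": "no growth",
                               "knock out both": "no growth"},
    "both genes required":    {"knock out A": "no growth",
                               "knock out B": "no growth",
                               "knock out both": "no growth"},
}
experiments = ["knock out A", "knock out B", "knock out both"]

def worst_case_survivors(experiment, alive):
    """Most hypotheses that could survive this experiment's result."""
    outcomes = {predictions[experiment] for predictions in alive.values()}
    return max(sum(1 for p in alive.values() if p[experiment] == outcome)
               for outcome in outcomes)

def best_experiment(experiments, alive):
    """Choose the experiment minimising worst-case surviving hypotheses."""
    return min(experiments, key=lambda e: worst_case_survivors(e, alive))

print(best_experiment(experiments, hypotheses))
# Either single knockout guarantees ruling out at least one hypothesis;
# "knock out both" is predicted by all three, so it can falsify nothing.
```

The actual Adam/Eve systems are of course far more sophisticated, but this minimax step is the falsificationist core of the selection idea.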
It’s a remarkable system and could be extremely useful for scientists in many sectors but it’s a 1.1 on the 1 to 10 scale where 10 is a credible paperclipper or Culture-Mind style AI.
This AI is not a pianist robot and doesn’t play chess but has broad potential applications across many areas of science.
It blows a hole in the side of the “Universal Knowledge Creator” idea, since it’s a knowledge creator beyond most humans in a number of areas but is never going to be controlling a pianist robot or running a nail salon. The belief that there’s some magical UKC line or category (which humans technically don’t qualify for yet anyway) is based on literally nothing except feelings; there’s not an ounce of logic or evidence behind it.
We have given you criteria by which you can judge an AI: whether it is a UKC or not. As I explained in the OP, if something can create knowledge in some disparate domains then you have a UKC. We will be happy to declare it as such. You are under the false idea that AI will arrive by degrees, that there is such a thing as a partial UKC, and that knowledge creators lie on a continuum with respect to their potential. AI will no more arrive by degrees than our universal computers did. Universal computation came about through Turing in one fell swoop, and very nearly by Babbage a century before.
You underestimate the difficulties facing AI. You do not appreciate how truly different people are from other animals and from things like AlphaZero.
EDIT: That was meant to be in reply to HungryHobo.
I basically agree with this, although 1) you are expressing it badly, 2) you are incorporating a true fact about the world into part of a nonsensical system, and 3) you should not be attempting to proselytize people.
Can we agree that I am not trying to proselytize anyone? I think people should use their own minds and judgment, and I do not want people just to take my word for something. In particular, I think:
(1) All claims to truth should be carefully scrutinised for error.
(2) Claiming authority or pointing skyward to an authority is not a road to truth.
These claims should themselves be scrutinised for error. How could I hold these consistently with holding any kind of religion? I am open to the idea that I am wrong about these things too or that I am inconsistent.
I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.
No, I do not agree. You have been trying to proselytize people from the beginning and are still trying.
This is why you need to stop pointing to “Critical Rationalism” etc. as the road to truth.
First, you are wrong. You should not mention truths that it is harmful to mention in situations where it is harmful to mention them. Second, you are not “not watering down the truth”. You are making many nonsensical and erroneous claims and presenting them as though they were a unified system of absolute truth. This is quite definitely proselytism.
Yes, there are situations where it can be harmful to state the truth. But there is a common social problem where people do not say what they think, or water it down, for fear of causing offence or because they are looking to gain status. That was the context.
The truth that curi and I are trying to get across to people here is that you are doing AI wrong and are wasting your lives. We are willing to be ridiculed for stating that, but it is the unvarnished truth. AI has been stuck in a rut for decades with no progress. People kid themselves that the latest shiny toy like AlphaZero is progress, but it is not.
AI research has bad epistemology at its heart and this is holding back AI in the same way that quantum physics was held back by bad epistemology. David Deutsch had a substantial role in clearing that problem up in QM (although there are many who still do not accept multiple universes). He needed the epistemology of CR to do that. See The Fabric of Reality.
Curi, Deutsch, and I know far more about epistemology than you. That, again, is an unvarnished truth. We are saying we have ideas that can help get AI moving, in particular CR. You are blinded by things you think are so but that cannot be. The myth of induction, for one.
AI is blocked—you have to consider that some of your deeply held ideas are false. How many more decades do you want to waste? These problems are too urgent for that.
In what way are all these statements different from claiming that Jesus is Life Everlasting and that Jesus dying for our sins is an unvarnished truth?
Lots of people claim to have access to Truth—what makes you special?
AlphaZero clearly isn’t general purpose. What are we even debating?
This sentence from the OP:
A bit more generally, the claim that humans are UKCs and that nothing else can create knowledge, which is defined as a way to solve a problem.
If you want to debate that, you need an epistemology which says what “knowledge” is. Can you give references to where you have that, with full details, to rival Critical Rationalism?
Or are you claiming the OP is mistaken even within the CR framework? Or do you have no rival view, but think CR is wrong and we just don’t have any good philosophy? In that case the appropriate thing to do would be to answer this challenge, which no one even tried to answer: https://www.lesserwrong.com/posts/85mfawamKdxzzaPeK/any-good-criticism-of-karl-popper-s-epistemology
Oh, get stuffed. I tried debating you and the results were… discouraging.
Yes, I obviously think that CR is deluded.
I feel the term “domain” is doing a lot of work in these replies. Define “domain”: what is the size limit of a domain? Might all of reality be a domain, and thus a domain-specific algorithm be sufficient for anything of interest?