People are overly impressed by things that animals can do, such as dogs opening doors, and think the only explanation is that the animal must be learning. Conversely, people think that a child being good at something means the child has an inborn natural talent. The child is doing something far more remarkable than the dog but does not get to take credit. The dog does.
I would be happy to rewrite the first line to say: An entity is either a UKC or it has zero—or approximately zero—potential to create knowledge. Does that help?
Can we agree that I am not trying to proselytize anyone? I think people should use their own minds and judgment, and I do not want people just to take my word for something. In particular, I think:
(1) All claims to truth should be carefully scrutinised for error.
(2) Claiming authority or pointing skyward to an authority is not a road to truth.
These claims should themselves be scrutinised for error. How could I hold these consistently with holding any kind of religion? I am open to the idea that I am wrong about these things too or that I am inconsistent.
I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.
He proposes that humans are universal constructors, able to build anything. Observation: there are some things humans as they currently are cannot construct; as we currently are, we cannot actually order atoms any way we like to perform any task we like. The world's smartest human can no more build a von Neumann probe right now than the world's smartest border collie.
Our human ancestors on the African savannah could not construct a nuclear reactor, nor the skyline of Manhattan, nor an 18-core microprocessor. They had no idea how. But they had the potential in them, and that potential has been realized today. To do that, we created deep knowledge about how our universe works. Why do you think that is not going to continue? Why should we not be able to construct a von Neumann probe at some point in the future? Note that most of the advances I am talking about occurred in the last few hundred years. Humans had a big problem with static memes preventing progress for millennia (see BoI). If not for those memes, we may well be at the stars by now. While humans made all this progress, dolphins and border collies did what?
If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research, and cryonics, you’d just declare that those weren’t different enough domains because they’re all science, and then demand that it also be able to control pianist robots and scuba dive and run a nail salon.
We have given you criteria by which you can judge an AI: whether it is a UKC or not. As I explained in the OP, if something can create knowledge in some disparate domains then you have a UKC. We will be happy to declare it as such. You are under the false idea that AI will arrive by degrees, that there is such a thing as a partial UKC, and that knowledge creators lie on a continuum with respect to their potential. AI will no more arrive by degrees than our universal computers did. Universal computation came about through Turing in one fell swoop, and very nearly by Babbage a century before.
You underestimate the difficulties facing AI. You do not appreciate how truly different people are to other animals and to things like Alpha Zero.
EDIT: That was meant to be in reply to HungryHobo.
Critical Rationalists think that E. T. Jaynes is confused about a lot of things. There has been discussion about this on the Fallible Ideas list.
https://www.youtube.com/watch?v=0KmimDq4cSU
Everything he says in that video is in accord with CR and with what I wrote about how we acquire knowledge. Note how the audience laughs when he says you start with a guess. What he says is in conflict with how LW thinks the scientific method works (like in the Solomonoff guide I referenced).
FYI, Feynman was a critical rationalist.
Millions of people have incorrect beliefs about vaccines, millions more are part of new age groups which have embraced confused and wrong beliefs about quantum physics (often related to utterly misunderstanding the term “Observer” as used in physics) …
You are indirectly echoing ideas that come from David Deutsch. FYI, Deutsch is a proponent of the Many Worlds Explanation of quantum physics and he invented the idea of the universal quantum computer, founding quantum information theory. He talks about them in BoI.
The author brings up the idea of things we may genuinely simply not be able to understand and just dismisses it with literally nothing except the objection that it’s claiming things could be inexplicable and hence should be dismissed. (on a related note the president of the tautology club is the president of the tautology club)
Deutsch gives arguments that people are universal explainers/constructors (this requires that they be computationally universal as well). What is your argument that there are some things that a universal explainer could never be able to understand? Alternatively, what is your argument that people are not universal explainers? Deutsch talks about the “reach” of knowledge. Knowledge created to solve a problem in one domain can solve problems in other domains too. What is your argument that the knowledge we create could never reach into this inexplicable realm you posit?
Unreason is accepting the claims of a paper at face value, appealing to its authority, and, then, when this is pointed out to you, claiming the other party is unreasonable.
I was aware of AlphaGo Zero before I posted—check out my link. Note that it can’t even learn the rules of the game. Humans can. They can learn the rules of all kinds of games. They have a game-rule-learning universality. That AlphaGo Zero can’t learn the rules of even one game is indicative of how much domain knowledge the developers actually put into it. They are fooling themselves if they think AlphaGo Zero has superhuman learning ability or that it represents progress towards AI.
As I explained in the post, dog genes contain behavioural algorithms pre-programmed by evolution. The algorithms have some flexibility—akin to parameter tuning—and the knowledge contained in the algorithms is general-purpose enough that it can be tuned for dogs to do things like open boxes. So it might look like the dog is learning something, but the knowledge was created by biological evolution, not by the individual dog. The knowledge in the dog’s genes is an example of what Popper calls knowledge without a knowing subject. Note that all dogs have approximately the same behavioural repertoire. They are kind of like characters in a video game. Some boxes a dog will never open, though a human will learn to do it.
A child is a UKC so when a child learns to open a box, the child creates new knowledge afresh in their own mind. It was not put there by biological evolution. A child’s knowledge of box-opening will grow, unlike a dog’s, and they will learn to open boxes in ways a dog never can. And different children can be very different in terms of what they know how to do.
and btw., it’s nice to postulate that “AI cannot recursively improve itself” while many research and applied narrow AIs are actually doing it right at this moment (though probably not “consciously”)
Please quote me accurately. What I wrote was:
AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have
I am not against the idea that an AI can become smarter by learning how to become smarter and recursing on that. But that cannot lead to more knowledge creation potential than humans already have.
In CR, knowledge is information which solves a problem. CR criticizes the justified-true-belief idea of knowledge. Knowledge cannot be justified, or shown to be certain, but this doesn’t matter: if it solves a problem, it is useful. Justification is problematic because it is ultimately authoritarian. It requires that you have some base which itself cannot be justified except by an appeal to authority, such as the authority of the senses or the authority of self-evidence, or suchlike. We cannot be certain of knowledge because we cannot say whether an error will be exposed in the future. This view is contrary to most people’s intuition, and for this reason they can easily misunderstand the CR view, which commonly happens.
CR accepts something as knowledge if it solves a problem and has no known criticisms. Such knowledge is currently unproblematic but may become problematic in the future if an error is found.
Critical rationalists are fallibilists: they don’t look for justification; they try to find error, and they accept anything in which they cannot find an error. Fallibilists, then, expose their knowledge to tough criticism. Contrary to popular opinion, they are not wishy-washy, hedging, or uncertain. They often have strong opinions.
Note the claim: “There is no such thing as a partially universal knowledge creator.” That means an entity either is a UKC or it has no ability, or approximately zero ability, to create knowledge. Dogs are in the latter bucket.
My intent was to summarise the CR view on AI. I’ve provided links so you can read more.
EDIT: BTW I disagree that I have made “a bunch of assertions”. I have provided arguments, for example, about induction. I suspect, also, that you think observation—or evidence—comes first and I have argued against that.
I am summarizing a view shared by other Critical Rationalists, including Deutsch. Do you think they are confused too?
I have added some sub-headings—if that helps.
I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.
This is the old argument that CR smuggles induction in via the backdoor. Critical Rationalists have given answers to this argument. See, for example, what Rafe Champion has to say about induction smuggling. Why have you not done research about this before commenting? Your point is not original.
First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don’t, then you can’t predict anything about the future (because under the hypothetical new laws of physics, anything could happen).
Are you familiar with what David Deutsch had to say about this in, for example, The Fabric of Reality? Again, you have not done any research and you are not making any new points which have not already been answered.
Specifically, Bayes Theorem is not about “goodness” of an idea; it is about mathematical probability. Unlike “goodness”, probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with “goodness” of the idea “this is the first barrel” or “this is the second barrel”.
Critical Rationalists have also given answers to this, including Elliot Temple himself. CR has no problem with the probabilities of events—which is what your example is about. But theories are not events and you cannot associate probabilities with theories. You have still not made an original point which has not been discussed previously.
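For concreteness, here is a minimal sketch of the barrel calculation from the quoted example. It assumes equal prior probability for each barrel and draws with replacement (neither of which the example states), and it illustrates the distinction being made above: the probability computed attaches to an event—which barrel produced these draws—not to the “goodness” of a theory.

```python
# Minimal sketch of the quoted two-barrel example (assumptions: equal priors,
# draws with replacement so the draws are independent).
from math import comb

def posterior_first_barrel(white_drawn, black_drawn,
                           p_white_first=0.9, p_white_second=0.1):
    """Posterior probability that the observed draws came from the first barrel."""
    n = white_drawn + black_drawn
    # Likelihood of the observed draw under each barrel (binomial model).
    like_first = comb(n, white_drawn) * p_white_first**white_drawn * (1 - p_white_first)**black_drawn
    like_second = comb(n, white_drawn) * p_white_second**white_drawn * (1 - p_white_second)**black_drawn
    # With equal priors (0.5 each), the posterior is the normalised likelihood.
    return like_first / (like_first + like_second)

print(posterior_first_barrel(4, 1))  # ~0.9986: almost certainly the first barrel
```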
Why do you think that some argument which crosses your mind hasn’t already been discussed in depth? Do you assume that CR is just some mind-burp by Popper that hasn’t been fully fleshed out?
Yes, there are situations where it can be harmful to state the truth. But there is a common social problem where people do not say what they think, or water it down, for fear of causing offense. Or because they are looking to gain status. That was the context.
The truth that curi and I are trying to get across to people here is that you are doing AI wrong and are wasting your lives. We are willing to be ridiculed for stating that, but it is the unvarnished truth. AI has been stuck in a rut for decades with no progress. People kid themselves that the latest shiny toy like Alpha Zero is progress, but it is not.
AI research has bad epistemology at its heart and this is holding back AI in the same way that quantum physics was held back by bad epistemology. David Deutsch had a substantial role in clearing that problem up in QM (although there are many who still do not accept multiple universes). He needed the epistemology of CR to do that. See The Fabric of Reality.
Curi, Deutsch, and I know far more about epistemology than you. That again is an unvarnished truth. We are saying we have ideas that can help get AI moving. In particular, CR. You are blinded by things you think are so but that cannot be. The myth of Induction, for one.
AI is blocked—you have to consider that some of your deeply held ideas are false. How many more decades do you want to waste? These problems are too urgent for that.