I dunno, isn’t this just a nerdy version of numerology?
gurugeorge
I think for non-elites it’s about the same. It depends on how you conceive “ideas” of course—whether you restrict the term purely to abstractions, or broaden it to include all sorts of algorithms, including the practical.
Non-elites aren’t as concerned with abstractions as elites are; they’re much more concerned with practical day-to-day matters like raising a family, work, friends, entertainment, etc.
Take for instance DIY videos on YouTube—there are tons of them nowadays, and that’s an example of the kind of thing that non-elites (and indeed elites, to the extent that they might actually care about DIY) are going to benefit from tremendously. And I think it’s going to be natural for a non-elite individual to check out a few (after all, it’s pretty costless, except in terms of a tiny bit of time) and sift out what seem like the best methods.
It could be that, like sleep, the benefits of reading fiction aren’t obvious and aren’t on the surface. IOW, escapism might be like dreaming—a waste from one point of view (time spent) but still something without which we couldn’t function properly, so therefore not a waste, but a necessary part of maintenance, or summat.
What happens if it doesn’t want to—if it decides to do digital art or start life in another galaxy?
That’s the thing: a self-aware, intelligent thing isn’t bound to do the tasks you ask of it, hence a poor ROI. Humans are already such entities, but far cheaper to make, so a few who go off and become monks aren’t a big problem.
I can’t remember where I first came across the idea (maybe Daniel Dennett), but the main argument against AI is that it’s simply not worth the cost for the foreseeable future. Sure, we could possibly create an intelligent, self-aware machine now, if we put nearly all the world’s relevant resources and scientists onto it. But who would pay for such a thing?
What’s the ROI for a super-intelligent, self-aware machine? Not very much, I should think—especially considering the potential dangers.
So yeah, we’ll certainly produce machines like the robots in Interstellar—clever expert systems with a simulacrum of self-awareness. Because there’s money in it.
But the real thing? Not likely. The only way it will be likely is much further down the line when it becomes cheap enough to do so for fun. And I think by that time, experience with less powerful genies will have given us enough feedback to be able to do so safely.
If there’s any kernel to the concept of rationality, it’s the idea of proportioning beliefs to evidence (Hume). Everything really flows from that, and the sub-variations (like epistemic and instrumental rationality) are concrete applications of that principle in specific domains.
“Ratio” = comparing one thing with another, i.e. (in this context) one hypothesis with another, in light of the evidence.
(As I understand it, Bayes is the method of “proportioning beliefs to evidence” par excellence.)
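Just to make that concrete, here’s a toy sketch of a single Bayes update in Python; the hypothesis and the prior/likelihood numbers are entirely made up for illustration, nothing more:

```python
# Toy Bayes update: how much should seeing evidence E shift belief in hypothesis H?
# All numbers are invented purely for illustration.

prior_H = 0.30          # initial credence in H
p_E_given_H = 0.80      # how likely the evidence is if H is true
p_E_given_not_H = 0.20  # how likely the evidence is if H is false

# Total probability of seeing E at all
p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)

# Posterior: belief in H, proportioned to the evidence
posterior_H = p_E_given_H * prior_H / p_E
print(round(posterior_H, 3))  # 0.632 -- evidence favouring H raises credence from 0.30
```

The whole of “proportioning beliefs to evidence” is in that last line: the credence moves exactly as far as the likelihoods warrant, no further.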
Great stuff! As someone who’s come to all this Bayes/LessWrong stuff quite late, I was surprised to discover that Scott Alexander’s blog is one of the more popular in the blogosphere, flying the flag for this sort of approach to rationality. I’ve noticed that he’s liked by people on both the Left and the Right, which is a very good thing. He’s a great moderating influence, and I think that for many people he offers a palatable introduction to a more serious, less biased way of looking at the world.
I think the concept of psychological neoteny is interesting (Google Bruce Charlton neoteny) in this regard.
Roughly, the idea would be that some people retain something of the plasticity and curiosity of children, whereas others don’t: they mature into “proper” human beings and lose that curiosity and creativity. The former are the creative types; the latter are the average human type.
There are several layered ironies if this is a valid notion.
Anyway, the latter type really do exhaust their interests in maturity (they stick to one career, their interests are primarily friends and family, etc.), so it’s easy to see how, for them, life might be “done” at some point. For geeks, nerds, artists, and probably a lot of scientists too, the curiosity never ends; there’s always interest in what happens next, what’s around the corner, so for them the idea of life extension and immortality is a positive.
All purely sensory qualities of an object are objective, yes. Whatever sensory experience you have of an object is just precisely how that object objectively interacts with your sensory system. The perturbation that your being (your physical substance) undergoes upon interaction with that object via the causal sensory channels is precisely the perturbation caused by that object on your physical system, with the particular configuration (“wiring”) it has.
There are still subjective perceived qualities of objects though—e.g. illusory ones (like the Müller-Lyer illusion, etc., but not “illusions” like the famous “bent” stick in water, which is a sensory experience), pleasant, inspiring, etc.
I’m calling “sensory” here the experience (perturbation of one’s being) itself, and “perception” the interpretation of it (i.e. the hypothetical projection of a cause of the perturbation outside the perturbation itself). Of course in doing this I’m “tidying up” what in ordinary language is often mixed (e.g. sometimes we call what I’m here calling sensory experiences “perceptions”, and vice versa). At least, there are these two quite distinct things or processes going on in reality. There may also be caveats about at what level the brain leaves off sensorily receiving and starts actively interpreting as perception; I’m not 100% sure about that.
Yes, for that person. Remember, we’re not talking about an intrinsic or inherent quality, but an objective quality. Test it however many times you like, the lemon will be sweet to that person—i.e. it’s an objective quality of the lemon for that person.
Or to put it another way, the lemon is consistently “giving off” the same set of causal effects that produce in one person “tart”, another person “sweet”.
The initial oddness arises precisely because we think “sweetness” must itself be an intrinsic quality of something, because there are several hundred years of bad philosophy telling us there are qualia, which are intrinsically private, intrinsically subjective, etc.
Sweetness isn’t an intrinsic property of the thing, but it is a relational property of the thing—i.e. the thing’s sweetness comes into existence when we (with our particular characteristics) interact with it. And objectively so.
It’s not right to mix up “intrinsic” or “inherent” with objective. They’re different things. A property doesn’t have to be intrinsic in order to be objective.
So sweetness isn’t a property of the mental model either.
It’s an objective quality (of a thing) that arises only in its interaction with us. An analogy would be how we’re parents to our children, colleagues to our co-workers, lovers to our lovers. We are not parents to our lovers, or intrinsically or inherently parents, but that doesn’t mean our parenthood towards our children is solely a property of our children’s perception, or that we’re not really parents because we’re not parents to our lovers.
And I think Dennett would say something like this too; he’s very much against “qualia” (at least to a large degree, he does allow some use of the concept, just not the full-on traditional use).
When we imagine, visualize or dream things, it’s like the activation of our half of the interaction on its own. The other half that would normally make up a veridical perception isn’t there, just our half.
Hmm, but isn’t this conflating “learning” in the sense of “learning about the world/nature” with “learning” in the sense of “learning behaviours”? We know the brain can do the latter, it’s whether it can do the former that we’re interested in, surely?
IOW, it looks like you’re saying precisely that the brain is not a ULM (in the sense of a machine that learns about nature), it is rather a machine that approximates a ULM by cobbling together a bunch of evolved and learned behaviours.
It’s adept at learning (in the sense of learning reactive behaviours that satisfice conditions) but only proximally adept at learning about the world.
Great stuff, thanks! I’ll dig into the article more.
I’m not sure what you mean by gerrymandered.
What I meant is that you have sub-systems dedicated to (and originally evolved to perform) specific concrete tasks, and shifting coalitions of them (or rather shifting coalitions of their abstract core algorithms) are leveraged to work together to approximate a universal learning machine.
IOW any given specific subsystem (e.g. “recognizing a red spot in a patch of green”) has some abstract algorithm at its core which is then drawn upon at need by an organizing principle which utilizes it (plus other algorithms drawn from other task-specific brain gadgets) for more universal learning tasks.
That was my sketchy understanding of how it works from evol psych and things like Dennett’s books, Pinker, etc.
Furthermore, I thought the rationale of this explanation was that it’s hard to see how a universal learning machine can get off the ground evolutionarily (it’s going to be energetically expensive, not fast enough, etc.) whereas task-specific gadgets are easier to evolve (“need to know” principle), and it’s easier to later get an approximation of a universal machine off the ground on the back of shifting coalitions of them.
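Just to make concrete what I mean by “shifting coalitions of task-specific gadgets approximating a universal learner” (this is purely a made-up toy sketch of my own, not anything taken from the article): imagine a handful of narrow, hard-wired detectors whose outputs get re-used as features by a generic learning rule sitting on top of them:

```python
# Toy illustration: narrow, "evolved" detectors re-used by a generic learner.
# Everything here (the detectors, the task, the data) is invented for the sketch.

# Task-specific "gadgets": each only does one narrow job.
detectors = [
    lambda thing: 1.0 if thing["colour"] == "red" else 0.0,
    lambda thing: 1.0 if thing["shape"] == "round" else 0.0,
    lambda thing: 1.0 if thing["size"] < 5 else 0.0,
]

# The "organizing principle": a dumb perceptron that learns how to weigh
# the gadgets' outputs for whatever new task comes along.
weights = [0.0] * len(detectors)
bias = 0.0

def predict(thing):
    score = bias + sum(w * d(thing) for w, d in zip(weights, detectors))
    return 1 if score > 0 else 0

def train(examples, epochs=20, lr=0.1):
    global bias
    for _ in range(epochs):
        for thing, label in examples:
            error = label - predict(thing)
            for i, d in enumerate(detectors):
                weights[i] += lr * error * d(thing)
            bias += lr * error

# A new "universal" task the gadgets were never built for: is this thing edible?
examples = [
    ({"colour": "red", "shape": "round", "size": 3}, 1),   # apple
    ({"colour": "red", "shape": "round", "size": 2}, 1),   # cherry
    ({"colour": "grey", "shape": "flat", "size": 30}, 0),  # rock slab
    ({"colour": "red", "shape": "flat", "size": 40}, 0),   # painted wall
]
train(examples)
print(predict({"colour": "red", "shape": "round", "size": 4}))  # should print 1
```

Each lambda stands in for a narrow, evolved gadget; the perceptron loop stands in for the organizing principle that recruits a coalition of them for a task none of them was built for. That’s roughly the picture I had in mind.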
That’s a lot to absorb and I’ve only skimmed it, so please forgive me if responses to the following are already implicit in what you’ve said.
I thought the point of the modularity hypothesis is that the brain only approximates a universal learning machine and has to be gerrymandered and trained to do so?
If the brain were naturally a universal learner, then surely we wouldn’t have to learn universal learning (e.g. we wouldn’t have to learn to overcome cognitive biases, Bayesian reasoning wouldn’t be a recent discovery, etc.)? The system seems too gappy and glitchy, too full of quick judgement and prejudice, to have been designed as a universal learner from the ground up.
I think there’s always been something misleading about the connection between knowledge and belief. In the sense that you’re updating a model of the world, yes, “belief” is an ok way of describing what you’re updating. But in the sense of “belief” as trust, that’s misleading. Whether one trusts one’s model or not is irrelevant to its truth or falsity, so any sort of investment one way or another is a side-issue.
IOW, knowledge is not a modification of a psychological state; it’s the actual, objective status of an “aperiodic crystal” (sequences of marks, sounds, etc.) as filtered via public habits of use (“interpretation” in more of the mathematical sense) to be representational. IOW there are three components: the sequence of scratches, the way the sequence of scratches is used (usually involving interaction with the world, implicitly predicting that the world will react a certain way conditional upon certain actions), and the way the world is. None of those involve belief.
So don’t worry about belief. Take things lightly. Except on relatively rare mission-critical occasions, you don’t need to know, and as Feynman, with typical wisdom, pointed out, it’s ok not to know.
That thing of lurching from believing in one thing as the greatest thing since sliced bread to another, I’m familiar with; but at some point you start to see that emotional roller-coaster as unnecessary.
So it’s not gullibility but lability that’s the key. Like the old Zen master story “Is that so?”:
“The Zen master Hakuin was praised by his neighbours as one living a pure life. A beautiful Japanese girl whose parents owned a food store lived near him. Suddenly, without any warning, her parents discovered she was with child. This made her parents angry. She would not confess who the man was, but after much harassment at last named Hakuin. In great anger the parents went to the master. “Is that so?” was all he would say.
“After the child was born it was brought to Hakuin. By this time he had lost his reputation, which did not trouble him, but he took very good care of the child. He obtained milk from his neighbours and everything else he needed. A year later the girl-mother could stand it no longer. She told her parents the truth—the real father of the child was a young man who worked in the fishmarket. The mother and father of the girl at once went to Hakuin to ask forgiveness, to apologize at length, and to get the child back. Hakuin was willing. In yielding the child, all he said was: “Is that so?”
I remember reading a book many years ago which talked about the “hormonal bath” in the body being actually part of cognition, such that thinking of the brain/CNS as the functional unit is wrong (it’s necessary but not sufficient).
This ties in with the philosophical position of Externalism (I’m very much into the Process Externalism of Riccardo Manzotti). The “thinking unit” is really the whole body—and ultimately the whole world (not quite in the Panpsychist sense, but rather in the sense of any individual instance of cognition being the peak of a pyramid whose roots go all the way through the whole).
I’m as intrigued and hopeful about the possibility of uploading, etc., as the next nerd, but this sort of stuff has always led me to be cautious about the prospects of it.
There may also be a lot more to be discovered about the brain and body in the area of some connection between the fascia and the immune system (cf. the anecdotal connection between things like yoga and “internal” martial arts and health).
Oh, true for the “uploaded prisoner” scenario, I was just thinking of someone who’d deliberately uploaded themselves and wasn’t restricted—clearly suicide would be possible for them.
But even for the “uploaded prisoner”, given sufficient time it would be possible—there’s no absolute impermeability to information anywhere, is there? And where there’s information flow, control is surely ultimately possible? (The image that just popped into my head was something like training mice via flashing lights to gnaw the wires :) )
But that reminds me of the problem of trying to isolate an AI once built.
Isn’t suicide always an option? When it comes to imagining immortality, I’m like Han Solo, but limits are conceivable and boredom might become insurmountable.
The real question is whether intelligence has a ceiling at all—if not, then even millions of years wouldn’t be a problem.
Charlie Brooker’s Black Mirror TV show played with the punishment idea—a mind uploaded to a cube, experiencing subjectively hundreds of years in a virtual kitchen with a virtual garden, as punishment for murder (the murder was committed in the kitchen). In real time, the cube is just casually left on overnight by the “gaoler” for amusement. Hellish scenario.
(In another episode, or it might be the same one, a version of the same kind of “punishment”—except just a featureless white space for a few years—is also used to “tame” a copy of a person’s mind that’s trained to be a boring virtual assistant for the person.)
Thanks for the heads up, never heard of this guy before but he’s very good and quite inspiring for me where I’m at right now.