Hmm, also, writing can be an excuse for generating ideas! The obvious thing to do would be to wait for good ideas to come into your head, then write them up for the world to see. But in my experience, writing and having my writing read boosts my ego, which somehow encourages my subconscious to throw up ideas which can be written up to derive yet more ego boosts. It’s a virtuous cycle. Which makes not writing because you can’t think up any good ideas a vicious one.
Yes, but they assess your blog mostly on its most recent posts. So you should just put it out there and improve anyway. That way you’ll always have the best audience your current skills can get you.
It’s perhaps worth pointing out that just as there is nothing to compel you to accept notions such as “cosmic significance” or “only egotism exists”, by symmetry there is also nothing to compel you to reject them (except for your actual values, of course). So it really comes down to your values. For most humans, the concerns you have expressed are probably confusions: we pretty much share the same values, and we also share the same cognitive flaws, which let us elevate what should be mundane facts about the universe into something with moral force.
Also, it’s worth pointing out that there is no need for your values to be “logically consistent”. You use logic to figure out how to go about satisfying your values in the world, and unless your values specify a need for a logically consistent value system, there is no need to logically systematize them.
Well, that was what in fact happened. But what could have happened was perhaps a nuclear war leading to “significant curtailment of humankind’s potential”.
cousin_it’s point was that perhaps we should not even begin the arms race.
Consider the Terminator scenario: they send the Terminator back in time to fix things, but sending it back is precisely what provides the past with the technology that eventually leads to the cataclysm in the first place.
EDIT: included Terminator scenario
Hmm, I wonder if there is a bias in human cognition which makes it easier for us to think of ever larger utilities/disutilities than of ever tinier probabilities. My intuition says the utilities are easier to imagine, which is why I tend to be skeptical of such large-impact, small-probability events.
Yes, the way we define ‘perfect Bayesian’ is unfair, but is this really a problem?
I tried to say that being irrational aids discovery.
If discovery contributes to utility then our Bayesian (expected utility maximizer) will take note of this.
Here is another example.
You are here relying on a definition of rational which excludes being good at coordination problems.
this sounds like hindsight bias to me
Au contraire, I think our pride in our ‘irrationality’ is where the hindsight bias is! Like you said, we got lucky. That would be fine if our ‘luck’ were of a consistent type. But in all likelihood the way we exposed ourselves to serendipity was suboptimal.
It’s entirely possible for our Bayesian to lose to you. It’s just improbable.
You expect that noisy ‘non-Bayesian’ exploration will yield greater success. If you are correct, then this is what the perfect Bayesian would expect as well. You seem to be thinking that a ‘rational’ agent needs to have some rationale or justification for pursuing some path of exploration, and this might lead it astray. Well, if it does that, it’s just stupid, and not a perfect Bayesian.
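To make the expected-utility point concrete, here is a minimal sketch in Python. The strategies, payoffs, and probabilities are entirely made up for illustration; the only point is that an expected-utility maximizer whose beliefs say noisy exploration pays off better will simply choose noisy exploration, no special ‘irrationality’ required.

```python
# Hypothetical numbers, purely for illustration.
strategies = {
    # strategy: list of (probability, utility) pairs the agent believes in
    "focused_search":    [(0.9, 10), (0.1, 0)],   # reliable but modest payoff
    "noisy_exploration": [(0.7, 0), (0.3, 50)],   # usually nothing, occasionally a big find
}

def expected_utility(outcomes):
    """Expected utility under the agent's own beliefs."""
    return sum(p * u for p, u in outcomes)

# The 'perfect Bayesian' just takes whichever option has the highest expected utility.
best = max(strategies, key=lambda s: expected_utility(strategies[s]))

for name, outcomes in strategies.items():
    print(f"{name}: EU = {expected_utility(outcomes):.1f}")
print("chosen:", best)  # with these numbers: noisy_exploration (EU 15 vs 9)
```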
I don’t think you managed to establish that a perfect Bayesian would do worse than a human. But I think you hit upon an important point: it is quite possible for solutions in the search space to be so sparse that no process whatsoever can reliably hit them and yield consistent recursive self-improvement.
So, one possible bottleneck they missed:
Sparsity of solutions in the search space
Yeah. But it’s certainly possible for both theories to be true. Morality is a pretty big umbrella term anyway. Also, evolution likes to exapt existing adaptations for other functions.
Even losers buy morality. This is OK, since they are usually hypocritical enough not to employ it in important Near-mode decisions. Costly morality is an honest signal; not playing along with the signaling game signals… that you are a loser. None of this is conscious, of course: the directors weren’t deliberately trying to deceive the audience. But what they subconsciously end up doing benefits those who can afford the costly morality more than those who cannot.
They don’t really. Or if they do, with very much less urgency than when confronted with the possibility of being eaten by a tiger.
I’m reminded of movies where people in impossibly tough situations stick to impossibly idealistic principles. The producers of the movie want to hoodwink you into thinking such people would stand by their luxurious morality even when the going gets tough. The truth is, their adherence to such absurdly costly principles is precisely a signal that, compared to those who cannot afford their morality, they have it easy.
Pascal’s wager was a very detached and abstract theological argument. If Pascal’s heart rate did increase from considering the argument, it was from excitement about showing off his clever new argument, rather than from the sense of urgency the expected utility calculation was supposed to convey, and with which he insincerely sold the argument.
Meditating on my conversation with paulfchristiano below, I realize that our intuitive conception of intelligence is probably not coherent. The following is quite a controversial point, but I think a lot of what counts as intelligence was under sexual selection.
This paper shows how the best explanation for the genetic variance underlying intelligence is mutation-selection balance, rather than selective neutrality or balancing selection. Traits under mutation-selection balance have a high mutational target size, i.e. rare mutations of significant effect all over the genome affect their expression. This makes such a trait a very good fitness indicator, as it tells you the mutation load of an individual. Hence if you see an intelligent person, you can be reasonably certain that the person has fewer deleterious mutations. Physical beauty is another such fitness indicator.
Now, using physical beauty as an analogy: let’s say facial symmetry correlates with genetic quality. Then it will become a fitness indicator, as the opposite sex benefits from knowing the genetic quality of potential mates. This they experience as ‘beauty’. Now, breast symmetry also happens to correlate with genetic quality. How does natural selection get people to appreciate this? Well, since we already have a conception of beauty, why not go with that? So facial and breast symmetry both fall under ‘beauty’, even though phenotypically they don’t have much to do with each other. For one, they serve very different functions.
We should expect the same for our intuitive appraisal of the intelligence of others. Performance on diverse mental tasks with no intrinsic relation to one another happens to correlate with genetic quality, so natural selection makes us perceive their aggregate as ‘intelligence’.
Buyer beware!
Oh yes, one can certainly train oneself to think more efficiently/effectively/creatively/etc. But this is not the same as improving intelligence. Think of it as using better software rather than improving the hardware. You can certainly call this improving intelligence if you like, but then realize that what you are doing is training a few key cognitive processes that happen to be useful in many domains. Which is to say, you won’t automatically be better at other mental tasks that don’t happen to require those cognitive processes.
Theories other than intelligence-as-synaptic-plasticity also don’t seem to allow improvement via training, because most of them hypothesize that intelligence has something to do with the hardware of the brain. They hypothesize this because the more diverse the tests one aggregates, the more correlated the aggregated measure is with g. This, together with the fact that tasks with low environmental variance correlate more highly with g, suggests that what aggregation does is cancel out environmental factors. That in turn strongly suggests that our notion of intelligence, or our impression of someone’s intelligence, depends on a person’s overall mental ability over a wide range of tasks. We wouldn’t be impressed with a person who could multiply ten-digit numbers if she did not also excel at a wide range of other mental tasks.
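If it helps, here is a small simulation sketch of the aggregation argument. The additive test-score model and all the numbers are illustrative assumptions of mine, not results from any paper: each test score is a latent general factor plus test-specific environmental noise, so averaging many diverse tests cancels out the noise and the aggregate correlates more strongly with g.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 10_000, 12

g = rng.normal(size=n_people)                             # latent general factor
noise = rng.normal(scale=1.5, size=(n_people, n_tests))   # independent environmental noise per test
tests = g[:, None] + noise                                 # each test = g + its own noise

single_r = np.corrcoef(g, tests[:, 0])[0, 1]              # one test vs g
aggregate_r = np.corrcoef(g, tests.mean(axis=1))[0, 1]    # average of all tests vs g

print(f"single test vs g: r = {single_r:.2f}")     # roughly 0.55 with these settings
print(f"aggregate vs g:   r = {aggregate_r:.2f}")  # noticeably higher, roughly 0.9
```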
This is not to say that to be more intelligent, one has to be better at everything. But then, why care about intelligence per se? One shouldn’t be too impressed with intelligence, because the whole point is to accomplish specific intellectual tasks, no? Hence my suggestion in the first paragraph to identify the cognitive processes influential in the performance of the intellectual tasks you care about.
Note that intelligence is a fitness indicator. We know this from psychological studies of sexual attraction and intelligence, from the fact that g has high genetic variance, and from the fact that we haven’t found any genes which influence intelligence significantly. It is too easy to be impressed by intelligence and think that it can solve just about anything, without training in the relevant intellectual tasks to go with it.
Well, what is training? Systematic repeated exposure, right? And what this is supposed to do is wire the brain in a certain way. But that first paper also suggests intelligence is something like synaptic plasticity, i.e. the ability to learn. There just isn’t a mechanism by which training can improve synaptic plasticity itself.
I don’t mean give up on it long term; with future understanding we can certainly find a way to improve our own intelligence (though probably not via training). So I don’t see why I should say the same for AI or cognitive science.
Err no! He says that ‘real’ means something like causally accessible from where we are. It’s something like “from my perspective I am real, but from the perspective of a fictional-me in a fictional-universe, I am not, while the fictional me is real”. Except this is not a very helpful way to define ‘real’. There is no meta-realness, but relativistic realness is just as useless. Drescher dissolves the issue by reducing ‘real’ to something like “whatever we can possibly get at from where we are in this universe”.
There is nothing to be disciplined or rigorous about when making such a quote. What you see here is all there is to it. However, scholars might want you to think otherwise: by obfuscating their work, they can make it seem more impressive.
And what exactly does sink into them? What do they really learn? Would Chesterton agree with Robin Hanson that the explicit curriculum is just a subterfuge for ingraining in students obedience to authority?
And from a non-cynical angle, this can be said of all learning. To be able to learn something, you have to have reasonably understood its prerequisites. So naturally, if you look at something you have just taught someone, it would seem like all you have managed to teach them were the assumptions.
According to this paper, the skills that correlate most highly with g are those with the lowest environmental variance. Working memory is the best illustration of this; its correlation is so high that some researchers want to equate it with g.
According to this paper, genetic variance in intelligence is maintained by mutation-selection balance. This means it is a quantitative trait with a large number of tiny genetic factors influencing its overall value, making it a good fitness indicator. Hence we can think of intelligence as overall mental condition/health. It is unlikely that intelligence has any one underlying cause or mechanism, or even a few with large influence.
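A similarly hedged sketch of the fitness-indicator logic, with mutation rates and effect sizes invented for illustration rather than taken from the paper: if a trait is dragged down slightly by rare deleterious mutations at very many loci, then observing the trait tells you a lot about an individual’s overall mutation load.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_loci = 5_000, 1_000

# each locus independently carries a rare deleterious mutation
mutations = rng.random((n_people, n_loci)) < 0.01
load = mutations.sum(axis=1)                    # total mutation load per person

# trait = baseline - small cost per mutation + environmental noise
trait = 100 - 1.0 * load + rng.normal(scale=2.0, size=n_people)

r = np.corrcoef(trait, load)[0, 1]
print(f"correlation between trait and mutation load: r = {r:.2f}")  # around -0.8 here
```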
So you have two strategies for a good measure of intelligence: tasks with low environmental variance, and tasks which tap diverse mental skills. This is pretty much what the various existing IQ tests set out to do.
As for success in various pursuits, I say rely on your overall assessment of the intelligence of the person. Of course, don’t forget creativity, discipline, drive, etc., which can be equally important. Beyond this, you’d have to go into the specific details of the particular pursuit; perhaps it requires specialized mental skills, quirky psychological profiles, etc.
As for training intelligence, forget it. Even transfer of learning doesn’t work. You are best off focusing your training on the specific skills integral to the tasks involved in achieving your goals.
But note that R&D, basic research, is unpredictable in the sense that we as outsiders don’t know which narrowly focused group will succeed. It is very rare that, when some group does succeed, it consists of undisciplined dilettantes pursuing research in an unfocused manner. So it’s a matter of not knowing which research goals have the highest payoffs, rather than of not knowing which goals you as a researcher are interested in pursuing.
Or think about it this way: the existing social epistemology setup already implements what is necessary to reap the rewards of curiosity at this larger scale. You, as an individual researcher, should rather narrow your curiosity to what you are immediately working on.
Being mediocre makes you boring. I am all for interestingness. The optimal curiosity-focus balance for that is somewhere in between.
Excellent!
I would summarize what I think is the most essential insight of your comment as: ‘Curiosity is playful exploration. Chase is directed pursuit. Do not confuse the two’
However, you seem to be too big a fan of curiosity. Most of us intellectually curious types are probably too unconditionally curious for our own good. Your enthusiasm for your favorite novels is a good example: you admit they artificially cultivate in you a desire to know what happens next, via clever plot trickery. Unfortunately, reality and your goals are such that following your curiosity will not lead to the information/knowledge with the highest payoff, especially in this modern technical environment, where our ancestrally adapted curiosity heuristics probably often go astray. Following the scent of curiosity will ultimately lead you to learn about stuff irrelevant to your goals. It is highly unlikely that the marginally most interesting stuff lies in the direction of the greatest marginal expected benefit of new knowledge/info for achieving your goals. Effective goal pursuit requires crossing valleys of boredom.
I would say curiosity is an investment, and like all good investments it should be targeted; but when you really need/want to get something done, chase.
Hmm, I don’t happen to find your argument very convincing. I mean, what it does is to pay attention to some aspect of the original mistaken statement, then find another instance sharing that aspect which is transparently ridiculous.
But is this sufficient? You can model the statement “apples and oranges are good fruits” in predicate logic as “for all x, Apple(x) or Orange(x) implies Good(x)” or in propositional logic as “A and O” or even just “Z”. But it should really depend on what aspect of the original statement you want to get at. You want a model which captures precisely those aspects you want to work with.
So your various variables actually confused the hell outta me there. I was trying to match them up with the original statement and your reductio example. All the while not really understanding which was relevant to the confusion. It wasn’t a pleasant experience :(
It seems to me much simpler to simply answer: “Turing machine-ness has no bearing on moral worth”. This I think gets straight to the heart of the matter, and isolates clearly the confusion in the original statement.
Or further guess at the source of the confusion, the person was trying to think along the lines of: “Turing machines, hmm, they look like machines to me, so all Turing machines are just machines, like a sewing machine, or my watch. Hmm, so humans are Turing machines, but by my previous reasoning this implies humans are machines. And hmm, furthermore, machines don’t have moral worth… So humans don’t have moral worth! OH NOES!!!”
Your argument seems like one of those long math proofs which I can follow step by step but cannot grasp its overall structure or strategy. Needless to say, such proofs aren’t usually very intuitively convincing.
(but I could be generalizing from one example here)