Sure: compartmentalisation is clearly an intellectual sin
Compartmentalisation is an “intellectual sin” in certain idealized models of reasoning. The outside view says that not only 100% of human-level intelligences in the universe, but 100% of things even remotely intelligent-ish, have been messy systems that used compartmentalisation as one of their basic building blocks, and that 0% have been implementations of these idealized models, in spite of many decades of hard effort and a lot of ridiculous optimism.
So by the outside view, the only conclusion I can see is that the models condemning compartmentalisation have all been conclusively proven wrong, and that nothing they say about actual intelligent beings is relevant.
reality is all one piece
And yet we organize our knowledge about reality into an extremely complicated system of compartments.
Attempts at abandoning that and creating one theory of everything, like Objectivism (Ayn Rand famously had an opinion about absolutely everything, no disagreements allowed), are disastrous.
but we’re running on corrupt hardware so due caution applies.
I don’t think our hardware is meaningfully “corrupt”. All thinking hardware ever made, or likely to be made, must make appropriate trade-offs and use appropriate heuristics. Ours seems to be pretty good most of the time when it matters. Shockingly good. An ideal reasoner with no constraints is not only physically impossible, it isn’t even mathematically possible, by Rice’s theorem and similar results.
Compartmentalisation is one of the most basic techniques for efficient reasoning with limited resources. Without it, complexity explodes far more than linearly, and plenty of ideas that made a lot of sense in their old context get transplanted into another context where they’re harmful.
The hardware stays what it was, and it was already pretty much fully utilized, so to deal with this extra complexity the model either needs to be pruned of a lot of detail the mind could otherwise manage just fine, or other heuristics and shortcuts, possibly with far worse consequences, need to be employed a lot more aggressively.
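The “explodes far more than linearly” claim can be made concrete with a toy count (my own illustration, not something from the thread): if every pair of beliefs must be checked against every other for consistency, the cost is quadratic in the total number of beliefs, whereas checking pairs only within compartments costs a small fraction of that.

```python
# Toy sketch: cost of pairwise consistency-checking, with and without
# compartments. The numbers (1000 beliefs, 10 compartments) are arbitrary.

def pairwise_checks(n: int) -> int:
    """Number of unordered pairs among n beliefs."""
    return n * (n - 1) // 2

n_beliefs = 1000
n_compartments = 10
per_compartment = n_beliefs // n_compartments

# One big undivided map: every belief checked against every other.
global_cost = pairwise_checks(n_beliefs)

# Compartmentalised map: checks happen only within each compartment.
compartmental_cost = n_compartments * pairwise_checks(per_compartment)

print(global_cost)         # 499500
print(compartmental_cost)  # 49500
```

Splitting the same thousand beliefs into ten compartments cuts the checking cost roughly tenfold, and the ratio grows as the map grows; the price, as the comment notes, is that cross-compartment contradictions go unchecked.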
I like this pro-compartmentalization theory, but it is primarily experience which convinces me that abandoning compartmentalization is dangerous and rarely leads to anything good.
it is primarily experience which convinces me that abandoning compartmentalization is dangerous and rarely leads to anything good.
Do you mean abandoning it completely, or abandoning it at all?
The practical reason for decompartmentalisation, despite its dangers, is that science works and is effective. It’s not a natural way for savannah apes to think, and it’s incredibly difficult for most. But the payoff is ridiculously huge.
So we get quite excellent results if we decompartmentalise right. Reality does not appear to come in completely separate magisteria. If you want to form a map, that makes compartmentalisation an intellectual sin (which is what I meant).
By “appears to”, I mean that if we assume that reality—the territory—is all of a piece, and we then try to form a map that matches that territory, we get things like Facebook and enough food and long lifespans. That we have separate maps called physics, chemistry and biology is a description of our ignorance; if the maps contradict (e.g. when physics and chemistry said the sun couldn’t be more than 20 million years old and geology said the earth was at least 300 million years old [1]), everyone understands something is wrong and in need of fixing. And the maps keep leaking into each other.
This is keeping in mind the dangers of decompartmentalisation. The reason for bothering with it is an expected payoff in usefully superior understanding. People who know science works like this realise that a useful map is one that matches the territory, and so decompartmentalise with wild abandon, frequently without considering the dangers. And if you tell a group of people not to do something, at least a few will promptly do it. This helps explain engineer terrorists who have inadvertently decompartmentalised toxic waste and logically determined that the infidel must be killed. And why, if you have a forbidden thread, it becomes an obvious object of curiosity.
The problem, if you want the results of science, is then not whether to decompartmentalise, but how and when to decompartmentalise. And that there may be dragons there.
The practical reason for decompartmentalisation, despite its dangers, is that science works and is effective.
But science itself is extremely compartmentalized! Try getting economists and psychologists to agree on anything, and both have pretty good results, most of the time.
Even microeconomics and macroeconomics make far better predictions when they’re kept separate, and repeated attempts at bringing them together consistently result in disaster.
Don’t imagine that compartmentalization sets up impenetrable barriers once and for all—there’s a lot of cautious exchange between nearby compartments, and their boundaries keep changing all the time. I quite like the “compartments as scientific disciplines” image. You have a lot of highly fuzzy boundaries—like for example computer science to math to theoretical physics to quantum chemistry to biochemistry to medicine. But when you’re sick you don’t ask on programming reddit for advice.
The best way to describe a territory is to use multiple kinds of maps.
I don’t think anything you’ve said actually contradicts anything I said.
Try getting economists and psychologists to agree on anything, and both have pretty good results, most of the time.
What are the examples you’re thinking of, where both are right and said answers contradict, and said contradiction is not resolvable even in principle?
Upvoted. I think this is a useful way to think about things like this. Neither compartmentalizing nor decompartmentalizing is wrong in itself; each goes wrong when applied in the wrong context. So part of the challenge is to convince the person you’re talking to that it’s safe to decompartmentalize in the realm needed to see what you are talking about.
For example, it took me quite some time to decompartmentalize on evolution versus biology because I had a distrust of evolution. It looked like toxic waste to me, and indeed has arguably generated some (social Darwinism, for example). People who mocked creationists actually contributed to my sense of distrust in the early stages, given that my subjective experience with (young-earth) creationists was not of particularly unintelligent or gullible people. However, this got easier when I learned more biology and could see the reference points, and the vacuum of solid evidence (as opposed to reasonable-sounding speculation) for creationism. Later the creationist speculation started sounding less reasonable and the advocates a bit more gullible, but until I started making the connections from evolution to the rest of science, there wasn’t reason for these things to be on my map yet.
I’m starting to think arguments for cryonics should be presented in the form of “what are the rational reasons to decompartmentalize (or not) on this?” instead of “just shut up and decompartmentalize!” It takes time to build trust, and folks are generally justifiably skeptical when someone says “just trust me”. Also it is a quite valid point that topics like death and immortality (not to mention futurism, etc.) are notorious for toxic waste to begin with.
ciphergoth and I talked about cryonics a fair bit a couple of nights ago. He posits that I will not sign up for cryonics until it is socially normal. I checked my internal readout, which came back “survey says you’re right”, and nodded my head. I surmise this is what it will take in general.
(The above is the sort of result my general memetic defence gives. Possibly-excessive conservatism in actually buying an idea.)
So that’s your whole goal. How do you make cryonics normal without employing the dark arts?
I think some additional training in DADA would do me a lot of good here. That is, I don’t want to be using the dark arts, but I don’t want to be vulnerable to them either. And the dark arts are extremely common, especially when people are looking for excuses to keep on compartmentalizing something.
A contest for bored advertising people springs to mind: “How would you sell cryonics to the public?” Then filter out the results that use dark arts. This will produce better ideas than you ever dreamed.
The hard part of this plan is making it sound like fun for the copywriters. Ad magazine competition? That’s the sort of thing that gets them working on stuff for fun and kudos.
(My psychic powers predict approximately 0 LessWrong regulars in the advertising industry. I hope I’m wrong.)
(And no, I don’t think b3ta is quite what we’re after here.)
[1] Though Kelvin thought he could stretch the sun’s age to 500 million years at a push.
So that’s your whole goal. How do you make cryonics normal without employing the dark arts?
Hang out with cryonicists all the time!
Mike Darwin had a funny idea for that. :)