This essay was co-written with Namespace; you can also read it on his blog here.
This post continues our exploration of truth. Here we begin to dig into the important parts of rationality, starting with the concept of Necessity and Necessary Beliefs.
Sure, agreed.
Sure, agreed.
Yep
Why? This seems like a classic mistake of naive rationalism. Models exist for reasons, and brains have limits. Depending on my use case, it may make sense for me to have a set of heuristics I know don’t fit together because they’re useful.
To use an example from elsewhere in the post, tensile strength is a leaky abstraction compared to something like the universal wave function, but the former is much more useful for building bridges. Meanwhile, a firefighter is going to be using heuristics like “close to breaking” and “how much weight it can bear”, or even a vague feeling of “danger”, which are only leaky abstractions of things like tensile strength.
Now, ‘should’ my understanding of all those pieces fit together? Not really; it depends on what I want to use those models for.
Like Cicero, he was doing philosophy at a time when philosophy meant living a better life. He gathered suggestions from different systems of thought which practically helped one live a better life, and they helped form a foundation for Stoicism, a set of tools and heuristics for living better. “Should” he have created a set of principles that were completely logically consistent? Well, it may have helped him generate more of them. But, in terms of a set of practical tools and thoughts that helped one live better, he did quite well curating in an eclectic style.
I think there are two issues here: 1) what are the right beliefs to have about life, and 2) what is the right emotional attitude toward life. You paint a picture of truth as a harsh destroyer of illusions, but why not describe it as a source of wonder / beauty / power / progress instead?
Tell someone they can fly, and they may be excited to learn how. Tell someone they can’t, and they may be reluctant to believe you.
If we accept only what we want to believe,* how will we:
Find the truth
Obtain the power/knowledge/etc. necessary to make things better?**
*This can go either way. If we want to be able to do things, then things being possible is great. If we want to do nothing, then things being impossible is great. (Or, to make a better case: we may not want to believe people are capable of doing terrible things, etc. It is ‘possible’ to do terrible things (consider nukes, biorisk, etc.; AI risk may include the claim that ‘agency’ is not required to ‘do evil’).)
ETA:
**And what if some things can’t be improved?
If you don’t consider a device like a bike to be ‘assistance’, I think it is possible.
I was considering something like a bike to be ‘assistance’.
A bike is a form of mechanical leverage assistance that allows us to reach speeds we could not otherwise reach with the human body alone. Similarly, hang-glider wings and the like are a form of mechanical leverage that allows us to fly when we would not otherwise be able to. Humans are kind of pathetic without technology; we can’t do so much as push a nail into a piece of wood without some form of mechanical assistance.
“You can’t design a bridge without actually knowing the tensile strength of steel and the compressive strength of concrete, these facts are not open to interpretation. Designing a society is no different [..]”
Distinguish necessity and sufficiency. There may be some objective truths that can be leveraged for social engineering, but it’s obvious that designing a society also involves solving questions outside the hard sciences, ranging from social psychology to ethics. You’re begging an enormous question there.
“it’s obvious that designing a society also involves solving questions outside the hard sciences”
These questions should not be outside the hard sciences; that’s the point Alfred Korzybski was making all the way back in 1921. There’s no reason we shouldn’t be trying to treat ethics and psychology like hard sciences.
Unless there is. There are many theoretical arguments for why psychology and ethics can’t be solved by the hard sciences, and there is a dearth of practical evidence that they can.
Simply stating such a controversial claim isn’t proof, and resting it on Korzybski’s authority isn’t proof either.
I can see the basis of arguments that ethics could not be solved with hard science; I disagree with them, but they at least have some basis. But psychology? Really? Are human beings not part of reality? Are human brains just magical boxes beyond our mortal comprehension? The hard problems of consciousness will be solved eventually. Cognitive neuroscience is making strong strides. Once we have a map of the connectome, we’ll be well on our way to really understanding how brains work. Psychology should absolutely be treated as a hard science.
Now? When the promissory note has not been cashed?
In any case, that is not the main problem. The main problem is that “X is a real thing in reality” doesn’t in any way guarantee that X is comprehensible to some entity Y. Slugs and stars are both real, but slugs can’t understand stars. In fact, they can’t understand slugs. We don’t know whether we are smart enough to understand ourselves.
And cognitive limitations aren’t the only problem. Epistemology has inherent problems, such as the problem of unfounded foundations, which can’t be solved by throwing compute at them.
Dyson’s Law of Artificial Intelligence
“Anything simple enough to be understandable will not be complicated enough to behave intelligently, while anything complicated enough to behave intelligently will not be simple enough to understand.”
There’s Occam’s Guillotine and then there’s Ockham’s Guillotine. The latter is directly relevant to algorithmic information theory’s truth claims and, if funded to the tune of the billions it deserves, will terminate the quasi-religious squabbling over social policy that threatens to turn into another Thirty Years’ War in the not-too-distant future.