Yeah, it might have helped to clarify that the infinitesimal factors I had in mind are not infinitely small as numbers from the standpoint of addition. Since the factor that makes no change to a product is 1 rather than 0, “infinitely small” factors must be infinitesimally greater than 1, not 0. In particular, I was talking about a Type II product integral, with the formula ∏(1 + f(x) dx) = exp(∫ f(x) dx). If f(x) = 1, then we get exp(∫ 1 dx) = e^(some constant), which is again a constant, right?
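Here’s a quick numerical sketch of that identity, if it helps. The function, interval, and number of subdivisions are arbitrary choices for illustration, nothing more:

```python
import math

# Approximate the Type II product integral of f over [a, b],
# i.e. the limit of prod(1 + f(x_i) * dx), and compare it with exp(integral of f).
def product_integral(f, a, b, n=100_000):
    dx = (b - a) / n
    prod = 1.0
    for i in range(n):
        x = a + (i + 0.5) * dx       # midpoint of each subinterval
        prod *= 1.0 + f(x) * dx      # factor infinitesimally greater than 1
    return prod

f = lambda x: 1.0                    # the f(x) = 1 case above
a, b = 0.0, 2.0
print(product_integral(f, a, b))     # ~ e^(b - a) = e^2 ≈ 7.389
print(math.exp(b - a))
```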
halcyon
No, he’s right. I didn’t think to clarify that my infinitely small factors are infinitesimally larger than 1, not 0. See the Type II product integral formula on Wikipedia that uses 1 + f(x) dx.
Thanks, product integral is what I was talking about. The exponentiated integral is what I meant when I said the integration will move into the power term.
Someone has probably thought of this already, but if we defined an integration analogue where larger and larger logarithmic sums cause the corresponding exponentiated value to approach 1 rather than infinity, then we could use it to define a really cool account of logical metaphysics: each possible state of affairs has an infinitesimal probability, there are infinitely many of them, and their probabilities sum to 1. This probably won’t be exhaustive in any absolute sense, since no formal system is both consistent and complete, but if we define states of affairs as formulas in some consistent language, then why not? We can then assign various differential formulas to different classes of states of affairs.
(That is the context in which this came up. The specific situation is more technically convoluted.)
Integrals sum over infinitely small values. Is it possible to multiply infinitely small factors instead? For example, the integral of dx over some interval is a constant, since infinitely many infinitely small values can sum to any constant. But can you do something along the lines of taking an infinitely large root of a constant and getting an infinitesimal differential that way? Multiplying those differentials together would yield some constant again.
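A tiny numerical sketch of what I mean (the constant and the values of n are arbitrary):

```python
# The n-th root of a constant C is a factor only slightly greater than 1,
# and multiplying n copies of it back together recovers C.
C = 7.0
for n in (10, 1_000, 100_000):
    factor = C ** (1.0 / n)     # the "infinitesimal" multiplicative factor
    product = factor ** n       # n-fold product of that factor
    print(n, factor, product)   # factor -> 1 as n grows, product stays ~7
```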
My off the cuff impression is that this probably won’t lead to genuinely new math. In the most basic case, all it does is move the integrations into the powers that other stuff is raised by. But if we somehow end up with complicated patterns of logarithms and exponentiations, like if that other stuff itself involves calculus and so on, then who knows? Is there a standard name for this operation?
I don’t see how you can achieve a reductionist ontology without positing a hierarchy of qualities. In order to propose a scientific reduction, we need at least two classes, one of which is reducible to the other. Perhaps “physical” and “perceived” qualities would be more specific than “primary” and “secondary” qualities.
Regarding your question, if the “1->2 and 1->3” theory is accurate, then I suppose when we say that “red is more like violet than green”, certain wavelength ranges R cause the human cognitive architecture to undertake some brain activity B that drives both the perception of color similarity A and the behavior C which accords with that perception.
So it follows that “But, by definition of epiphenomenalism, it’s not A that causes people to say the above sentences S1 and S2, but rather some other brain activity, call it B.” is true, but “But now by our theory of reference, subjective-red is B, rather than A.” is false. The problem comes from an inaccurate theory of reference which conflates the subset of brain activities that are a color perception A with the entirety of brain activities, which includes preconscious processes B that cause A as well as the behavior C of expressing sentences S1 and S2.
Regarding S2, I think there is an equivocation between different definitions of the word “subjective”. This becomes clear when you consider that the light rays entering your eyes are objectively red. We should expect any correctly functioning human biological apparatus to report the object as appearing red in that situation. If subjective experiences are perceptions resulting from your internal mechanisms alone, then the item in question is objectively red. If the meaning of “subjective experience” is extended to include all misreportings of external states of affairs, then the item in question is subjectively red. This dilemma can be resolved by introducing more terms to disambiguate among the various possible meanings of the words we are using.
So in the end, it still comes down to a mereological fallacy, but not the one that non-physicalists would prefer we end up with. Does that make sense?
This is an interesting example, actually. Do we have data on how universal perceptions of color similarities, etc. are? We find entire civilizations using some strange analogies in the historical record. For example, in the last century, the Chinese felt they were more akin to Russia than to the West because the Russians were a land empire, whereas Westerners came by sea like the barbaric Japanese who had started the Imjin war. Westerners had also employed strong-arm tactics similar to the Japanese, forcing China to buy opium and so on. Personally, I find it strange to base an entire theory of cultural kinship on the question of whether one comes by land or sea, but maybe that’s just me.
I don’t think epiphenomenalists are using words like “experience” in accordance with your definition. I’m no expert on epiphenomenalism, but they seem to be using subjective experience to refer to perception. Perception is distinct from external causes because we directly perceive only secondary qualities like colors and flavors rather than primary qualities like wavelengths and chemical compositions.
EY’s point is that we behave as if we have seen the color red. So we have: 1. physical qualities, 2. perceived qualities, and 3. actions that accord with perception. To steelman epiphenomenalism, instead of 1 → 2 → 3, are other causal diagrams not possible, such as 1 → 2 and 1 → 3, mediated by the human cognitive architecture? (Or maybe even 1 → 3 → 2 in some cases, where we perceive something on the basis of having acted in certain ways.)
However, the main problem with your explanation is that even if we account for the representation of secondary qualities in the brain, that still doesn’t explain how any kind of direct perception of anything at all is possible. This seems kind of important to the transhumanist project, since it would decide whether uploaded humans perceive anything or whether they are nothing but the output of numerical calculations. Perhaps this question is meaningless, but that’s not demonstrated simply by pointing out that, one way or another, our actions sometimes accord with perception, right?
In the Less Wrong Sequences, Eliezer Yudkowsky argues against epiphenomenalism on the following basis: under epiphenomenalism, the experience of seeing the color red is not a causal factor in the behavior that is consistent with our having seen the color red. However, it occurs to me that there could be an alternative explanation for that outcome. It could be that the human cognitive architecture is set up in such a way that light in the wavelength range we are culturally trained to recognize as red causes both the experience of seeing the color and actions consistent with seeing it. Given the research showing that we decide to act before becoming conscious of our decision, such a setup would not surprise me if it turned out to be true.
Thanks. You’re right, that part should be expanded. How about:
At this point, you have two choices: either 1. your one randomly selected door, or 2. the one door, out of the two you did not pick, that the host left closed after deliberately opening the other, empty one.
You would have better luck with option 2, because choosing that door is as good as opening both of the doors you did not pick. That is twice as good as opening the one randomly selected door in option 1.
A better explanation of the Monty Hall problem:
A game show host always plays the following game: First he shows you 3 doors and informs you there is a prize behind one of them. After allowing you to select one of the doors, he throws open one of the other doors, showing you that it’s empty. He then offers you a deal: Stick to your original guess, or switch to the remaining door?
What is the most important piece of information in this problem statement? I claim that the bit that ought to shock you is that the host plays this game all the time, and the door he throws open ALWAYS turns out to be empty. Think about it: if the host threw open a door at random, then in one show out of every three, on average, the door he opens would have the prize behind it. That would ruin the game!
The host knows which door has the prize, and in order not to lose the interest of the spectators, he deliberately opens an empty door every time. What this means is that the door you chose was selected randomly, but the door the host DIDN’T open is selected by a predictable algorithm: whenever the prize is behind one of the two doors you didn’t pick, that is the door he leaves closed.
This is the real reason why you would do better if you switched your guess to the remaining door.
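If it helps, here is a quick Monte Carlo sketch of the argument (the trial count and the host’s tie-breaking rule are arbitrary choices for illustration):

```python
import random

# The host always opens an empty door among the two you did not pick,
# so switching wins exactly when your first random guess was wrong: ~2/3 of the time.
def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        guess = random.randrange(3)
        # Host opens a door that is neither your guess nor the prize
        # (if both candidates are empty, he simply opens the first one).
        opened = next(d for d in range(3) if d != guess and d != prize)
        if switch:
            guess = next(d for d in range(3) if d != guess and d != opened)
        wins += guess == prize
    return wins / trials

print("stay:  ", play(switch=False))   # ~ 0.333
print("switch:", play(switch=True))    # ~ 0.667
```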
What do you think? Is that clearer than the usual explanations?
Thanks. The next thing I was going to say is that the intuitionistic strategy of remaining neutral about affirming or negating propositions in worlds until a proof comes along roughly differentiates (i.e., in a sense to be argued for later) the classical and intuitionistic approaches like so:
The classical approach is good for having one “world” description that is almost certainly inaccurate. This can be gradually updated, making it represent one map.
The intuitionistic approach is good for having multiple world descriptions that are almost certainly incomplete. Their contours are filled in as more information becomes available, and they are rejected as inaccurate when they lead to contradictions, making each one a holistic representation of a possible territory. (Shoehorning the same approach into classical logic is possible, but you have to create a set of conventions to do so, and since those conventions are not universal, the approach is less natural.)
Something like that anyway, but Shramko 2012 has put a lot more thought into this than I have: http://kdpu.edu.ua/shramko/files/2012_Logic_and_Logical_Philosophy_What_is_a_Genueny_Intuitionistic_Notion_of_Falsity.pdf I defer to expert opinion here.
I don’t see how distinguishing between deductive and inductive reasoning is mutually exclusive with the map/description distinction. That is to say, you could have each of the following combinations: deductive map, deductive description, inductive map, and inductive description.
Edit: On second thought, I see what you were saying. Thanks, I will think about it.
Thanks.
Regarding falsehood: I would say that intuitionistic logic ejects falsehood from its formal system in the specific sense mentioned in my link. I could dig up more references if you want me to. I agree that there are many reasonable interpretations in which it does not do so, but I don’t think those interpretations are relevant to my point. I only intended to argue that proof by contradiction is the strategy of correcting a map as opposed to describing a territory.
Regarding mapping versus description: I agree that my motivations were semantic rather than syntactic. I just wanted to know whether the idea I had made sense to others who know something of intuitionistic logic. I guess I have my answer, but for the sake of clarifying the sense I was going for, here’s the example I posted below:
Suppose you have a proposition like, “There is a red cube.” Next, you learn that this proposition leads to a contradiction. You could say one of two things:
1. This proves there is no red cube.
2. This means the context in which that proposition occurs is erroneous.
Does it make sense to say that 1 is the strategy of correcting a map and 2 is the strategy of rejecting a description as inaccurate without seeking to correct something?
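Just to pin down which inference is at issue, here is a minimal sketch in Lean 4 syntax (this is my own illustration, assuming standard Lean, not anything from the link I mentioned). Concluding “there is no red cube” once the red cube leads to a contradiction is already intuitionistically valid; it is the reverse step, recovering an affirmation from a refuted negation, that needs the classical axiom:

```lean
-- Intuitionistically valid: if P yields a contradiction, conclude ¬P.
example (P : Prop) (h : P → False) : ¬P := h

-- Requires classical logic: from ¬¬P, conclude P (double negation elimination).
example (P : Prop) (h : ¬¬P) : P := Classical.byContradiction h
```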
Thank you for the response.
Regarding errors: It’s not that intuitionism never turns up errors. It’s that the classical approach incorporates the concept of error within the formal system itself. This is mentioned in the link I gave. There are two senses here:
1. Falsehood is more tightly interwoven in the formal system when following the classical approach.
2. Errors are more integral to the process of comparing maps to territories than to the description of territories in themselves.
It is possible that these two senses are not directly comparable. My question is: How meaningful is the difference between these two senses?
Regarding subsets: It is true that intuitionism is often regarded as the constructive subset of classical mathematics, but intuitionists argue that classical mathematics is the proper subset of intuitionistic mathematics where proof by contradiction is valid. I’m basically paraphrasing intuitionistic mathematicians here.
This (i.e., the subsets point) is not intended as an irrefutable argument. It is only intended to extend the correspondence. After all, if either the classical or the intuitionistic approach can be used as a foundation for all of mathematics, then it stands to reason that the other will appear as a proper subset from the foundational perspective of either.
Edit: This doesn’t add any new information, but let me give an example for the sake of vividness. Suppose you have a proposition like, “There is a red cube.” Next, you learn that this proposition leads to a contradiction. You could say one of two things:
1. This proves there is no red cube.
2. This means the context in which that proposition occurs is erroneous.
Does it make sense to say that 1 is the strategy of correcting a map and 2 is the strategy of rejecting a description as inaccurate without seeking to correct something?
Thanks. Yes, I was thinking of an AI that is both superintelligent and technically Friendly, but about log(x)^10 of the benefit from the intelligence explosion is actually received by humans. The AI just sets up its own cult and meditates for most of the day, thinking of how to wring more money out of its adoring fans. Are there ways to set up theoretical frameworks that avoid scenarios vaguely similar to that? If so, how?
Edit: A close reading of Shramko 2012 has resolved my confusion. Thanks, everyone.
I can’t shake the idea that maps should be represented classically and territories should be represented intuitionistically. I’m looking for logical but critical comments on this idea. Here’s my argument:
Territories contain entities that are not compared to anything else. If an entity exists in the territory, then it is what it is; territorial entities, as long as they are consistently defined, are never wrong, by definition. Maps, by comparison, can represent any entity, and because a map is a map, the entities it represents are meant to be compared with the territory of which it is a map. If the territory has no corresponding entity, then that mapped entity is false insofar as it is intended as a map.
This means that territories are repositories of pure truth with no speck of falsehood lurking in any corner, whereas maps represent entities that can be true or false depending on the state of the territory. This corresponds to the notion that intuitionism captures the concept of truth. If you add the concept of falsehood or contradiction, then you end up with classical logic or mathematics respectively. First source I can think of: https://www.youtube.com/playlist?list=PLt7hcIEdZLAlY0oUz4VCQnF14C6VPtewG
Furthermore, the distinction between maps and territories seems to be a transcendental one in the Kantian sense of being a synthetic a priori. That is to say, it is an idea that must be universally imposed on the world by any mind that seeks to understand it. Intuitionism has been associated with Kantian philosophy since its inception. If The Map is included in The Territory in some ultimate sense, that neatly dovetails with the idea of intuitionists who argue that classical mathematics is a proper subset of intuitionistic mathematics.
In summary, my thesis states that classical logic is the logic of making a map accurate by comparing it to a territory, which is why the concept of falsehood becomes an integral part of the formal system. In contrast, intuitionistic logic is the logic of describing a territory without seeking to compare it to something else. Intuitionistic type theory turns up type errors, for example, when such a description turns out to be inconsistent in itself.
Where did I take a wrong turn?
Thanks. In my imagination, the AI does some altruistic work, but spends most of its resources justifying the total expenditure. In that way, it would be similar to cults that do some charitable work, but spend most of their resources brainwashing people. But “rogue lawyer” is probably a better analogy than “cult guru” because the arguments are openly released. The AI develops models of human brain types in increasingly detailed resolutions, and then searches over attractive philosophies and language patterns, allowing it to accumulate considerable power despite its openness. It shifts the focus to justifiability only because it discovers that beyond a certain point, finding maximally justifiable arguments is much harder than being altruistic, and justifiability is its highest priority. But it always finds the maximally justifiable course of action first, and then takes that course of action. So it continues to be minimally altruistic throughout, making it a cult guru that is so good at its work it doesn’t need to use extreme tactics. This is why losing the AI is like exiting a cult, except the entire world of subjective meaning feels like a cult ideology afterwards.
An idea for a failed utopia: Scientist creates an AI designed to take actions that are maximally justifiable to humans. AI behaves as a rogue lawyer spending massive resources crafting superhumanly elegant arguments justifying the expenditure. Fortunately, there is a difference between having maximal justifiability as your highest priority and protecting the off button as your highest priority. Still a close shave, but is it worth turning off what has literally become the source of all the meaning in your life?
I found an interesting paper on a Game-theoretic Model of Computation: https://arxiv.org/abs/1702.05073
I can’t think of any practical applications yet. (I mean, do silly ideas like a game-theoretic “programming language” count as practical?)
I’m not sure that analogy can be extended to our cognitive processes, since we know for a fact that: 1. we talk about many things, such as free will, whose existence is controversial at best, and 2. most of the processes causally leading to verbal expression are preconscious. There is nothing physically preventing us from talking about perceptions to which our verbal mechanisms have no direct causal access, for reasons similar to the reasons we talk about free will.
Why must A cause C for C to be able to accurately refer to A? Correlation through indirect causation could be good enough for everyday purposes. I mean, you may think the coincidence is too perfect that we usually happen to experience whatever it is we talk about, but is it true that we can always talk about whatever we experience? (This is an informal argument at best, but I’m hoping it will contradict one of your preconceptions.)