Well, ask the question: should the bigger brain receive a million dollars, or do you not care?
I’ve always maintained that in order to solve this issue we must first answer the question: what does it even mean to say that a physical system is implementing a particular algorithm? Does it make sense to say that an algorithm is only approximately implemented? What if the algorithm is something very chaotic, such as prime-checking, where approximation is not possible?
An algorithm should be a box that you can feed any input into, but in the real, causal world there is no such choice. Any impression that you “could” input anything into your pocket calculator is due to the counterfactuals your brain can consider, purely because it has some uncertainty about the world. (An omniscient being could not make any choice at all! Assuming complete omniscience is possible, which I don’t think it is; but let us imagine the universe as an omniscient being or something.)
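To make the worry concrete, here is a toy illustration (a Python sketch of my own, with made-up names, not anything from the post): on the inputs a device actually received, a genuine prime-checker and a replayed lookup table are indistinguishable; only counterfactual inputs separate them.

```python
# Two "physical systems". On the inputs that were actually fed in they agree
# exactly; only counterfactual inputs distinguish "computes primality"
# from "replays a memorized table". (Toy illustration, names made up.)

def is_prime(n: int) -> bool:
    """Actually computes primality by trial division."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# A device that only ever saw these three inputs and memorized the answers.
recorded = {4: False, 7: True, 9: False}

def replay(n: int) -> bool:
    """Looks the answer up; undefined everywhere else."""
    return recorded[n]

observed_inputs = [4, 7, 9]
print(all(is_prime(n) == replay(n) for n in observed_inputs))  # True
# Asking which algorithm the second system "implements" only bites once we
# ask what it would have output on an input it never received.
```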
This leads me to believe that “anthropic binding” cannot be some kind of metaphysical primitive, since for it to be well-defined it needs to be considered by an embedded agent! Indeed, I claimed that recognizing algorithms “in the wild” requires the use of counterfactuals, and omniscient beings (such as “the universe”) cannot use counterfactuals. Therefore I do not see how there could be a “correct” answer to the problem of anthropic binding.
Fantastic work!
How do we express the way that the world might be carved up into different agent-environment frames while still remaining “the same world”? The dual functor certainly works, but how about other ways to carve up the world? Suppose I notice a subagent of the environment, can I switch perspective to it?
Also, I am guessing that an “embedded” Cartesian frame might be one where the world is just the agent along with the environment. Or something. Then, since we can iterate the choice function, it could represent time steps. Though we might in fact need sequences of agents and environments. Anyway, I can’t wait to see what you came up with.
There are two theorems. You’re correct that the first theorem (that there is an unprovable truth) is generally proved by constructing a sort of liar’s paradox, and then the second is proved by repeating the proof of the first internally.
However I chose to take the reverse route for a more epistemological flavour.
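For reference, here is the standard route compressed into symbols (the usual textbook sketch, not the reverse presentation I take in the post):

```latex
Fix a consistent, sufficiently strong, recursively axiomatized theory $T$
with provability predicate $\mathrm{Prov}_T$.

\textbf{First theorem.} Diagonalization yields a sentence $G$ with
\[ T \vdash G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G \urcorner). \]
If $T$ is consistent then $T \nvdash G$ (and, with a bit more care, $T \nvdash \neg G$):
$G$ is true but unprovable.

\textbf{Second theorem.} The argument ``if $T$ is consistent then $G$ is unprovable,
hence true'' can itself be carried out inside $T$, giving
\[ T \vdash \mathrm{Con}(T) \rightarrow G. \]
So if $T$ proved $\mathrm{Con}(T)$ it would prove $G$, contradicting the first theorem;
hence $T \nvdash \mathrm{Con}(T)$.
```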
But we can totally prove it to be consistent, though, from the outside. Its sanity isn’t necessarily suspect, only its own claim of sanity.
If someone tells you something, you don’t take it at face value, you first verify that the thought process used to generate it was reliable.
You are correct. Maybe I should have made that clearer.
My interpretation of the impossibility is that the formal system is self-aware enough to recognize that no one would believe it anyway (it can make a model of itself, and recognizes that it wouldn’t even believe it if it claimed to be consistent).
Gödel Incompleteness: For Dummies
It’s essentially my jumping off point, though I’m more interested in the human-specific parts than he is.
Let There be Sound: A Fristonian Meditation on Creativity
The relevance that I’m seeing is that of self-fulfilling prophecies.
My understanding of FEP/predictive processing is that you’re looking at brains/agency as a sort of thermodynamic machine that reaches equilibrium when its predictions match its perceptions. The idea is that both ways are available to minimize prediction error: you can update your beliefs, or you can change the world to fit your beliefs. That means that there might not be much difference at all between belief, decision and action. If you want to do something, you just, by some act of will, believe really hard that it should happen, and let thermodynamics run its course.
More simply put, changing your mind changes the state of the world by changing your brain, so it really is some kind of action. In the case of Predict-O-Matic, its predictions literally influence the world, since people are following its prophecies, and yet it still has to make accurate predictions; so in order to have accurate beliefs it actually has to choose one of many possible prediction-outcome fixed points.
Now, FEP says that, for living systems, all choices are like this. The only choice we have is which fixed point to believe in.
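Here is that fixed-point picture in toy numerical form (entirely my own sketch, with made-up dynamics, not Friston’s actual formalism): the “world” partially follows the prediction, and settling the prediction error selects one of several self-fulfilling outcomes depending on where you start.

```python
import numpy as np

# Toy free-energy-style loop: prediction error shrinks both because the
# belief moves toward the outcome ("perception") and because the outcome
# moves toward the belief ("action", i.e. people following the prophecy).
# The dynamics here are illustrative assumptions, not the FEP equations.

def world_response(prediction):
    # The world partially follows whatever is predicted.
    # Self-consistent outcomes sit exactly at +1 and -1.
    return 0.5 * prediction + 0.5 * np.sign(prediction)

def settle(belief, steps=100, lr=0.2):
    for _ in range(steps):
        outcome = world_response(belief)   # world reacts to the prediction
        error = belief - outcome           # prediction error
        belief -= lr * error               # belief moves toward the outcome
    return belief

print(settle(belief=0.3))    # converges to +1
print(settle(belief=-0.3))   # converges to -1: a different self-fulfilling fixed point
```

Which fixed point you end up believing in depends only on where the belief started, which is the sense in which the only choice is which fixed point to believe in.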
I find the basic ideas of FEP pretty compelling, especially because there are lots of similar theories in other fields (e.g. good regulators in cybernetics, internal models in control systems, and in my opinion Löb’s theorem as a degenerate case). I haven’t looked into the formalism yet. I would definitely not be surprised to see errors in the math, given that it’s very applied math-flavored and yet very theoretical.
Excellent post; it echoes much of my current thinking.
I just wanted to point out that this is very reminiscent of Karl Friston’s free energy principle.
The reward-based agent’s goal was to kill a monster inside the game, but the free-energy-driven agent only had to minimize surprise. [...] After a while it became clear that, even in the toy environment of the game, the reward-maximizing agent was “demonstrably less robust”; the free energy agent had learned its environment better.
This is the logical induction I was thinking of.
Mammals and birds tend to grow, reach maturity, and stop growing. Conversely, many reptile and fish species keep growing throughout their lives. As you get bigger, you can not only defend yourself better (reducing your extrinsic mortality), but also lay more eggs.
So, clearly, we must have the same for humans. If we became progressively larger, women could carry twins and n-tuplets more easily. Plus, our brains would get larger, too, which could allow for a gradual increase in intelligence during our whole lifetimes.
Ha ha, just kidding: presumably intelligence is proportional to brain size/body size, which would remain constant, or might even decrease...
I’m not sure that probabilities should be understood as truth values. I cannot prove it, but my gut feeling is telling me that they are two different things altogether.
My feeling is that the arguments I give above are pretty decent reasons to think that they’re not truth values! As I wrote: “The thesis of this post is that probabilities aren’t (intuitionistic) truth values.”
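One quick way to see a mismatch (a standalone observation in my own notation, not a summary of the post’s argument): truth values compose truth-functionally, probabilities do not.

```latex
In a Boolean or Heyting algebra of truth values, the value of a compound statement
is a function of the values of its parts:
\[ v(A \wedge B) = v(A) \wedge v(B). \]
Probability admits no such function. With two fair coins, let $A$ and $B_1$ both be
``coin 1 is heads'' and $B_2$ be ``coin 2 is heads''. Then
\[ P(A) = P(B_1) = P(B_2) = \tfrac{1}{2}, \qquad
   P(A \wedge B_1) = \tfrac{1}{2}, \qquad
   P(A \wedge B_2) = \tfrac{1}{4}, \]
so no function of the pair $(P(A), P(B))$ can recover $P(A \wedge B)$.
```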
Indeed, and ∞-categories can provide semantics for homotopy type theory. But ∞-categories are ultimately based on sets. At some point, though, maybe we’ll use HoTT to “provide semantics” to set theories, who knows.
In general, there’s a close syntax-semantics relationship between category theory and type theory. I was expecting to touch on that in my next post, though!
EDIT: Just to be clear, type theory is a good alternate foundation, and type theory is the internal language of categories.
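The propositions-as-types leg of that correspondence fits in one line (a throwaway Lean example of my own, purely for illustration):

```lean
-- Curry-Howard in miniature: a proof of A → (A → B) → B is literally
-- a program that applies a function to an argument.
example (A B : Prop) : A → (A → B) → B :=
  fun a f => f a
```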
Yes, I have! Girard is very… opinionated; he is fun to read for that reason. That is, Jean-Yves has some spicy takes:
Quantum logic is indeed a sort of punishment inflicted on nature, guilty of not yielding to the prejudices of logicians… just like Xerxes had the Hellespont – which had destroyed a boat bridge – whipped.
I enjoyed his book “Proofs and Types” as an introduction to type theory and the Curry-Howard correspondence. I’ve looked through “The Blind Spot” a bit and it also seemed like a fun read. Of course, you can’t avoid his name if you’re interested in linear logic (as I currently am), since the guy invented it.
Why Rationalists Shouldn’t be Interested in Topos Theory
That all makes more sense now :)
In our case the towel rack was right in front of the toilet, so it didn’t have to be an ambient thing haha
I just want to point out that you should probably change your towel at least every week (preferably every three uses), especially if you leave it in a high humidity environment like a shared bathroom.
I can’t even imagine the smell… Actually, yes I can, because I’ve had the same scenario happen to me at another rationalist sharehouse.
So, um, maybe every two months is a little bit too long.
A few obvious alternatives:
1. Everyone leaves their towel in their room.
2. Guests leave their towels in their rooms. The common towels are put into a hamper every week, and the hamper goes to the laundry when it’s full.
3. Have fewer towels. Not the best solution, since that doesn’t solve the problem of not having any towels while they’re being washed, but it could create more incentive to change them more often.

This is definitely the sort of coordination problem that happens when you have a lot of people living together, but I also have a feeling that this should not happen at all, somehow. Like, in general, if this is like a hostel, then guests should behave as guests in a hostel, and the hostel itself should have people responsible for regular cleaning (this could be the permanent housemates). There is definitely a privacy and autonomy tradeoff at hostels.
Yes, I am arguing against the ontological realism of anthropic binding. Beyond that, I feel like there ought to be some way of comparing physical systems and having a (subjective) measure of how similar they are, though I don’t know how to formalize it.
It is, for example, clear that I can relate to a penguin, even though I am not a penguin. That means the penguin and I probably share some similar subsystems, and therefore if I care about the anthropic measure of my subsystems then I should care about penguins, too.