When I was younger...
MrMind
It also happens to me when I get to solve a problem that many have, and realize in retrospect that it was a combination of luck, knowing the right people, and skills that you don’t know how to transfer, possibly because they are genetic traits. It must be frustrating to hear, after a question like “how have you conquered your social anxiety?”, the condensed answer “mostly luck”.
On the other hand, it makes you think when you realize how much these kinds of social status boosters have permeated every step of the hierarchical ladder of any large organization… and yet, somehow, things still work out.
There is, at least at a mathematical / type theoretic level.
In intuitionistic logic, ¬A is translated to A → ⊥, which is the type of processes that turn an element of A into an element of ⊥; but since ⊥ is empty, the whole type is absurd as long as A is instantiated (if not, then the only member is the empty identity). This is also why constructively A → ¬¬A holds but ¬¬A → A does not.
Closely related to constructive logic is topology, and indeed if concepts are open sets, the logical complement of a concept is not a concept. Topology is also nice because it formalizes the concept of an edge case.
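A minimal sketch of this correspondence in Haskell, with `Void` standing in for ⊥ (the names `Not` and `dni` are just illustrative):

```haskell
import Data.Void (Void)

-- Negation as "A implies absurdity": a process turning an a into an
-- inhabitant of the empty type.
type Not a = a -> Void

-- Double-negation introduction is constructive: given an a, we can
-- refute any refutation of a.
dni :: a -> Not (Not a)
dni x notX = notX x

-- Double-negation elimination, Not (Not a) -> a, has no constructive
-- inhabitant: there is no general way to produce an a from a function
-- into Void.
```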
One thing to remember when talking about distinction/defusion is that it’s not a free operation: if you distinguish two things that you previously considered the same, you need to store at least a bit of information more than before. That is something that demands effort and energy. Sometimes, you need to store a lot more bits. You cannot simply become superintelligent by defusing everything in sight.
Sometimes, making a distinction is important, but some other times, erasing distinctions is more important. Rationality is about creating and erasing distinctions to achieve a more truthful or more useful model.
This is also why I vowed to never object that something is “more complicated” if I cannot offer a better model, because it’s always very easy to inject distinctions; the harder part is to make those distinctions matter.
I don’t think you need the concept of evidence. In Bayesian probability, the concept of evidence is equivalent to the concept of truth, in both directions: P(X|X) = 1, so whatever you consider evidence is true; and P(X) = 1 --> P(A /\ X) = P(A|X), so you can consider true sentences as evidence without changing anything else.
Add to this that good rationalist practice is to never assign P(A) = 1 to anything, so that nothing is actually true or actually evidence. You can do epistemology exclusively in the hypothetical: what happens if I consider this true? And then derive consequences.
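Spelling out that second identity from the definition of conditional probability (nothing beyond what is already claimed above):

```latex
\begin{align*}
P(A \mid X) &= \frac{P(A \wedge X)}{P(X)} = P(A \wedge X) && \text{when } P(X) = 1,\\
P(A \wedge X) &= P(A) - P(A \wedge \neg X) = P(A) && \text{since } P(A \wedge \neg X) \le P(\neg X) = 0.
\end{align*}
```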
Well, I share the majority of your points. I think that in 30 years millions of people will try to relocate to more fertile areas. And I think that not even the firing of the clathrate gun will force humans to coordinate globally. Although I am a bit more optimistic about technology, the current status quo is broken beyond repair.
The result is surprising when coupled with the fact that particles do not have a definite spin direction before you measure it. The anti-correlation is maintained non-locally, but the directions are decided by the experiment.
A better example is: take two spheres, send them far away, then make one sphere spin around any axis that you want. How surprised would you be to learn that the other sphere spins around the same axis in the opposite direction?
How probable is it that someone knows their internal belief structure? How probable is it that someone who knows their internal belief structure tells you about it truthfully instead of using a self-serving lie?
The causal order in the scenario is important. If the mother is instantly killed by the truck, then she cannot feel any sense of pleasure after the fact. But if you want to say that the mother feels the pleasure during the attempt or before, then I would say that the word “pleasure” here is assuming the meaning of “motivation”, and the points raised by Viliam in another comment are valid: it becomes just a play on words, devoid of intrinsic content.
So far, Bayesian probability has been extended to infinite sets only as a limit of continuous transfinite functions. So I’m not quite sure of the official answer to that question.
On the other hand, what I know is that even common measure theory cannot say much about the probability of a singleton if the support is continuous: any continuous distribution assigns measure zero to the atomic elements.
And if you’re willing to bite the bullet, and define such an algebra through the use of a measurable cardinal, you end up with an ultrafilter that allows you to define infinitesimal quantities
Under the paradigm of probability as extended logic, it is wrong to distinguish between empirical and demonstrative reasoning, since classical logic is just the limit of Bayesian probability with probabilities 0 and 1.
Besides that, category theory was born more than 70 years ago! Sure, very young compared to other disciplines, but not *so* young. Also, the work of Lawvere (the first to connect categories and logic) began in the 70′s, so it dates at least forty years back.
That said, I’m not saying that category theory cannot in principle be used to reason about reasoning (the effective topos is a wonderful piece of machinery), it just cannot say that much right now about Bayesian reasoning
Yeah, my point is that they aren’t truth values per se, not intuitionistic or linear or MVs or anything else
I’ve also dabbled in the matter, and I have two observations:
I’m not sure that probabilities should be understood as truth values. I cannot prove it, but my gut feeling is telling me that they are two different things altogether. Sure, operations on truth values should turn into operations on probabilities, but their underlying logic is different (probabilities after all should be measures, while truth values are algebras)
While 0 and 1 are not (good) epistemic probabilities, they are of paramount importance in any model of probability. For example, P(X|X) = 1, so 0 and 1 should be included in any such model.
The way it’s used in the set theory textbooks I’ve read is usually this:
define a successor function on a set S: s(S) = S ∪ {S} (the first few iterates of this construction are spelled out after these steps).
assume the existence of an inductive set that contains a set and all its successors. This is a weak and very limited form of infinite induction.
Use Replacement on the inductive set to define a general form of transfinite recursion.
Use transfinite recursion and the union operation to define the step “taking the limit of a sequence”.
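To make the first and last steps concrete, here is how the finite ordinals and the first limit come out under the usual von Neumann encoding (assuming that is the successor the textbooks have in mind):

```latex
\begin{gather*}
s(S) = S \cup \{S\}, \qquad 0 = \emptyset,\\
1 = s(0) = \{0\}, \quad 2 = s(1) = \{0,1\}, \quad 3 = s(2) = \{0,1,2\}, \;\dots\\
\omega = \bigcup_{n} n = \{0, 1, 2, \dots\} \quad \text{(the ``limit'' step, taken as a union)}
\end{gather*}
```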
So, there is indeed the assumption of a kind of infinite process before the assumption of the existence of an infinite set, but it’s not (necessarily) the ordinal ω. You also can’t use it to deduce anything else; you still need Replacement. The same can be said for the existence and uniqueness of the empty set, which can be deduced from the axioms of Separation.
This approach is neither equivalent to nor weaker than having transfinite recursion by fiat; it’s the only correct way if you want to make the least amount of new assumptions.
Anyway, as far as I can tell, having a well-defined theory of sets is crucial to the definition of the surreals, since they are based on set operations and ontology, and use infinite sets of every kind.
On the other hand, I don’t understand your problem with the impredicativity of the definitions of the surreals. These are often resolved into recursive definitions, and since ZF sets are well-founded, you never run into any problem.
> Transfinite induction does feel a bit icky in that finite prooflines you outline a process that has infinitely many steps. But as limits have a similar kind of thing going on I don’t know whether it is any ickier.
Well, transfinite induction / recursion is reduced (at least in ZF set theory) to the existence of an infinite set and the Replacement axioms (the image of a set under a class function is a set). I suspect you don’t trust the latter.
The first link in the article is broken...
Obviously, only the wolves that survive.
Beware of the selection bias: even if veterans show more productivity, it could just be because the military training has selected those with higher discipline
The diagram at the beginning is very interesting. I’m curious about the arrow from relationships to results… care to explain? Does it refer to joint work or collaborations?
On the other hand, it’s not surprising to me that AI alignment is a field that requires much more research and math than software-writing skills… the field is completely new and not very well formalized yet, so probably your skill set is misaligned with the needs of the market.
I should have written “algebraic complement”, which becomes logical negation or set-theoretic complement depending on the model of the theory.
Anyway, my intuition on why open sets are an interesting model for concepts is this: “I know it when I see it” seems to describe a lot of the way we think about concepts. Often we don’t have a precise definition that could adjudicate all the edge cases, but we do have a strong intuition of when a concept applies. This is what happens with recursively enumerable sets: if a number belongs to an R.E. set, you will eventually find out, but if it doesn’t, you need to wait an infinite amount of time. Systems that take seriously the idea that confirmation of truth is easy fall under the banner of “geometric logic”, whose algebraic models are frames, and topologies are just frames of subsets. So I see the relation between “facts” and “concepts” a little bit like the relation between “points” and “open sets”, but more in an “internal language of a topos” or “pointless topology” fashion: we don’t have access to points per se, only to open sets, and we imagine that points are infinite chains of ever more precise open sets.
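A toy sketch of that asymmetry in Haskell (the names `memberRE` and `squares` are made up for illustration): membership in a recursively enumerable set, given as an enumeration, can be confirmed in finite time but never refuted.

```haskell
-- Membership in a recursively enumerable set, given by an enumeration of
-- its elements, is only semi-decidable: if x is in the set the search
-- halts with True; if it is not, the search runs forever.
memberRE :: Integer -> [Integer] -> Bool
memberRE x = go
  where
    go (y:ys)
      | y == x    = True
      | otherwise = go ys
    go []         = False  -- only reachable for finite enumerations

-- Example: the infinite set of perfect squares.
squares :: [Integer]
squares = [ n * n | n <- [0 ..] ]

-- memberRE 49 squares  evaluates to True after finitely many steps;
-- memberRE 50 squares  never returns, mirroring "you need to wait an
-- infinite amount of time".
```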