Not fully, unfortunately, although a baseline would be asking an LLM to convert my LaTeX file into Markdown that supports MathJax.
[Question] Turning latexed notes into blog posts
Someone recently tried to sell me on the Ontological Argument for God which begins with “God is that for which nothing greater can be conceived.” For the reasons you described, this is completely nonsensical, but it was taken seriously for a long time (even by Bertrand Russell!), which made me realize how much I took modern logic for granted
I didn’t think much of your comment at the time, but I think it’s extremely central to the whole thing now. We go from unconscious to conscious almost all at once.
Are there any other nice decision problems that are low? A quick search only reveals existence theorems.
Intuitive guess: Can we get some hierarchy from oracles to increasingly sparse subsets of the digits of Chaitin’s constant?
*The last two bullet points: meta-consciousness and self-consciousness.
I meant if you had any suggested rewords, because there don’t seem to be any perfect definitions of these concepts.
“Easy problems of consciousness” is an established term that is a bit better-defined than consciousness. By transcending, I just meant beyond what can be explained by solving the easy problems of consciousness
This was actually what I meant by a version of panpsychism that seemed to be the natural conclusion of humans having subjective experiences, but a conclusion I want to see if I can avoid.
I tried some different definitions of consciousness while writing this post, until settling on “able to have subjective experiences that transcend the ‘easy problems of consciousness’”
Do you have any suggestions for making this more precise?
I’d like to explore these in more depth, but for now I’ll just reduce all the angles you provided to your helpful summaries/applications. I’ll call the perspective of going from adult human to zygote the “physical history” and the perspective of going up the ancestral tree the “information history” (for simplicity, maybe we stop as soon as we hit a single-celled organism).
Sentience: This feels like a continuous thing that gets less and less sophisticated as we go up the information history. In each generation, the code gets a little better at using the laws of physics and chemistry to preserve itself. Of course if one has a threshold for what counts as sentience, it will cross it at some point, but this still strikes me as continuous.
Wakefulness: This would strike me as a quantized thing from both the information and physical history perspective. At some point in both histories, the organism/cell would pick up some cyclic behavior.
Intentionality: I’d need to look more at this, because my interpretation of your first sentence doesn’t make sense with the second.
Phenomenal, Self-Consciousness, Meta-Consciousness: Definitely quantized in both perspectives.
When I was thinking of subjective experience, I think the only concepts here that are either weaker or stronger than what I had in mind are the last two. For the rest, I think I can both imagine a robot that satisfies the conditions and imagine a conscious being that does not satisfy the condition.
But the last two still feel too strong. I will think more about it.
That’s a bit of a long read, and both your endorsement and the title seem too strong to be believable. If a few more people endorse that it’s worth reading, I’ll give it a go!
[Question] Quantized vs. continuous nature of qualia
Very nice! Notice that if you write as , and play around with binomial coefficients a bit, we can rewrite this as:
which holds for as well, in which case it becomes the derivative product rule. This also matches the formal power series expansion of , which one can motivate directly.
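The inline formulas above were lost in this copy, so I can't recover the exact statement, but one plausible reading (an assumption on my part) is the generalized Leibniz rule, where the binomial coefficients extend to negative orders and the negative-order case becomes a repeated-integration-by-parts series:

```latex
% Assumed reconstruction, not the original formula:
% generalized Leibniz rule for integer n (negative n = anti-derivatives),
% using generalized binomial coefficients
(fg)^{(n)} = \sum_{k \ge 0} \binom{n}{k} f^{(k)} g^{(n-k)}
% For n = 1 only k = 0, 1 survive, recovering the product rule.
% For n = -1, \binom{-1}{k} = (-1)^k, giving
\int fg = \sum_{k \ge 0} (-1)^k f^{(k)} g^{(-(k+1))}
```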
(By the way, how do you spoiler tag?)
This is true, but I’m looking for an explicit, non-recursive formula that needs to handle the general case of the kth anti-derivative (instead of just the first).
The solution involves doing something funny with formal power series, like in this post.
Here’s a puzzle I came up with in undergrad, based on this idea:
Let be a function with nice derivatives and anti-derivatives (like exponentials, sine, or cosine) and be a polynomial. Express the kth anti-derivative of in terms of derivatives and anti-derivatives of and .
Can provide a link to a post on r/mathriddles with the answer in the comments upon request.
Suppose we don’t have any prior information about the dataset, only our observations. Is any metric more accurate than assuming our dataset is the exact distribution and calculating mutual information? Kind of like bootstrapping.
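A minimal sketch of that plug-in estimate (treating the empirical sample as the exact distribution), in Python; the function name and toy data are my own:

```python
from collections import Counter
from math import log2

def plugin_mutual_information(pairs):
    """Plug-in MI estimate: treat empirical frequencies as the true joint distribution."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), count in joint.items():
        p_xy = count / n
        # p_xy / (p_x * p_y) written to avoid building separate probability dicts
        mi += p_xy * log2(p_xy * n * n / (px[x] * py[y]))
    return mi

# Perfectly correlated sample: MI equals the entropy of either variable (1 bit here)
sample = [(0, 0), (0, 0), (1, 1), (1, 1)]
print(plugin_mutual_information(sample))  # 1.0
```

One known caveat: the plug-in estimator is biased upward on finite samples, which is why bias-corrected variants (e.g., Miller–Madow) exist; bootstrapping over the sample gives a spread but inherits the same plug-in bias.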
For the second paragraph, we’re assuming this AI has not made a mistake in predicting human behavior yet after many, many trials in different scenarios. No exact probability. We’re also assuming perfect levels of observation, so we know that they pressed a button, bombs are heading over, and any observable context behind the decision (like false information).
The first paragraph contains an idea I hadn’t considered, and it might be central to the whole thing. I’ll ponder it more.
I didn’t get around to providing more clarity. I’ll do that now:
Both parties would click the button if it was clear that the other party would not click the button in retaliation. This way they do not have to worry about being wiped off the map.
The two parties would both prefer a world in which only the other party survives to a world without any humanity.
We know that the other party will click the button if and only if they predict with extremely high confidence that we will not retaliate. Our position is the same.
It’s extremely beautiful, and seems like it would serve as a nice introduction to the website that isn’t subject to the same random noise as the front page.
I really like ‘leastwrong’ in the URL and top banner (header?), but I could see how making ‘The LeastWrong’ the actual title could strike some as pretentious.
Frequentist and Bayesian reasoning are two ways to handle Knightian uncertainty. Frequentism gives you statements that are outright true in the face of this uncertainty, which is fantastic. But this sets an incredibly high bar that is very difficult to work with.
For a classic example, say you have a possibly biased coin in front of you and you want to say something about its rate of heads. From frequentism, you can lock in a method of obtaining a confidence interval after, say, 100 flips and say, “I’m about to flip this coin 100 times and give you a confidence interval for p_heads. The chance that the interval will contain p_heads is at least 99%, regardless of what the true value of p_heads is.” There’s no Bayesian analogue.
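A sketch of one such pre-committed procedure, using Hoeffding's inequality (my choice of method; the comment doesn't pin one down): with n flips and half-width sqrt(ln(2/α)/(2n)), the interval covers the true p_heads with probability at least 1 − α, no matter what p_heads is.

```python
import random
from math import log, sqrt

def hoeffding_interval(flips, alpha=0.01):
    """Distribution-free CI: covers the true p with prob >= 1 - alpha, for ANY p."""
    n = len(flips)
    p_hat = sum(flips) / n
    half_width = sqrt(log(2 / alpha) / (2 * n))  # Hoeffding bound
    return max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

# Commit to the procedure *before* flipping; the guarantee is about the procedure.
random.seed(0)
true_p = 0.3  # unknown in reality; used here only to simulate flips
flips = [random.random() < true_p for _ in range(100)]
lo, hi = hoeffding_interval(flips, alpha=0.01)
print(f"99% interval after 100 flips: [{lo:.3f}, {hi:.3f}]")
```

The price of the "regardless of the true value" guarantee is visible in the width: at n = 100 and α = 0.01 the half-width is about 0.163, much wider than a typical Bayesian credible interval.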
Now let’s say I had a complex network of conditional probability distributions with a bunch of parameters subject to Knightian uncertainty. Getting confidence regions will be extremely expensive, and they’ll probably be way too huge to be useful. So we put a convenient prior on them and go.
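For contrast, the Bayesian shortcut in the single-coin case is a one-line conjugate update; the uniform Beta(1,1) prior below is my stand-in for "a convenient prior," and the flip counts are hypothetical:

```python
def beta_posterior(heads, tails, prior_a=1.0, prior_b=1.0):
    """Conjugate update: Beta(a, b) prior + binomial data -> Beta(a + heads, b + tails)."""
    return prior_a + heads, prior_b + tails

a, b = beta_posterior(heads=62, tails=38)  # hypothetical 100-flip outcome
print(f"Posterior: Beta({a}, {b}), mean = {a / (a + b):.3f}")
```

Unlike the frequentist interval, any credible statement read off this posterior depends on the prior you chose, which is exactly the trade being described.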
ETA: Randomized complexity classes also feel fundamentally frequentist.