I don’t agree that I am making unwarranted assumptions; I think what you call “assumptions” are merely observations about the meanings of words. I agree that it is hard to program an AI to determine who the “he”s refer to, but I think as a matter of fact the meanings of those words don’t allow for any other possible interpretation. It’s just hard to explain to an AI what the meanings of words are. Anyway I’m not sure if it is productive to argue this any further as we seem to be repeating ourselves.
No, because John could be speaking about himself administering the medication.
If it’s about John administering the medication then you’d have to say “… he refused to let him”.
It’s also possible to refuse to do something you’ve already acknowledged you should do, so the third “he” could still refer to John regardless of who is being told what.
But the sentence did not claim John merely acknowledged that he should administer the medication, it claimed John was the originator of that statement. Is John supposed to be refusing his own requests?
John told Mark that he should administer the medication immediately because he was in critical condition, but he refused.
Wait, who is in critical condition? Which one refused? Who’s supposed to be administering the meds? And administer to whom? Impossible to answer without additional context.
I don’t think the sentence is actually as ambiguous as you’re saying. The first and third “he”s both have to refer to Mark, because you can only refuse to do something after being told you should do it. Only the second “he” could be either John or Mark.
Early discussion of AI risk often focused on debating the viability of various elaborate safety schemes humanity might someday devise—designing AI systems to be more like “tools” than “agents,” for example, or as purely question-answering oracles locked within some kryptonite-style box. These debates feel a bit quaint now, as AI companies race to release agentic models they barely understand directly onto the internet.
Why do you call current AI models “agentic”? It seems to me they are more like tool AI or oracle AI...
I am still seeing “succomb”.
In the long scale a trillion is 10^18, not 10^24.
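For reference, a sketch of the long-scale pattern (assuming the standard convention where the n-th -illion is 10^(6n)):
\[
\text{billion} = 10^{12},\qquad \text{trillion} = 10^{18},\qquad \text{quadrillion} = 10^{24},
\]
so 10^24 is the long-scale quadrillion, not trillion.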
I say “zero” when reciting phone numbers. Harder to miss that way.
I think you want to define to be true if is true when we restrict to some neighbourhood such that is nonempty. Otherwise your later example doesn’t make sense.
I noticed all the political ones were phrased to support the left-wing position.
This doesn’t completely explain the trick, though. In the step where you write f=(1-I)^{-1} 0, if you interpret I as an operator then you get f=0 as the result. To get f=Ce^x you need to have f=(1-I)^{-1} C in that step instead. You can get this by replacing ∫f by If+C at the beginning.
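To spell out the corrected derivation (just a sketch, where I denotes an antiderivative operator, so I^n applied to a constant C gives Cx^n/n!):
\begin{align*}
f &= If + C && \text{(keeping the constant of integration)}\\
(1-I)f &= C\\
f &= (1-I)^{-1}C = \sum_{n=0}^{\infty} I^n C = C\sum_{n=0}^{\infty}\frac{x^n}{n!} = Ce^{x}.
\end{align*}
Dropping the constant instead gives f=(1-I)^{-1} 0 = 0, the degenerate solution.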
If you find yourself thinking about the differences between geometric expected utility and expected utility in terms of utility functions, remind yourself that, for any utility function, one can choose *either* averaging method.
No, you can only use the geometric expected utility for nonnegative utility functions.
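A sketch of why, assuming “geometric expected utility” here means the probability-weighted geometric mean over outcomes:
\[
\mathrm{GEU}(u) \;=\; \prod_i u_i^{\,p_i} \;=\; \exp\Big(\sum_i p_i \ln u_i\Big),
\]
which leaves the reals (the logarithm is undefined) as soon as some outcome has u_i < 0, whereas the ordinary expectation is defined for any utilities.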
It’s obvious to us that the prompts are lying; how do you know it isn’t also obvious to the AI? (To the degree it even makes sense to talk about the AI having “revealed preferences”)
Calvinists believe in predestination, not Protestants in general.
Wouldn’t that mean every sub-faction recursively gets a veto? Or do the sub-faction vetoes only allow the sub-faction to veto the faction’s veto, rather than the original legislation? The former seems unwieldy, while the latter seems to contradict the original purpose of DVF...
(But then: aren’t there zillions of Boltzmann brains with these memories of coherence, who are making this sort of move too?)
According to standard cosmology, there are also zillions of actually coherent copies of you, and the ratio is heavily tilted towards the actually coherent copies under any reasonable way of measuring. So I don’t think this is a good objection.
“Only food that can be easily digested will provide calories”
That statement would seem to also be obviously wrong. Plenty of things are ‘easily digested’ in any reasonable meaning of that phrase, while providing ~0 calories.
I think you’ve interpreted this backwards; the claim isn’t that “easily digested” implies “provides calories”, but rather that “provides calories” implies “easily digested”.
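Spelled out, writing D for “easily digested” and C for “provides calories” (labels mine), the quoted sentence has the form “only D will C”, i.e.
\[
C \Rightarrow D \qquad\text{rather than}\qquad D \Rightarrow C,
\]
so easily digested foods with ~0 calories are not counterexamples; a counterexample would be a calorie-providing food that is hard to digest.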
In constructivist logic, proof by contradiction must construct an example of the mathematical object which contradicts the negated theorem.
This isn’t true. In constructivist logic, if you are trying to disprove a statement of the form “for all x, P(x)”, you do not actually have to find an x such that P(x) is false—it is enough to assume that P(x) holds for various values of x and then derive a contradiction. By contrast, if you are trying to prove a statement of the form “there exists x such that P(x) holds”, then you do actually need to construct an example of x such that P(x) holds (in constructivist logic at least).
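In schematic form (a standard rendering, not tied to any particular formal system):
\begin{align*}
\text{to prove } \neg\forall x\,P(x)&:\ \text{assume } \forall x\,P(x) \text{ and derive } \bot \quad(\text{since } \neg Q :\equiv Q \to \bot),\\
\text{to prove } \exists x\,Q(x)&:\ \text{exhibit a witness } t \text{ together with a proof of } Q(t),
\end{align*}
and constructively refuting a universal statement does not in general yield a witness to its negation.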
Just a technical point, but it is not true that most of the probability mass of a hypothesis has to come from “the shortest claw”. You can have lots of longer claws which together have more probability mass than a shorter one. This is relevant to situations like quantum mechanics, where the claw first needs to extract you from an individual universe of the multiverse, and that costs a lot of bits (more than just describing your full sensory data would cost), but from an epistemological point of view there are many possible such universes that you might be a part of.
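A toy calculation of how the longer claws can dominate (numbers purely illustrative, assuming the usual 2^(-length) weighting):
\[
\underbrace{2^{-k}}_{\text{one claw of length }k} \;<\; \underbrace{1000\cdot 2^{-(k+5)}}_{\text{1000 distinct claws of length }k+5} \;=\; 31.25\cdot 2^{-k}.
\]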
As I understood it, the whole point is that the buyer is proposing C as an alternative to A and B. Otherwise, there is no advantage to him downplaying how much he prefers A to B / pretending to prefer B to A.
Axioms are only “true” or “false” relative to a model. In some cases the model is obvious, e.g. the intended model of Peano arithmetic is the natural numbers. The intended model of ZFC is a bit harder to get your head around. Usually it is taken to be defined as the union of the von Neumann hierarchy over all “ordinals”, but this definition depends on taking the concept of an ordinal as pretheoretic rather than defined in the usual way as a well-founded totally ordered set.
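For concreteness, the hierarchy in question is the usual transfinite recursion:
\[
V_0=\emptyset,\qquad V_{\alpha+1}=\mathcal{P}(V_\alpha),\qquad V_\lambda=\bigcup_{\alpha<\lambda}V_\alpha\ \text{ for limit }\lambda,\qquad V=\bigcup_{\alpha}V_\alpha.
\]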
An axiom system is consistent if and only if it has some model, which may not be the intended model. So there is a meaningful distinction, but the only way you can interact with that distinction is by finding some way of distinguishing the intended model from other models. This is difficult.
The models that appear in the multiverse approach are indeed models of your axiom system, so it makes perfect sense to talk about them. I don’t see why this would generate any contradiction with also being able to talk about a canonical model.
Independence results are only about what you can prove (or equivalently what is true in non-canonical models), not about what is true in a canonical model. So I don’t see any difficulty to be reconciled.