What does it say about me that I mentally weighed “Highly Advanced” on one scale pan and “101” and “for Beginners” on the other?
I would have inverted the colours in the “All possible worlds” diagram (but with a black border around it) -- light-on-black reminds me of stars, and thence of the spatially-infinite-universe-including-pretty-much-anything idea, which is not terribly relevant here, whereas a white ellipse with a black border reminds me of a classical textbook Euler-Venn diagram.
an infinite family of truth-conditions:
• The sentence ‘snow is white’ is true if and only if snow is white.
• The sentence ‘the sky is blue’ is true if and only if the sky is blue.
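(For reference, every bullet in that list is an instance of Tarski’s T-schema; one minimal way to write the general form, with P standing in for an arbitrary sentence, is roughly:)

```latex
% Tarski's T-schema: each truth-condition above is obtained by
% substituting a concrete sentence for P.
\text{``}P\text{'' is true} \iff P
```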
What does it say about me that I immediately thought ‘what about sentences whose meaning depends on the context’? :-)
What does it say about me that, on seeing the right-side part of the picture just above the koan, my System 1 expected to see infinite regress and was disappointed when the innermost frame didn’t include a picture of the guy, and that my System 2 then thought ‘what kind of issue that EY is neglecting does this correspond to’?
my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.
What does it say about me that I immediately thought ‘what about placebo and stuff’ (well, technically it’s aliefs that matter there, not beliefs, but not all of the readers will know the distinction)?
What does it say about me that I immediately thought ‘what about placebo and stuff’
Your beliefs about the functionality of a “medicine,” and the parts of your physiology that make the placebo effect work, are both part of reality. Your beliefs can, in a few (really annoying!) cases, affect their own truth or falsity, but whenever this happens there’s a causal chain leading from the neural structure in your head to the part of reality in question that’s every bit as valid as the causal chain in the shoelace example.
I think that if you’re human, these cases are way more common than certain people seem to realize. So in such discussions I’d always make clear whether I’m talking about actual humans, about future AIs, or about idealized Cartesian agents whose cognitive algorithms cannot affect the world in any way, shape, or form until the agent acts on its beliefs.
Can I have a couple of examples other than the placebo effect, preferably with at most one of them in the class “confidence that something will work makes you better at it”? Partly because it’s useful to ask for examples, partly because it sounds useful to know about situations like this.
Actually, pretty much all I had in mind was in the class “confidence that something will work makes you better at it”—but looking up “Self-fulfilling prophecy” on Wikipedia reminded me of the Observer-expectancy effect (incl. the Clever Hans effect and similar). Some of Bostrom’s information hazards also are relevant.
what about sentences whose meaning depends on the context
Ehn, the truth value depends on context too. “That girl over there heard what this guy just said” is true if that girl over there heard what this guy just said, false if she didn’t, and meaningless if there’s no girl or no guy or he didn’t say anything.
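(A toy sketch of that point, purely illustrative and with made-up names and a made-up context structure: the same sentence comes out true, false, or ‘meaningless’ depending on what the context supplies.)

```python
from typing import Optional

def girl_heard_guy(context: dict) -> Optional[bool]:
    """Truth value of 'That girl over there heard what this guy just said',
    evaluated against a context.  Returns None ('meaningless') when the
    sentence fails to refer: no girl, no guy, or nothing was said."""
    girl = context.get("girl")
    guy = context.get("guy")
    utterance = context.get("utterance")
    if girl is None or guy is None or utterance is None:
        return None                            # presupposition failure
    return utterance in girl["things_heard"]   # plain true/false otherwise

# Same sentence, three contexts, three verdicts.
print(girl_heard_guy({"girl": {"things_heard": {"hi"}}, "guy": {}, "utterance": "hi"}))  # True
print(girl_heard_guy({"girl": {"things_heard": set()}, "guy": {}, "utterance": "hi"}))   # False
print(girl_heard_guy({}))                                                                 # None
```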
what kind of issue that EY is neglecting does this correspond to
Common knowledge, in general?
I was thinking more about stuff like, “but reality does also include my map, so a map of reality ought to include a map of itself” (which, as you mentioned, is related to my point about placebo-like effects).
Beliefs are a strict subset of reality.