I think that if you’re human, these cases are way more common than ISTM certain people realize. So in such discussions I’d always make clear if I’m talking about actual humans, about future AIs, or about idealized Cartesian agents whose cognitive algorithms cannot affect the world in any way, shape or form until they act on them.
Can I have a couple examples other than the placebo effect? Preferably only one of which is in the class “confidence that something will work makes you better at it”? Partly because it’s useful to ask for examples, partly because it sounds useful to know about situations like this.
Actually, pretty much all I had in mind was in the class “confidence that something will work makes you better at it”—but looking up “Self-fulfilling prophecy” on Wikipedia reminded me of the Observer-expectancy effect (incl. the Clever Hans effect and similar). Some of Bostrom’s information hazards also are relevant.