(I have successfully done Unbendable Arm after Valentine showed me in person, without explaining any of the biomechanics. My experience of it didn’t involve visualization, but felt like placing my fingertips on the wall across the room and resolving that they’d stay there. Contra jimmy’s comment, IIRC I initially held my arm wrong without any cueing.)
Strongly related: Believing In. From that post:
> My guess is that for lack of good concepts for distinguishing “believing in” from deception, LessWrongers, EAs, and “nerds” in general are often both too harsh on folks doing positive-sum “believing in,” and too lax on folks doing deception. (The “too lax” happens because many can tell there’s a “believing in”-shaped gap in their notions of e.g. “don’t say better things about your start-up than a reasonable outside observer would,” but they can’t tell its exact shape, so they loosen their “don’t deceive” in general.)
I feel like this post is similarly too lax on, not deception, but propositional-and-false religious beliefs.
That article is sloppy enough to claim that “Early testers report that the AI [i.e. o3 and/or o4-mini] can generate original research ideas in fields like nuclear fusion, drug discovery, and materials science; tasks usually reserved for PhD-level experts” while citing, as its source, OpenAI’s January release announcement of o3-mini.
TechCrunch attributes the rumor to a paywalled article in The Information (and attributes the price to specialized agents, not o3 or o4-mini themselves).