PCs are also systems; they’re just systems with a stronger heroic responsibility drive. On the other hand, when you successfully do things and I couldn’t predict exactly how you would do them, I have no choice but to model you as an ‘intelligence’. But that’s, well… really rare.
I guess for me it’s not incredibly rare that people successfully do things and I can’t predict exactly how they would do them. It doesn’t seem to be the main distinction that my brain uses to model PC-ness versus NPC-ness, though.
I find this comment...very, very fun, and very, very provocative.
Are you up for—in a spirit of fun—putting it to the test? Like, people could suggest goals whose successful completion would potentially label them as “an Intelligence” according to Eliezer Yudkowsky—and then you would outline how you would do it? And if you either couldn’t predict the answer, or we did it in a way different enough from your predictions (as judged by you!), we’d get bragging rights thereafter. (So for instance, we could put in email sigs, “An intelligence, certified by Eliezer Yudkowsky.” That kind of thing.)
A few goals right off the top of my head:
Raise $1000 for MIRI or CFAR
Get a celebrity to rec HPMOR, MIRI or CFAR (the term “celebrity” would require definition)
Convince Eliezer to change his mind on any one topic of significance (as judged by himself)
Solve any “open question” that EY cares to list (again, as judged by himself—I know that “how to lose weight” is such a question, and presumably there are others)
Basically the idea is that we get to posit things we think we know how to do and you don’t… and you get to posit things that you don’t know how to do but would like to… and then if we “win” we get bragging rights.
There’s pretty obviously some twisted incentives here (mostly in your favor!) but we’ll just have to assume that you’re a man of honor. And by “a man of honor” I mean “a man whose reputation is worth enough that he won’t casually throw a match.”
I dunno, does that sound fun to anybody else?
Do you mean to say that you can generally predict not only what person A will do but precisely how they will do it? Or do you mean that if a person succeeds then you are unsurprised by how they did it, but if they fail or do something crazy you aren’t any better than other people at prediction? Either way I would be interested in hearing more about how you do that.
Since I’ve been teaching I’ve gotten much better at modeling other people—you might say I’ve gotten a hefty software patch to my Theory of Mind. Because I mostly interact with children, that’s what I am calibrated to, but adults have also gotten much less surprising. I attribute my earlier problems mostly to lack of experience and to simply not trying very hard to model people’s motivations or predict their behavior.
Further, I’ve come to realize how important these skills are, and I aspire to reaching Quirrellesque heights of other-modeling. Some potential ways to improve theory of mind:
Study the relevant psychology/neuroscience.
Learn acting.
Carefully read fiction which explores psychology and behavior in an in-depth way (Henry James?). Plays might be even better for this, as you’d presumably have to fill in a lot of the underlying psychology on your own. In conjunction with acting this would probably be even more powerful. You could even go as far as to make bets on what characters will do so as to better calibrate your intuitions (a rough scoring sketch follows below this list).
Write fiction which does the same.
Placing bets could be extended to real groups of people, though you might not want to let anyone know you were doing this because they might think it’s creepy and it could create a kind of anti-induction.
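A minimal sketch of how that kind of bet-keeping could be scored, in Python. Everything in it (the Bet record, the example entries, the numbers) is a hypothetical illustration of the idea, not something anyone in the thread actually proposed:

from dataclasses import dataclass

@dataclass
class Bet:
    description: str    # what you predicted, written down before the fact
    probability: float  # your credence that it would happen, from 0.0 to 1.0
    came_true: bool     # filled in afterwards

def brier_score(bets):
    # Mean squared error of the stated probabilities: 0.0 is perfect,
    # and always saying 50% scores 0.25, so lower is better.
    return sum((b.probability - b.came_true) ** 2 for b in bets) / len(bets)

# Hypothetical example entries.
bets = [
    Bet("Character X confronts Y in the next chapter", 0.8, True),
    Bet("Friend A defuses the argument with a joke", 0.6, False),
    Bet("Colleague B proposes splitting the project in two", 0.7, True),
]

print(f"Brier score over {len(bets)} bets: {brier_score(bets):.3f}")

With enough entries you could also bin the bets by stated probability and compare each bin’s stated probability to its observed frequency, which is the more direct check of calibration.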
That sounds like a very useful sequence.
If you regularly associate with people of similar intelligence, how rare can that be? Even if you are the smartest person you know (unlikely considering the people you know, some of whom exceed your competence in mathematics and philosophy), anyone with more XP in certain areas would behave unpredictably in said areas, even if they had a smaller initial endowment. My guess is your means-prediction lobe is badly calibrated because after the fact you say to yourself, “I would have predicted that.” This could be easily tested.
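One way that test could go, sketched below in Python with entirely made-up numbers: before each impressive feat, write down how you think the person will pull it off; afterwards record whether that written-down prediction matched the actual method, and separately whether it felt like “I would have predicted that.” A large gap between the two rates is what a badly calibrated means-prediction lobe would look like.

# Hypothetical records of the proposed test. Each entry is one observed feat:
# (the pre-registered prediction of the method matched what actually happened,
#  it felt predictable in hindsight).
records = [
    (True,  True),
    (False, True),
    (False, True),
    (True,  True),
    (False, False),
]

n = len(records)
prospective_rate = sum(pre for pre, _ in records) / n
hindsight_rate = sum(post for _, post in records) / n

print(f"Pre-registered predictions that matched: {prospective_rate:.0%}")
print(f"Felt predictable in hindsight:           {hindsight_rate:.0%}")
# With these made-up numbers: 40% prospective vs. 80% in hindsight.
# With more than a handful of records you'd want a real statistical test.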
Intelligent people only rarely tackle problems where it stretches the limit of their cognitive abilities to (predict how to) solve them. Thus, most of my exposure to this comes by way of, e.g., watching mathematicians at decision theory workshops prove things in domains where I am unfamiliar—then they can exceed my prediction abilities even when they are not tackling a problem which appears to them spectacularly difficult.
Where the task they are doing has a skill requirement that you do not meet, you cannot predict how they will solve the problem.
Does that sound right? It’s more obvious that the prediction is hard when the decision is “fake-punt, run the clock down, and take the safety instead of giving them the football with so much time left” than when it’s a physical feat. Purely mental feats are a different kind of different.
My scepticism depends on how detailed your predictions are, though your fiction/rhetorical abilities likely stem in part from unusually good person-modelling abilities. Do you find yourself regularly and correctly predicting how creative friends will navigate difficult social situations or witty conversations, e.g., guessing punchlines to clever jokes, or predicting the course of a status game?
I may be confused about the “resolution” of your predictions. Suppose you were trying to predict how intelligent person X will seduce intelligent person Y. If you said, “X will appeal to Y’s vanity and then demonstrate social status,” I feel that kind of prediction is pretty trivial. But predicting more exactly how X would do this seems vastly more difficult. How would you rate your abilities in this situation, if 1 equals predictions at the resolution of the given example and 10 equals “I could draw you a flow chart which will more-or-less describe the whole of their interaction”?
Relevant: an article that explains the Failed Simulation Effect, by Cal Newport.
I note that this suggests that an AI that was as smart as an average human, but also as agenty as an average human, would still seem like a rather dumb computer program (it might be able to solve your problems, but it would suffer akrasia just like you would in doing so). The cyberpunk ideal of the mobile exoself AI-agent, Getting Things Done for you without supervision, would actually require something far beyond the equivalent of an average human to be considered “competent” at its job.
Of course, for mere mortals, it’d be somewhat less rare …
Woah, I bet that’s where the whole “anyone more than a certain amount smarter than me is simply A Smart Person” phenomenon comes from.