This isn’t exactly what you’re asking for, but I doubt there is a P(personality type | trait) table anywhere. You’re talking about a high-dimensional space, and a single trait doesn’t have much predictive power in isolation.
If I had enough data points of people’s personality traits, I could feed them into something like Weka, look for empirical clusters (using k-means, hierarchical clustering, and so forth), then train a number of classifiers to sort individual people into those clusters given a limited number of personality trait observations.
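For concreteness, here’s a minimal sketch of the clustering step in Python with scikit-learn rather than Weka. The `traits` array is a random placeholder for real survey data, and five clusters is an arbitrary assumption you’d want to tune (e.g. by silhouette score):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
traits = rng.normal(size=(500, 10))  # placeholder: 500 people, 10 trait scores each

# Look for empirical clusters; k=5 is a guess, not a known number of "types"
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
cluster_labels = kmeans.fit_predict(traits)
```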
There are all sorts of forms these classifiers could take. You could do the same sort of thing wedrifed is thinking of: assume that traits are independent and use the P(personality type | trait) values that have the most predictive power to classify a person. This would be a naive Bayes classifier, of the sort that’s fantastically effective at spam filtering.
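Continuing the sketch above, and treating the cluster labels as the “personality types”: GaussianNB is just one naive Bayes variant (it additionally assumes each trait is normally distributed within a type), used here purely for illustration:

```python
from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()  # assumes traits are conditionally independent given the type
nb.fit(traits, cluster_labels)

new_person = rng.normal(size=(1, 10))  # made-up trait scores for one person
print(nb.predict_proba(new_person))    # posterior P(type | observed traits)
```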
If you wanted to make something simpler—perhaps something you could print out as a handy pocket guide to classifying people—you could use a decision tree. That’s like a precomputed strategy for playing 20 questions, where you only ask questions whose answers pay rent. It’s approximate, but it can work surprisingly well. A related method is to build several randomized decision trees and have them vote; that’s essentially a random forest.
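Both fit naturally into the same sketch; `export_text` gives you the printable 20-questions version of the tree:

```python
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

# A shallow tree: at most three questions before it commits to a type
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(traits, cluster_labels)
print(export_text(tree))  # the pocket-guide version, as nested if/else rules

# Many randomized trees voting on the answer
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(traits, cluster_labels)
```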
Of course, once you build a classifier, that’s a hypothesis about some structure in reality. You need to test that hypothesis before you rush forth and start putting your trust in it. For that, you can hold some of the data in reserve and see how a classifier built from the rest of the data performs on it. If you break your data up into n groups and take turns letting each group be the testing data set (n-fold cross-validation), this can tell you whether your general method for generating classifiers is working for this data set.
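Continuing the sketch, that rotation is one call in scikit-learn; `cross_val_score` refits a fresh copy of the classifier on each training split and reports the held-out accuracies:

```python
from sklearn.model_selection import cross_val_score

# Five folds: each takes a turn as the reserved test set
scores = cross_val_score(forest, traits, cluster_labels, cv=5)
print(scores.mean(), scores.std())
```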
Of course, this is all terribly ad hoc, but the ideal Bayesian approach is hard to compute here, and these hacks often work surprisingly well.