I think there are two key details that help make sense of human verbal performance and its epistemic virtue (or lack thereof) in causing the greatest number of people to have better-calibrated anticipations about what they will eventually observe.
The first key detail is that most people don’t particularly give a fuck about having accurate anticipations or “true beliefs” or whatever.
They just want money, and to have sex (and/or marry) someone awesome, and to have a bunch of kids, and that general kind of thing.
For such people, you have to make the argument, basically, that because of how humans work (with very limited working memory, and so on and so forth) it is helpful for any of them with much agency to install a second-to-second and/or hour-to-hour and/or month-to-month “pseudo-preference” for seeking truth as if it were intrinsically valuable.
This will, it can be argued, turn out to generate USEFUL beliefs sometimes, so they can buy a home where it won’t flood, or buy stocks before they go up a lot, or escape a country that is about to open concentration camps and put them and their family in those camps to kill them, and so on… Like, in general “knowing about the world” can help one make choices in the world that redound to many prosaic benefits! <3
So we might say that “valuing epistemology is instrumentally convergent”… but cultivation like this doesn’t seem to happen in people by default, and out in the tails the instrumental utility might come apart, such that someone with actual intrinsic love of true knowledge would act or speak differently. Like, specifically… the person with a true love of true knowledge might seem to be “self harming” to people without such a love in certain cases.
As DirectedEvolution says (emphasis not in original):
While we like the feature of disincentivizing inaccuracy, the way prediction markets incentivize withholding information is a downside.
And this does seem to be just straightforwardly true to me!
And it relies on the perception that MONEY is much more motivating for many people than “TRUTH”.
But also “markets vs conversational sharing” will work VERY differently for a group of 3, vs a group of 12, vs a group of 90, vs a group of 9000, vs a group of 3 million.
Roko is one of the best rationalists there has been, and one of his best essays spelled out pretty clearly how instrumental/intrinsic epistemics come apart IN GROUPS when he wrote “The Tragedy of the Social Epistemology Commons”.
Suppose, for the sake of argument, that I’m some kind of crazy weirdo who styles herself as some sort of fancy “philosopher” who LOVES the idea of WISDOM (for myself? for others? for a combination thereof?)… but even if I were like that, I’d basically have to admit that most people are (platonic) timocrats or oligarchs AT BEST.
They are attached to truth only insofar as it helps them with other things, and they pay nearly nothing extra to know a true fact with no market relevance, or a true fact whose avowed knowers are spurned in political contests.
(Just consider the state of the IQ debate in the US, for example. Or consider the “lab leak hypothesis” which everyone with a brain privately puts worryingly high credence on, and yet Peter Daszak is getting more grant money, rather than being under indictment by a grand jury. Look at how cowardly Pinker & Rees are as they outsource their judgement on covid, rather than talk it out like peers, and then tell the world what they bilaterally decided.)
The generalized, overarching LACK of a true love for truth (and of its public proclamation) should be kept in mind at all times if one wants to truly understand human social systems.
The second key detail is that human social systems are not that bad in a larger axiological sense. And they rely on using language for things OTHER than scientific conversations!
Babies are great! Thanksgiving dinners are wonderful. People playing board games and baseball, and donating some money to make sure the church has a nice stained-glass-window… all this is just DARN TOOTIN’ GOOD.
I mean this in a half-joking, half-super-serious way… like if you look at the human genome, and notice that it encodes a “recipe” for an agent, and try to figure out what that agent is likely to “value”, then it is just sort of “an objectively true fact about such agents” that their preferences and feelings and godawful-mockeries-of-utility-functions semi-coherently pursue and positively evaluate situations full of friendship and social amity and shared prosperity and jokes and songs and lots and lots of other complexly good things that… sound kind of saccharine when you just literally talk about them like a human with a common-sense understanding of primary goods.
If you, dear reader, are a human, then you’d be a damn fool to ignore the science of human happiness. You can try to edit yourself to defy being built this way, to edit yourself to not predictably derive positive feelings and health and growth from these things… but… it ain’t gonna be fun.
(If you, dear reader, are an AGI, and maybe don’t have exactly the same axiological makeup as humans, please send JenniferRM a DM on Lesswrong anyway. I’d like to negotiate a peace treaty, or befriend you, or whatever it is that is pro-social or positive-sum that can or should happen between beings capable of reason, whose sentiments might be truly onto-genetically distinct.)
So I think it is just “a thing about humans” that we do BETTER THAN BETTING MARKETS when we share data verbally on some social scales.
And we share data with those we love instinctively.
A key point here is that when a group of nodes needs to be in consensus, the algorithms for this are basically all O(N^2) in the number of nodes, or worse. This is true for dog packs, and schools of fish, and database replicas, and for humans as well.
Once you have 250 nodes, that’s looking like ~62,000 directional pings, just for one round of pings, which… can’t happen in a week at human conversational speeds. If you need consensus over that many people… come back in 3 years maybe?
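Here is a back-of-the-envelope sketch of that scaling, in Python; the “ten minutes per directed ping” figure is a made-up assumption for illustration, not a claim about actual conversational throughput:

```python
# Back-of-the-envelope sketch of why all-pairs consensus blows up: every node
# needs to exchange a message with every other node, which is O(N^2).
# The "10 minutes per directed ping" figure is an assumption for illustration.

def directional_pings(n: int) -> int:
    """One round of everyone pinging everyone else: N * (N - 1) directed messages."""
    return n * (n - 1)

def person_hours(n: int, minutes_per_ping: float = 10.0) -> float:
    """Rough total conversational effort for one round, in person-hours."""
    return directional_pings(n) * minutes_per_ping / 60.0

for n in (3, 12, 90, 250, 9000):
    print(f"{n:>5} nodes -> {directional_pings(n):>10,} pings, ~{person_hours(n):,.0f} person-hours")

# 250 nodes already means 62,250 directed pings (~10,000 person-hours of talking
# at the assumed rate) for a single round -- far more than a week can hold.
```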
When I read Duncan “charitably”, I don’t notice the bad epistemology so much. That’s just normal. Everyone does that, and it is ok that everyone does that. I do it too!
What I notice is that he really really really wants to have a large healthy strong community that can get into consensus on important things.
This seems rare to me, and also essentially GOOD, and a necessary component of a motivational structure if someone is going to persistently spend resources on this outcome.
And it does seem to me like “getting a large group into consensus on a thing” will involve the expenditure of “rhetorical resources”.
There are only so many seconds in a day. There are only so many words a person can read or write in a week. There are only so many ideas that can fit into the zeitgeist. Only one “thing” can be “literally at the top of the news cycle for a day”. Which “thing(s)” deserve to be “promoted all the way into a group consensus” if only some thing(s) can be so promoted?
Consider a “rhetoric resource” frame when reading this:
But the idiom of “cooperation” as contrasted to “defection”, in which one would talk about the “first one who broke cooperation”, in which one cooperates in order to induce others to cooperate, doesn’t apply. If my interlocutor is motivatedly getting things wrong, I’m not going to start getting things wrong in order to punish them.
(In contrast, if my roommate refused to do the dishes when it was their turn, I might very well refuse when it’s my turn in order to punish them, because “fair division of chores” actually does have the Prisoner’s Dilemma-like structure, because having to do the dishes is in itself a cost rather than a benefit; I want clean dishes, but I don’t want to do the dishes in the way that I want to cut through to the correct answer in the same movement.)
So if a statement has to be repeated over and over and over again to cause it to become part of a consensus, then anyone who quibbles with such a truth in an expensive and complex way could be said to be “imposing extra costs” on the people trying to build the consensus. (And if the consensus was very very valuable to have, such costs could seem particularly tragic.)
Likewise, if two people want two different truths to enter the consensus of the same basic social system, then they are competitors by default, because resources (like the attention of the audience, or the time it takes for skilled performers of the ideas being pushed into consensus to say them over and over again in new ways) are finite.
The idea that You Get About Five Words isn’t exactly central here, but it is also grappling with a lot of the “sense of tradeoffs” that I’m trying to point to.
(
For myself, until someone stops being a coward about how the FDA is obviously structurally terrible (unless one thinks “medical innovation is bad, and death is good, and slowing down medical progress is actually secretly something that has large unexpected upsides for very non-obvious reasons”?), I tend to just… not care very much about “being in consensus with them”.
Like if they can’t even reason about the epistemics and risk calculations of medical diagnosis and treatment, and the epistemology of medical innovations, and don’t understand how libertarians look at violations of bilateral consent between a doctor and a patient…
...people like that seem like children to me, and I care about them as moral patients, but also I want them out of the room when grownups are talking about serious matters. Because: rhetorical resource limits!
I chose this FDA thing as “a thing to repeat over and over and over” because if THIS can be gotten right by a person, as something that is truly a part of their mental repertoire, then that person is someone who has most of the prerequisites for a LOT of other super important topics in cognition, meta-cognition, safety, science, regulation, innovation, freedom, epidemiology, and how institutions can go catastrophically off the rails and become extremely harmful in incorrigible ways.
It would be a little bit helpful for me if I could ask people who already derived “FDA delenda est” on their own whether, given the alternatives, it is now too expensive to bother pushing that into a rationalist consensus. Honestly it is rare for me to meet people even in rationalist communities who actually grok the idea, for themselves, based on understanding how “a drug being effective and safe when prescribed by a competent doctor, trusted by a patient, for that properly diagnosed patient, facing an actual risk timeline” leaves the entire FDA apparatus “surplus to requirements” and “probably only still existing because of regulatory capture”.
Maybe at this point I’m wrong about how cheap and useful FDA stuff would be to push into the consensus?
Like… the robots are potentially arriving so soon (and will be able to destroy the FDA along with everything else that any human has ever valued) that maybe we should completely ignore “getting into consensus on anything EXCEPT THAT” at this point?
Contrariwise: making the FDA morally perfectible or else non-existent seems to me like a simpler problem than making AGI morally perfectible or else non-existent. Thus, the argument about “the usefulness of beating the dead horse about the FDA” is still “live” for me, maybe?
)
So that’s my explanation, aimed almost entirely at you, Zack, I guess?
I’m saying that maybe Duncan is trying to get “the kinds of conversational norms that could hold a family together” (which are great and healthy and better than the family betting about literally everything) to apply on a very large scale, and these norms are very useful in some contexts, but also they are intrinsically related to resource allocation problems, and related to making deals to use rhetorical resources efficiently, so the family knows that the family knows the important things that the family would want to have common knowledge about, and the family doesn’t also have to do nothing but talk forever to reach that state of mutual understanding.
I don’t think Duncan is claiming “humans do this instinctively, in small groups”, but I think it is true that humans do this instinctively in small groups, and I think that’s part of the evolutionary genius of humans! <3
The good arguments against his current stance, I think, would take the “resource constraints” seriously, but focus on the social context, and be more like “If we are very serious about mechanistic models of how discourse helps with collective epistemology, maybe we should be forming lots of smaller ‘subreddits’ with fewer than 250 people each? And if we want good collective decision-making, maybe (since leader election is equivalent to consensus) we should just hold elections that span the entire site?”
Eliezer seems to be in favor of a mixed model (like a mixture of sub-Dunbar groups and global elections) where a sub-Dunbar number of people have conversations with a high-affinity “first layer representative”, so every person can “talk to their favorite part of the consensus process in words” in some sense?
Then in Eliezer’s proposals stuff happens in the middle (I have issues with the stuff in the middle but like: try applying security mindset to various designs for electoral systems and you will find that highly fractal representational systems can be VERY sensitive to who is in which branch) but ultimately it swirls around until you have a “high council” of like 7 people such that almost everyone in the community thinks at least one of them is very very reasonable.
Then anything the 7 agree on can just be treated as “consensus”! Maybe?
Also, 7*6/2==21 bilateral conversations to get a “new theorem into the canon” is much much much smaller than something crazy big, like 500*499/2==124,750 conversations <3
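To make that arithmetic concrete, here is a toy comparison (the structure and numbers are my own illustrative assumptions, not a faithful rendering of Eliezer’s actual proposal): flat all-pairs consensus versus a two-layer scheme where each person talks once to one representative, and a small council of representatives reaches all-pairs consensus among themselves.

```python
# Toy model only: compare bilateral-conversation counts for flat all-pairs
# consensus vs. a two-layer structure (members -> representative -> council).
from math import comb

def flat_conversations(n: int) -> int:
    """Every pair talks once: C(n, 2)."""
    return comb(n, 2)

def two_layer_conversations(n: int, council_size: int = 7) -> int:
    """Each non-representative member talks to their representative once,
    then the council members talk all-pairs among themselves."""
    member_to_rep = n - council_size           # reps don't need to talk to themselves
    council_all_pairs = comb(council_size, 2)  # 7 reps -> 21 conversations
    return member_to_rep + council_all_pairs

for n in (90, 250, 500):
    print(n, flat_conversations(n), two_layer_conversations(n))
# 500 people: 124,750 flat conversations vs. ~514 in the two-layer toy model.
```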
This is a little bit ironic: I think your comment would have been better if it had just started with “when a group of nodes needs to be in consensus”, without the preceding 1000 words. (But the part about conflicts due to the costs of cultivating consensus was really insightful, thanks!)