Regarding that sentence, I edited my comment at about the same time you posted this.
But a fully-and-justly-transparent world can still mean that fewer people think original or interesting thoughts, because doing so is too risky.
If someone taking a risk is good with respect to the social good, then the justice process should be able to see that they did that and reward them (or at least not punish them) for it, right? This gets easier the more information is available to the justice process.
So, much of my thread was responding to this sentence:
Implication: “judge” means to use information against someone.
The point being: you can have entirely positive judgment and still have it produce distortions. All that has to be true, for a fully transparent system to start producing warped incentives on which thoughts get thought, is that some forms of thought are more legibly good and therefore more rewarded.
I.e., say I have four options for what to think about today:
some random innocuous status-quo thought (which gets me neither rewarded nor punished)
some weird thought that seems kind of dumb, which most of the time is just evidence of being dumb, but which occasionally pays off with something creative and neat. (I'm not sure what kind of world we're stipulating here. In some just-worlds, this sort of thought gets punished, because it's usually dumb. In some just-worlds it gets rewarded, because everyone has cooperated on some kind of long-term strategy. In some just-worlds it's hit or miss, because there's a collection of people trying different strategies with their rewards.)
some heretical thought that seems actively dangerous, and only occasionally produces novel usefulness if I turn out to be really good at being contrarian.
a thought that is clearly, legibly good and almost certainly net positive, either by following well-worn paths or by being “creatively out of the box” in a set of ways that are known to have pretty good returns.
Even in one of the possible just-worlds, it seems like you're going to incentivize the last one much more than the second or third.
This isn’t that different from the status quo – it’s a hard problem that VC funders have an easier time investing in people doing something that seems obviously good, then someone with a genuinely weird, new idea. But I think this would crank that problem up to 11, even if we stipulate a just-world.
...
Most importantly: the key implication I believe in is that humans are not nearly smart enough at present to coordinate on anything like a just world, even if everyone were incredibly well intentioned. This whole conversation is in fact probably not possible for the average person to follow. (And this implication in this sentence right here right now is something that could get me punished in many circles, even by people trying hard to do the right thing, for reasons related to Overconfident talking down, humble or hostile talking up.)
Even in one of the possible just-worlds, it seems like you're going to incentivize the last one much more than the second or third.
This is not responsive to what I said! If you can see (or infer) the process by which someone decided to have one thought or another, you can reward them for doing things that have higher expected returns, e.g. having heretical thoughts when heresy is net positive in expectation. If you can’t implement a process that complicated, you can just stop punishing people for heresy, entirely ignoring their thoughts if necessary.
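As a sketch of what that could look like, assuming the justice process really can see the gamble someone chose rather than just the outcome (the probabilities and payoffs below are hypothetical):

```python
# Sketch of the counter-proposal: if the justice process can infer the
# *policy* behind a thought, it can reward expected value under that
# policy rather than the thought's surface legibility.
# Probabilities and payoffs are hypothetical.

def policy_reward(p_success: float, payoff_success: float,
                  payoff_failure: float) -> float:
    """Reward the chooser for the expected value of their gamble,
    not for how the particular draw happened to land."""
    return p_success * payoff_success + (1 - p_success) * payoff_failure

# A heretical research program: usually useless, occasionally a big win.
heresy = policy_reward(p_success=0.05, payoff_success=30.0, payoff_failure=-0.5)
# A safe, legible program: reliably modest.
safe   = policy_reward(p_success=0.95, payoff_success=1.0,  payoff_failure=0.0)

print(f"heresy policy reward: {heresy:+.3f}")  # +1.025
print(f"safe policy reward:   {safe:+.3f}")    # +0.950
# Under this rule, net-positive heresy is rewarded slightly *more*,
# so the incentive distortion disappears -- provided the process can
# actually see the decision process, which is the contested premise.
```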
the key implication I believe in is that humans are not nearly smart enough at present to coordinate on anything like a just world, even if everyone were incredibly well intentioned. This whole conversation is in fact probably not possible for the average person to follow.
Average people don't need to do it; someone needs to do it. The first target isn't "make the whole world just", it's "make some local context just". Actually, before that, it's "produce common knowledge in some local context that the world is unjust but that justice is desirable", which might actually be accomplished in this very thread; I'm not sure.
And this implication in this sentence right here right now is something that could get me punished in many circles, even by people trying hard to do the right thing.
Thanks for adding this information. I appreciate that you’re making these parts of your worldview clear.
This is not responsive to what I said! If you can see (or infer) the process by which someone decided to have one thought or another, you can reward them for doing things that have higher expected returns, e.g. having heretical thoughts when heresy is net positive in expectation.
This was most of what I meant to imply. I am mostly talking about rewards, not punishments.
I am claiming that rewards distort thoughts similarly to punishments, although somewhat more weakly because humans seem to respond more strongly to punishment than reward.
You’re continuing to miss the completely obvious point that a just process does no worse (in expectation) by having more information potentially available to it, which it can decide what to do with. Like, either you are missing really basic decision theory stuff covered in the Sequences or you are trolling.
(Agree that rewards affect thoughts too, and that these can cause distortions when done unjustly)
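(For reference: the decision-theory result being invoked here is standard for unboundedly rational agents. If observing a signal x is free, an expected-utility maximizer that conditions on x does at least as well as one that doesn't, because it can always fall back on the unconditioned action. As a sketch:

```latex
\mathbb{E}_x\!\left[\max_a \mathbb{E}[U(a)\mid x]\right]
\;\ge\; \max_a \mathbb{E}_x\!\left[\mathbb{E}[U(a)\mid x]\right]
\;=\; \max_a \mathbb{E}[U(a)]
```

The inequality holds because the left side may pick a different action for each x while the right side commits to a single action. The catch, which the rest of this thread turns on, is that it assumes observation and optimization are costless.)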
Yes, I disagree with that point, and I feel like you've been missing the completely obvious point that bounded agents have limited capabilities.
Choices are costly.
Choices are really costly.
Your comments don't seem to be acknowledging that, so from my perspective you seem to be describing an Impossible Utopia (capitalized because I intend to write a post that encapsulates the concept of Which Utopias Are Possible), and so it doesn't seem very relevant.
(I recall claims on LessWrong that a decision process can do no worse with more information, but I don't recall a compelling case that this was true for bounded human agents. Though I am interested in whether you have a post that responds to Zvi's claims in the Choices are Bad series, and/or a post that articulates what exactly you mean by "just", since it sounds like you're using it as a jargon term that's meant to encapsulate more information than I'm receiving right now.)
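A minimal simulation of the bounded-agent objection (the noise and cost parameters are invented): each option costs something to evaluate, evaluations are noisy, and past a small number of options the evaluation costs outpace the gains from choosing among more of them.

```python
import random
import statistics

random.seed(0)

def net_value(n_options: int, noise: float = 2.0, cost: float = 0.1,
              trials: int = 50_000) -> float:
    """A bounded agent sees a noisy estimate of each option, pays `cost`
    per option examined, and keeps the true value of the apparent best."""
    results = []
    for _ in range(trials):
        true_vals = [random.gauss(0, 1) for _ in range(n_options)]
        estimates = [t + random.gauss(0, noise) for t in true_vals]
        best = max(range(n_options), key=lambda i: estimates[i])
        results.append(true_vals[best] - cost * n_options)
    return statistics.fmean(results)

for n in (1, 2, 5, 10, 20):
    print(f"{n:2d} options: average net value {net_value(n):+.3f}")

# In a typical run, net value climbs from about -0.10 with one option to a
# small positive peak around two or three options, then falls steeply:
# evaluation costs grow linearly with the information considered, while the
# value of the apparent best grows only logarithmically and is eroded by noise.
```

An unboundedly rational agent can ignore the extra options for free; the `cost` parameter is exactly the assumption that removes that free disposal.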
I’ve periodically mentioned that my arguments about “just worlds implemented on humans”. “Just worlds implemented on non-humans or augmented humans” might be quite different, and I think it’s worth talking about too.
But the topic here is legalizing blackmail in a human world. So it matters how this will be implemented on median humans, who are responsible for most actions.
Notice that in this conversation, where you and I are both smarter than average, it is not obvious to both of us what the correct answer is, and we have spent some time arguing about it. When I imagine the average human town, or company, or community, attempting to implement a just world that includes blackmail and full transparency, I imagine either a) lots more time being spent trying to figure out the right answer, or b) people getting wrong answers all the time.
The two posts you linked are not even a little relevant to the question of whether, in general, bounded agents do better or worse by having more information. (Yes, choice paralysis might make some information about what choices you have costly, but more info also reduces choice paralysis by increasing certainty about how good the different options are, and the posts make no claim about the overall direction of info being good or bad for bounded agents.) To avoid feeding the trolls, I'm going to stop responding here.
I’m not trolling. I have some probability on me being the confused one here. But given the downvote record above, it seems like the claims you’re making are at least less obvious than you think they are.
If you value those claims being treated as obvious-things-to-build-off-of by the LW commentariat, you may want to expand on the details or address confusions about them at some point.
But I do think it is generally important for people to be able to tap out of conversations whenever the conversation seems low value, so it seems reasonable for this thread to terminate.
I have some probability on me being the confused one here.
In conversations like this, both sides are confused; that is, neither understands the other's point. So "who is the confused one" is already an incorrect framing. One of you may be factually correct, but that doesn't really matter for making a conversation work; understanding each other is more relevant.
(In this particular case, I think both of you are correct and fail to see what the other means, but Jessica’s point is harder to follow and pattern-matches misleading things, hence the balance of votes.)
(I downvoted some of Jessica's comments, mostly in cases where I thought she was not putting in a good-faith effort to understand what her interlocutor was trying to say, like her comment upstream in the thread. Saying that talking to someone is equivalent to feeding trolls is rarely a good move, and seems particularly bad in situations where you are talking about highly subjective and fuzzy concepts. I upvoted all of her comments that actually made points without dismissing other people's perspectives. So in my case, I don't think the voting patterns are a result of her ideas being harder to follow; they're more the result of me perceiving her to be violating certain conversational norms.)
In conversations like this, both sides are confused,
Nod. I did actually consider a more accurate version of the comment that said something like “at least one of us is at least somewhat confused about something”, but by the time we got to this comment I was just trying to disengage while saying the things that seemed most important to wrap up with.
Nod. I did actually consider a more accurate version of the comment that said something like “at least one of us is at least somewhat confused about something” [...]
The clarification doesn’t address what I was talking about, or else disagrees with my point, so I don’t see how that can be characterised with a “Nod”. The confusion I refer to is about what the other means, with the question of whether anyone is correct about the world irrelevant. And this confusion is significant on both sides, otherwise a conversation doesn’t go off the rails in this way. Paying attention to truth is counterproductive when intended meaning is not yet established, and you seem to be talking about truth, while I was commenting about meaning.
Hmm. Well I am now somewhat confused what you mean. Say more? (My intention was for ‘at least one of us is confused’ to be casting a fairly broad net that included ‘confused about the world’, or ‘confused about what each other meant by our words’, or ‘confused… on some other level that I couldn’t predict easily.’)