Educated as an engineer, I’ve been a product designer & product design manager, a factory & tool room manager, a logistics & supply chain consultant, a programmer & software consultancy manager, and a director of a handful of startups. I’m one of the Bellroy founders & executive team members.
We humans don’t come with instruction manuals, but if we did they’d have to mention: systematic habit acquisition, conceptual model building, polyphasic sleep, speed reading, optimised exercise & diet and many other uncommonly adopted but excellent things.
My company, TrikeApps, was responsible for the first Lesswrong codebase. I’m very happy to have been involved, and very happy with what our successors have done with the current Lesswrong codebase (for the avoidance of any undue credit, we did not contribute to the new codebase, and would have been proud of the result if we had).
I’m trying to apply the ITT to your position, and I’m pretty sure I’m failing (and for the avoidance of doubt I believe that you are generally very well informed, capable and are here engaging in good faith, so I anticipate that the failing is mine, not yours). I hope that you can help me better understand your position:
My background assumptions (not stated or endorsed by you):
Conditional on a contribution (a post, a comment) being all of (a) subject to a reasonably clear interpretation (for the reader alone, if that is the only value the reader is optimising for, or otherwise for some (weighted?) significant portion of the reader community), (b) with content that is relevant and important to a question that the reader considers important (most usually the question under discussion), and (c) that is substantially true, and it is evident that it is true from the content as it is presented (for the reader alone, or the reader community), then…
My agreement with the value that I think you’re chasing:
… I agree that there is at least an important value at stake here, and the reader upvoting a contribution that meets those conditions may serve that important value.
Further elaboration of my background assumptions:
If (a) (clear interpretation) is missing, then the reader won’t know there’s value there to reward, or must (should?) at least weigh that value against the harms — which I think are clear — of the reader or others misinterpreting the data offered.
If (b) (relevant content) is missing, then… perhaps you like rewarding random facts? I didn’t eat breakfast this morning. That’s clear and true, but I really don’t expect to be rewarded for sharing it.
If (c) (evident truth) is missing, then either (not evident) you don’t know whether to reward the contribution or not, or (not true) surely the value is negative?
My statement of my confusion:
Now, you didn’t state these three conditions, so you obviously get to reject my claim of their importance. Yet I’ve pretty roundly convinced myself that they’re important, and that you’re likely to agree (absent some very clever but probably nit-picky edge case, which I’ve been around Lesswrong long enough to know is quite likely to show up). Other readers should note just how wildly I’m inferring here; if Vladimir_Nesov doesn’t respond, please don’t assume that they actually implied any of this.

You also report that you upvoted orthonormal’s comment (I infer orthonormal’s comment rather than RyanCarey’s, because you quoted “30 points of karma”, which didn’t apply to RyanCarey’s comment). So I’m trying to work out: what interpretation you took from orthonormal’s comment (the clearest interpretation I managed to find is the one I detailed in my earlier comment: that orthonormal based their opinion overwhelmingly on their first impression and didn’t update on subsequent data); whether you think the comment shared relevant data (did you think orthonormal’s first impression was valuable data pertaining to whether Leverage and Geoff were bad? did you think the data relevant to some other valuable thing you were tracking, that might not be what other readers would take from the comment?); and whether you think orthonormal’s data was self-evidently true (do you have other reason to believe that orthonormal’s first impressions are spectacular? did you see some other flaw in the reasoning in my earlier comment?).
So, I’m confused. What were you rewarding with your upvote? Were you rewarding (orthonormal’s) behaviour that you expect will be useful to you but misleading for others, or rewarding behaviour that you expect would be useful on balance to your comment’s readers (if so, what behaviour, and how would it be useful)? If my model is just so wildly wrong that none of these questions makes sense to answer, can you help me understand where I fell over?
(To the inevitable commenter who would, absent this addition, jump in and tell me that I clearly don’t know what an ITT is: I know that what I have written here is not what it looks like to try to pass an ITT — I did try, internally, to see whether I could convince myself that I could pass Vladimir_Nesov’s ITT, and it was clear to me that I could not. This is me identifying where I failed — highlighting my confusion — not trying to show you what I did.)
Edit 6hrs after posting: formatting only (I keep expecting Github Flavoured Markdown, instead of vanilla Markdown).