If this were my introduction to LW, I’d snort and go away. Or maybe stop to troll for a bit—this intro is soooo easy to make fun of.
Well, glad you didn’t choose the first option, then.
The catch-22 I would expect with CFAR’s efforts is that anyone buying their services is already demonstrating a willingness to actually improve his/her rationality/epistemology, and is looking for effective tools to do so.
The bottleneck, however, is probably not the unavailability of such tools, but rather the introspectivity (or lack thereof) that results in a desire to actually pursue change, rather than simply virtue-signal the typical “I always try to learn from my mistakes and improve my thinking”.
The latter mindset is the one most urgently in need of actual improvement, but its bearers won’t flock to CFAR unless it has gained acceptance as an institution you can virtue-signal with (which can confer status). While some universities manage to walk that line (providing status affirmation while actually conferring knowledge), CFAR’s mode of operation would optimally entail “virtue-signalling ML students in on one side, rationality-improved ML students out on the other side”. That is a hard sell, since signalling an improvement in rationality will always be cheaper than the real thing (the uninitiated find it quite hard to tell the difference).
What remains is helping those who have already taken that most important step of effective self-reflection and are looking for further improvement. A laudable service to the community, but probably far from changing general attitudes in the field.
Taking off the black hat, I don’t have a solution to this perceived conundrum.
Climate change, while potentially catastrophic, is not an x-risk. Nuclear war is only an x-risk for a subset of scenarios.
The scarier thought is how often we’re manipulated that way by people who don’t bungle their jobs. The few heuristics we use to identify such mischief are trivially misled: for example, a shill can establish plausibility by posting on inconsequential other topics (on LW that at least incurs a measurable cognitive footprint, which is not the case on, say, Reddit), and then there’s always Poe’s law to consider. Shills, man, shills everywhere!
As the dictum goes, just cuz you’re paranoid …
Reminds me of Ernest Hemingway’s apparent paranoid delusions of being under FBI surveillance … only eventually it turned out he actually was. Well, at least if my family keep playing their roles well enough, from a functional blackbox perspective the distinction may not matter that much anyways. I wonder how they got the children to be such good actors, though. Mind chip implants?
As an aside, it’s kind of curious that Prof. Tsipursky does his, let’s say “social engineering”, under his real name.
Anyways, good entertainment. Though on this forum, it’s more of a guilty pleasure (drama is but a weed in our garth of rationality).
Disclaimer: Only spent 20 minutes on this, so it might be incomplete, or you may already have addressed some of the following points:
At first glance, John Lowe authored two PubMed-listed papers on the topic.
The first appeared in a journal with no peer review (Med. Hypotheses), which has also published material on, e.g., AIDS denialism. From his paper: “We propose that molecular biological methods can provide confirmatory or contradictory evidence of a genetic basis of euthyroid FS [Fibromyalgia Syndrome].” That’s it. Proposing a hypothesis, not providing experimental evidence, paper ends.
The second paper was published in a somewhat controversial, low-impact journal (at least it is peer-reviewed). However, this apparently sole peer-reviewed publication actually contradicts the expected results, so Lowe pulls off a somewhat convoluted move to save his hypothesis:
“TSH, FT3, or FT4 did not correlate with RMR [Resting Metabolic Rate] values. For two reasons, however, ITHR [Inadequate Thyroid Hormone Regulation] cannot be ruled out as the mechanism of FM [Fibromyalgia] patients’ lower RMRs: (1) TSH, FT3, and FT4 levels have not been shown to reliably correlate with RMR values, and (2) these tests evaluate only pituitary-thyroid axis function and cannot rule out central HO and PRTH.”
Yea …
In addition, lots of crank signs: Lowe’s review from 2008, along with his other writings, is “published” in a made-up “journal” which still lists him (from beyond the grave, apparently) as the editor-in-chief.
No peer review, pretending to be an actual journal, a plethora of commercial sites citing him and his research … honi soit qui mal y pense!
I wonder if / how that win will affect estimates on the advent of AGI within the AI community.
You got me there!
Please don’t spam the same comment to different threads.
Hey! Hey. He. Careful there, apropos of word inflation. It strikes with a force of no more than one thousand atom bombs.
Are you really arguing for keeping ideologically incorrect people barefoot and pregnant, lest they harm themselves with any tools they might acquire?
Sounds as good a reason as any!
maybe we should shut down LW
I’m not sure how much it counts, but I bet Chief Ramsay would’ve shut it down long ago. Betting is good, I’ve learned.
As seen in the first episode of the series Caprica, quoth Zoe Graystone:
“(...) the information being held in our heads is available in other databases. People leave more than footprints as they travel through life; medical scans, dna profiles, psych evaluations, school records, emails, recording, video, audio, cat scans, genetic typing, synaptic records, security cameras, test results, shopping records, talent shows, ball games, traffic tickets, restaurant bills, phone records, music lists, movie tickets, tv shows… even prescriptions for birth control.”
I, for one, think that the meme-mix defining our identity could in itself capture (predict) our behavior in large part, forgoing biographical minutiae. Bonesaw in Worm didn’t need precise memories to recreate the Slaughterhouse Nine clones.
Many think we can zoom out from atoms to a connectome, why not zoom out from a connectome to the memes it implements?
“Mind” is a high-level concept; on a base level it is just a subset of specific physical structures. The precise arrangement of water molecules in a waterfall, over time, matches if not dwarfs the Kolmogorov complexity (KC) of a mind.
That is, if you wanted to recreate this or that waterfall exactly as it happened (with the orientation of each water molecule preserved with high fidelity), the strict computational complexity would be way higher than for a comparatively more ordered and static mind.
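As a rough illustration of that scale gap, here is a minimal back-of-envelope sketch in Python. The numbers are my own order-of-magnitude assumptions (tonnage per second, bits per molecule, synapse count), and naive bit counts like these are only loose upper bounds on KC, not KC itself:

```python
# Back-of-envelope: naive bit counts for pinning down a waterfall vs. a mind.
# All quantities are rough order-of-magnitude assumptions, not measurements.

AVOGADRO = 6.022e23

# Assume ~1 tonne (1e6 g) of water passes per second; H2O is ~18 g/mol.
molecules_per_second = (1e6 / 18) * AVOGADRO   # ~3e28 molecules

# Assume ~100 bits to record one molecule's position + orientation per step.
bits_per_molecule = 100
waterfall_bits_per_second = molecules_per_second * bits_per_molecule

# Assume ~1e14 synapses and ~100 bits of state per synapse for a mind.
synapses = 1e14
bits_per_synapse = 100
mind_bits = synapses * bits_per_synapse

print(f"waterfall: ~{waterfall_bits_per_second:.1e} bits per second")
print(f"mind:      ~{mind_bits:.1e} bits")
print(f"ratio:     ~{waterfall_bits_per_second / mind_bits:.1e}x per second")
```

Even granting the mind generous storage, the verbatim waterfall description dwarfs it by many orders of magnitude per second of footage.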
The data doesn’t care what importance you ascribe to it. It’s not as if, say, “power” automatically comes with “hard to describe computationally”. On the contrary, allowing a function to make arbitrary code changes is easier to implement than defining precise power limitations (see the difficulty of constraining an AI’s utility function).
Then there’s the sheer number of mind-phenomena: are you suggesting that adding one necessarily increases complexity? In fact, removing one can increase it as well: if I were to describe a reality in which ceteris is paribus, except that your mind is not actually a mind, then by removing a mind I would have increased overall complexity. That’s not even taking into account that there are plenty of mind-templates around already (implicitly, since KC, even though uncomputable, is optimal), and that for complexity purposes, adding another instance of a template doesn’t necessarily add much (I’m aware that adding even a few bits already comes with a steep penalty; this comment isn’t meant to be exhaustive). See also the alphabet example further on.
Then there’s the illusion that our universe is of low complexity just because the physical laws governing the transition between time-steps are simple. That is mistaken. If we just look at the laws, and start with a big bang that is not precisely informationally described, we get a multiverse host of possible universes, with our universe buried somewhere inside rather than at the beginning, which runs counter to what KC demands. You may say “I don’t care, as long as our universe is somewhere in the output, that’s fine”. But then I propose an even simpler theory of everything: output a long enough sequence of Pi, and you eventually get our universe somewhere down the line as well. So our universe’s actual complexity is enormous, down to the atoms in a stone on a hill on some moon somewhere in the next galaxy. There exists a clear trade-off between explanatory power and conciseness. I used to link an old Hutter lecture on that latter topic a few years ago; I can dig it out if you’d like. (ETA: See for example the paragraph labeled “A” on page 6 in this paper of his.)
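To make that trade-off explicit (the formalization below is mine, in standard KC notation, not something from the thread):

```latex
K(x) \;\le\; \ell(p_{\mathrm{enum}}) + \ell(i_x) + O(1)
```

where \(p_{\mathrm{enum}}\) is a short program enumerating all strings (or the digits of Pi) and \(i_x\) is the index at which our universe’s description \(x\) first appears in that enumeration. For a typical \(x\), \(\ell(i_x)\) is comparable to \(\ell(x)\) itself, so the short enumerator saves nothing: the bits simply migrate from the program into the index.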
The old argument that |“universe + mind”| > |“universe”| is simplistic and ill-applied. Unlike with probabilities, the sequence ABCDABCDABCDABCD can be less complex than ABCDABCDABCDABC: the former is just “repeat ABCD four times”, while the latter needs that same description plus a truncation.
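A crude way to see that description length tracks structure rather than size is to use compressed size as a stand-in for KC. This is only a loose upper-bound proxy, and the example strings are my own:

```python
import os
import zlib

# Compressed size is a crude upper-bound proxy for Kolmogorov complexity;
# real KC is uncomputable, so this is purely illustrative.
patterned = b"ABCD" * 25_000      # 100,000 bytes, but highly regular
random_ish = os.urandom(1_000)    # only 1,000 bytes, but structureless

print(len(zlib.compress(patterned)))   # a few hundred bytes
print(len(zlib.compress(random_ish)))  # ~1,000 bytes: essentially incompressible
```

The longer-but-patterned string compresses far below the shorter-but-random one, which is the same non-monotonicity the ABCD example points at.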
The list goes on, if you want to focus on some aspect of it we can go into greater depth on that. Bottom line is, if there’s a slam dunk case, I don’t see it.
LessWrong has now descended to actually arguing over the Kolmogorov complexity of the Christian God, as if this was a serious question.
Well, there is a lot of motivated cognition on that topic (relevant disclaimer: I’m an atheist in the conventional sense of the word), and it seems deceptively straightforward to answer (mostly to KC-dabblers), but it is in fact anything but. The non-triviality arises from technical considerations, not some philosophical obscurantism.
This may be the wrong comment chain to get into it, and your grandstanding doesn’t exactly signal an immediate willingness to engage in medias res, so I won’t elaborate for the moment (unless you want me to).
If you’re looking for gullible recruits, you’ve come to the wrong place.
Don’t lease the Ferrari just yet.
What are you talking about?
History can be all things to all people, like the shape of a cloud it’s a canvas on which one can project nearly any narrative one fancies.
Their approach reduces to an anti-epistemic affect-heuristic, using the ugh-field they self-generate in a reverse affective death spiral (loosely based on our memeplex) as a semantic stopsign, when in fact the Kolmogorov distance to bridge the terminological inferential gap is but an epsilon.
Good content; however, I’d have preferred “You Are A Mind” or similar: you are an emergent system centered on the brain and influences upon it, or somesuch. It’s just that “brain” has come to refer to two distinct entities: the anatomical brain, and the physical system generating your self. The two are not identical.
Well, I must say my comment’s belligerence-to-subject-matter ratio is lower than yours. “Stamped out”? Such martial language, I can barely focus on the informational content.
The infantile nature of my name-calling actually makes it easier to take the holier-than-thou position (which my interlocutor did, incidentally). There’s a counter-intuitive psychological layer to it which actually encourages dissent, and with it increases engagement on the subject matter (your own comment notwithstanding). With certain individuals at least, which I (correctly) deemed to be the case in the original instance.
In any case, comments on tone alone would be more welcome if accompanied with more remarks on the subject matter itself. Lastly, this was my first comment in over 2 months, so thanks for bringing me out of the woodwork!
I do wish that people were more immune to the allure of drama, lest we all end up like The Donald.
Certainly, within what’s Good (tm) and Acceptable (tm), funding better education in the third world is the most effective method.
However, if you go far enough outside the Overton window, you don’t need credibility, as long as the power asymmetry is big enough. You want food? It only comes laced with a chemical agent that sterilizes you (delivered through a staple crop, the way Golden Rice delivers vitamin A). You don’t need to accept it; you’re free to starve. The failures of colonialism, as well as the most recent forays into the Middle East, stem from the constraint of also having to placate the court of public opinion.
Regardless of this one example, are you taking the position that “the most effective methods are those within the Overton window”? That would be typical, but the actual question would be: is it because changing the Overton window to include more radical options is too hard, or because those more radical options wouldn’t feel good?
… and there is only one choice I’d expect them to make, in other words, no actual decision at all.