(Not to be confused with the Trevor who works at Open Phil)
trevor
The content/minute rate is too low: it follows 1960s film standards, when audiences weren’t interested in science fiction films unless concepts were introduced to them very, very slowly (at the time they were quite satisfied by this due to lower standards, much as with Shakespeare).
As a result it is not enjoyable (people will be on their phones) unless you spend much of the film either thinking or talking with friends about how it might have affected the course of science fiction as a foundational work in the genre (almost every sci-fi fan and writer at the time watched it).
Tenet (2020) by Christopher Nolan revolves around recursive thinking and responding to unreasonably difficult problems. Nolan introduces the time-reversed material as the core dynamic, then iteratively increases the complexity from there, in ways specifically designed to ensure that as much of the audience as possible picks up as much recursive thinking as possible.
This chart describes the movement of all key characters and plot elements through the film; it is actually very easy to follow for most people. But you can also print out a bunch of copies and hand them out before the film (it isn’t a spoiler so long as you don’t look closely at the key).
Most of the value comes from Eat the Instructions-style mentality, as both the characters and the viewer pick up on unconventional methods to exploit the time reversing technology, only to be shown even more sophisticated strategies and are walked through how they work and their full implications.
It also ties into scope sensitivity, but it focuses deeply on the angles of interfacing with other agents and their knowledge, and responding dynamically to mistakes and failures (though not anticipating them), rather than simply orienting yourself to mandatory number crunching.
The film touches on cooperation and cooperation failures under anomalous circumstances, particularly the circumstances introduced by the time reversing technology.
The most interesting of these was also the easiest to miss:
The impossibility of building trust between the hostile forces from the distant future and the characters in the story who make up the opposition faction. The antagonist, dying from cancer and selected because his personality was predicted to be hostile to the present and sympathetic to the future, was simply sent instructions and resources from the future, and decided to act as their proxy in spite of ending up with a great life and being unable to verify the instructions’ accuracy or the true goals of the hostile force. As a result, the protagonists of the story ultimately build a faction that takes on a life of its own and dooms both their friends and the entire human race to death by playing a zero-sum survival game with the future faction, due to their failure throughout the film to think sufficiently laterally and their inadequate exploitation of the time-reversing technology.
Screen arrangement suggestion: Rather than everyone sitting in a single crowd and commenting on the film, we split into two clusters, one closer to the screen and one further.
The people in the front cluster hope to watch the film quietly, the people in the back cluster aim to comment/converse/socialize during the film, with the common knowledge that they should aim to not be audible to the people in the front group, and people can form clusters and move between them freely.
The value of this depends on what film is chosen; e.g. “2001: A Space Odyssey” is not watchable without discussing historical context, and “Tenet” ought to have some viewers wanting to better understand the details of whatever time-travelly thing just happened.
“All the President’s Men” by Alan J. Pakula
“Oppenheimer” by Christopher Nolan
“Tenet” by Christopher Nolan
I’m not sure what to think about this; Thomas777’s approach is generally a good one, but for both of these examples, a shorter route (one that is cleanly mutually understood to be adding insult to injury, as a flex by the aggressor) seems pretty probable. Free speech/censorship might be a better example, as plenty of cultures are less aware of information theory and progress.
I don’t know what proportion of the people in the US Natsec community understand ‘rigged psychological games’ well enough to occasionally read books on the topic, but the bar for hopping onto fads is pretty low: tricks only require one person to notice or invent them, and then they can simply get popular (with all kinds of people, with varying capabilities/resources/technology and bandwidth/information/deficiencies, hopping on the bandwagon).
I notice that there are just shy of 128 here and they’re mostly pretty short, so you can start the day by flipping a coin 7 times to decide which one to read. Not a bisection search: just convert the seven flips to binary and pick the corresponding number. At first, you only have to start over and do another 7 flips if you land on 1111110 (126), 1111111 (127), or 0000000 (128).
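The flips-to-number procedure above can be sketched in a few lines of Python (the item count of 125 and the function name are illustrative assumptions; “just shy of 128” isn’t given exactly):

```python
import secrets

def pick_reading(n_items: int = 125) -> int:
    """Pick one of n_items readings uniformly via simulated coin flips.

    Seven fair flips form a 7-bit binary number; 0000000 is read as 128.
    Any result above n_items is rejected and the flips are redone, so
    every reading from 1 to n_items is equally likely.
    """
    while True:
        flips = [secrets.randbelow(2) for _ in range(7)]   # 7 fair coin flips
        value = int("".join(map(str, flips)), 2)           # binary string -> 0..127
        if value == 0:
            value = 128                                    # treat 0000000 as 128
        if value <= n_items:                               # reject 126, 127, 128
            return value
```

With 125 items, only 3 of the 128 outcomes are rejected, so a redo is needed less than 3% of the time.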
If you drink coffee in the morning, this is a way better way to start the day than social media, as the early phase of the stimulant effect reinforces behavior in most people. Hanson’s approach to various topics is a good mentality to try boosting this way.
This reminds me of dath ilan’s hallucination diagnosis from page 38 of Yudkowsky and Alicorn’s glowfic But Hurting People Is Wrong.
It’s pretty far from meeting dath ilan’s standard though; in fact an x-ray would be more than sufficient, since anyone capable of putting something in someone’s ear would obviously vastly prefer to place it somewhere harder to check, whereas nobody would be capable of defeating an x-ray machine, as metal parts are unavoidable.
This concern pops up in books on the Cold War (employees at every org and every company regularly suffer from mental illnesses at somewhere around their base rates, but things get complicated at intelligence agencies where paranoid/creative/adversarial people are rewarded and even influence R&D funding) and an x-ray machine cleanly resolved the matter every time.
That’s this week, right?
Is reading An Intuitive Explanation of Bayes’ Theorem recommended?
I agree that “general” isn’t such a good word for humans. But unless civilization was initiated right after the minimum viable threshold was crossed, it seems somewhat unlikely to me that humans were very representative of the minimum viable threshold.
If any evolutionary process other than civilization precursors formed the feedback loop that caused human intelligence, then civilization would have hit full swing sooner if that feedback loop had continued pushing human intelligence further. Whether Earth took a century or a millennium between the harnessing of electricity and the first computer was heavily affected by economics and genetic diversity (e.g. Babbage, Lovelace, Turing), but afaik a “minimum viable general intelligence” could plausibly have taken millions or even billions of years under ideal cultural conditions to cross that particular gap.
This is an idea and NOT a recommendation. Unintended consequences abound.
Have you thought about sorting into groups based on carefully-selected categories? For example, econ/social sciences vs quant background with extra whiteboard space, a separate group for new arrivals who didn’t do the readings from the other weeks (as their perspectives will have less overlap), a separate group for people who deliberately took a bunch of notes and made a concise list vs a more casual easygoing group, etc?
Actions like these leave scars on entire communities.
Do you have any idea how fortunate you were to have so many people in your life who explicitly tell you “don’t do things like this”? The world around you has been made so profoundly, profoundly conducive to healing you.
When someone is this persistent in thinking of reasons to be aggressive AND reasons to not evaluate the world around them, it’s scary and disturbing. I understand that humans aren’t very causally upstream of their decisions, but this is the case for everyone, and situations like these go a long way towards causing people like Duncan and Eliezer to fear meeting their fans.
I’m grateful that looking at this case has helped me formalize a concept of oppositional drive, a variable representing the unconscious drive to oppose other humans, with justifications layered on top based on intelligence (a separate variable). Diagnosing children with Oppositional Defiant Disorder is the DSM-5’s way of mitigating the harm when a child has an unusually strong oppositional drive for their age, but that’s because the DSM puts binary categorizations on traits that are actually better represented as variables that in most people are so low as to not be noticed (and some people are in the middle; unusually extreme cases get all the attention; this was covered in this section of Social Dark Matter, which was roughly 100% of my inspiration).
Opposition is… a rather dangerous thing for any living being to do, especially if your brain conceals/obfuscates the tendency/drive whenever it emerges, so even most people in the orangey area probably disagree with having this trait upon reflection and would typically press a button to place themselves more towards the yellow. This is derived from the fundamental logic of trust (which in humans must be built as a complex project that revolves around calibration).
This could have been a post so more people could link it (many don’t reflexively notice that you can easily get a link to a LessWrong quick take or a Twitter or Facebook post by mousing over the date between the upvote count and the poster, which also works via tab and hotkey navigation for people like me who avoid using the mouse/touchpad whenever possible).
(The author sometimes says stuff like “US elites were too ideologically committed to globalization”, but I don’t think he provides great alternative policies.)
Afaik the 1990-2008 period featured government and military elites worldwide struggling to pivot to a post-Cold War era, which was extremely OOD for many leading institutions of statecraft (which for centuries had been constructed around the conflicts of the European wars, then the world wars, then the Cold War).
During the ’90s and 2000s, lots of writing and thinking was done about ways the world’s militaries and intelligence agencies, fundamentally low-trust adversarial orgs, could continue to exist without intending to bump each other off. Counter-terrorism was possibly one thing that was settled on, but it’s pretty well established that global trade ties were deliberately used as a peacebuilding tactic, notably to stabilize the US-China relationship (this started to fall apart after the 2008 recession brought anticipation of American economic/institutional decline scenarios to the forefront of geopolitics).
The thinking of the period might not be very impressive to us, but foreign policy people mostly aren’t intellectuals, and for generations they had been selected by office politics where the office revolved around defeating the adversary, so for many of them it felt like a really big shift in perspective and self-image, sort of like a Renaissance. Then US-Russia-China conflict swung right back and got people thinking about peacebuilding as a ploy to gain advantage, rather than sane civilizational development. The rejection of e.g. US-China economic integration policies had to be aggressive because many elites (and people who care about economic growth) tend to support globalization, whereas many government and especially Natsec elites remember that period as naive.
It’s not a book, but if you like older movies, the 1944 film Gaslight is pretty far back (film production standards have improved quite a bit since then, so for a large proportion of people ’40s films are barely watchable, which is why I recommend this version over the nearly identical British version and the original play), and it was pretty popular among cultural elites at the time, so it’s probably extremely causally upstream of most of the fiction you’d be interested in.
Writing is safer than talking, given the same probability that the timestamped keystrokes and the audio files are both kept.
In practice, the best approach is to handwrite your thoughts as notes, in a room without smart devices and with a door and walls that are sufficiently sound-absorbent, and then type them out in a different room with the laptop (ideally with a USB keyboard so you don’t have to put your hands on the laptop, and the accelerometers on its motherboard, while you type).
Afaik this gets the best ratio of revealed thought process to final product, so you get public information exchanges closer to a critical mass while simultaneously getting yourself further from being gaslit into believing whatever some asshole rando wants you to believe. The whole paradigm where everyone just inputs keystrokes into their operating system willy-nilly needs to be put to rest ASAP, just like the paradigm of thinking without handwritten notes and the paradigm of inward-facing webcams with no built-in cover or way to break the circuit.
TL;DR “habitually deliberately visualizing yourself succeeding at goal/subgoal X” is extremely valuable, but also very tarnished. It’s probably worth trying out, playing around with, and seeing if you can cut out the bullshit and boot it up properly.
Longer:
The universe is allowed to have tons of people intuitively notice that “visualize yourself doing X” is an obviously winning strategy that typically makes doing X a downhill battle if it’s possible at all. So many different people pick it up that you first encounter it in an awful way: e.g. in middle/high school you first hear about it, but the speaker says, in the same breath, that you should use it to feel more motivated to do your repetitive math homework for ~2 hours a day.
I’m sure people could find all sorts of improvements, e.g. an entire field of self-visualization-mancy that provably helps a lot of people do stuff, but the important thing I’ve noticed is to simply not skip that critical step. Eliminate ugh fields around self-visualization, or take whatever means necessary to prevent ugh fields from forming in your idiosyncratic case (also, social media algorithms could have been measurably increasing user retention by boosting content that places ugh fields where they increase user retention by decreasing agency/motivation, with or without the devs being aware of this, since they are looking at inputs and outputs, or maybe just outputs, so this could be a lot more adversarial than you were expecting). Notice the possibility that it might or might not have been a core underlying dynamic in Yudkowsky’s old Execute by Default post or Scott Alexander’s silly hypothetical talent differential comment, without their awareness.
The universe is allowed to give you a brain that so perversely hinges on self-image instead of just taking the action. The brain is a massive kludge of parallel processing spaghetti code and, regardless of whether or not you see yourself as a very social-status-minded person, the modern human brain was probably heavily wired to gain social status in the ancestral environment, and whatever departures you might have might be tearing down Chesterton-Schelling fences.
If nothing else, a takeaway from this was that the process of finding the missing piece that changes everything is allowed to be ludicrously hard and complicated, while the missing piece itself is simultaneously allowed to be very simple and easy once you’ve found it.
“Slipping into a more convenient world” is a good way of putting it; just using the word “optimism” really doesn’t account for how it’s pretty slippy, nor how the direction is towards a more convenient world.
It was more of a 1970s-90s phenomenon actually; if you compare the best 90s movies (e.g. Terminator 2) to the best 60s movies (e.g. 2001: A Space Odyssey), it’s pretty clear that directors just got a lot better at doing more stuff per second. Older movies are absolutely a window into a higher/deeper culture/way of thinking, but OOMs less efficient than e.g. reading Kant/Nietzsche/Orwell/Asimov/Plato. But I wouldn’t be surprised if modern film is severely mindkilling and older film is the best substitute.