Writing this post as if it’s about AI risk specifically seems weirdly narrow.
It seems to be a pattern across most of society that young people are generally optimistic about the degree to which large institutions/society can be steered, and older people who’ve tried to do that steering are mostly much less optimistic about it. Kids come out of high school/college with grand dreams of a great social movement which will spur sweeping legislative change on X (climate change, animal rights, poverty, whatever). Unless they happen to pick whichever X is actually the next hot thing (gay rights/feminism/anti-racism in the past 15 years), those dreams eventually get scaled back to something much smaller, and also get largely replaced by cynicism about being able to do anything at all.
Same on a smaller scale: people go into college/grad school with dreams of revolutionizing X. A few years later, they’re working on problems which will never realistically matter much, in order to reliably pump out papers which nobody will ever read. Or, new grads go into a new job at a big company, and immediately start proposing sweeping changes and giant projects to address whatever major problems the company has. A few years later, they’ve given up on that sort of thing and either just focus on their narrow job all the time or leave to found a startup.
Given how broad the pattern is, it seems rather ridiculous to pose this as a “trauma” of the older generation. It seems much more like the older generation just has more experience, and has updated toward straightforwardly more correct views of how the world works.
Experiences like this can easily lead to an attitude like “Screw those mainstream institutions, they don’t know anything and I can’t trust them.”
Also… seriously, you think that just came from being ignored about AI? How about that whole covid thing?? It’s not like we’re extrapolating from just one datapoint here.
If someone older tells you “There is nothing you can do to address AI risk, just give up”, maybe don’t give up. Try to understand their experiences, and ask yourself seriously if those experiences could turn out differently for you.
My actual advice here would be: first, nobody ever actually advises just giving up. I think the thing which is constantly misinterpreted as “there is nothing you can do” is usually pointing out that somebody’s first or second idea for how to approach alignment runs into some fundamental barrier. And then the newbie generates a few possible patches which will not actually get past this barrier, and very useful advice at that point is to Stop Generating Solutions and just understand the problem itself better. This does involve the mental move of “giving up”—i.e. accepting that you are not going to figure out a viable solution immediately—but that’s very different from “giving up” in the strategic sense.
(More generally, the field as a whole really needs to hold off on proposing solutions more, and focus on understanding the problem itself better.)
Writing this post as if it’s about AI risk specifically seems weirdly narrow.
I disagree. Parts 2-5 wouldn’t make sense to argue for a random other cause area that people go to college hoping to revolutionize. Parts 2-5 are about how AI is changing rapidly, and going to continue changing rapidly, and those changes result in changes to discourse, such that it’s more-of-a-mistake-than-for-other-areas to treat humanity as a purely static entity that either does or doesn’t take AI x-risk seriously enough.
By contrast, animal welfare is another really important area that kids go to college hoping to revolutionize and end up getting disillusioned, exactly as you describe. But the facts-on-the-ground and facts-being-discussed about animal welfare are not going to change as drastically over the next 10 years as the facts about AI. Generalizing the way you’re generalizing from other cause areas to AI is not valid, because AI is in fact going to be more impactful than most other things that ambitious young people try to revolutionize. Even arguments of the form “But gain of function research still hasn’t been banned” aren’t fully applicable, because AI is (I claim, and I suspect you believe) going to be more impactful than synthetic biology over the next ~10 years, and that impact creates opportunities for discourse that could be even more impactful than COVID was.
To be clear, I’m not trying to argue “everything is going to be okay because discourse will catch up”. I’m just saying that discourse around AI specifically is not as static as the FAE might lead one to feel/assume, and that I think the level of faith in changing discourse among the ~30 people I’m thinking of when writing this post seems miscalibratedly low.
I agree parts 2-5 wouldn’t make sense for all the random cause areas, but they would for a decent chunk of them. CO2-driven climate change, for example, would have been an excellent fit for those sections about 10 years ago.
That said, insofar as we’re mainly talking about level of discourse, I at least partially buy your argument. On the other hand, the OP makes it sound like you’re arguing against pessimism about shifting institutions in general, which is a much harder problem than discourse alone (as evidenced by the climate change movement, for instance).
the level of faith in changing discourse among the ~30 people I’m thinking of when writing this post seems miscalibratedly low.
The discourse that you’re referring to seems likely to be getting Goodharted, so it’s not a good proxy for whether institutions will make sane decisions about world-ending AI technology. A test that would distinguish these variables would be to make logical arguments on a point that’s not widely accepted. If the response is updating or logical counterargument, that’s promising; if the response is some form of dismissal, that’s evidence that the underlying generators of non-logic-processing are still there.
This is a sign that humanity is changing, and adapting somewhat to the circumstances presented by AI development.
It is evidence of that, but it’s not super strong, and in particular it doesn’t do much to distinguish “the generators of why humanity was suicidally dismissive of information and reasoning have changed” from “some other more surface thing has changed, e.g. some low-fidelity public Zeitgeist has shifted which makes humans pay a token obeisance to the Zeitgeist, but not in a way that implies that key decision makers will think clearly about the problem”. The above comment points out that we have other reason to think those generators haven’t changed much. (The latter hypothesis is a paranoid hypothesis, to be sure, in the sense that it claims there’s a process pretending to be a different process (matching at a surface level the predictions of an alternate hypothesis) but that these processes are crucially different from each other. But paranoid hypotheses in this sense are just often true.) I guess you could say the latter hypothesis also is “humanity changing, and adapting somewhat to the circumstances presented by AI development”, but it’s not the kind of “adaptation to the circumstances” that implies that now, reasoning will just work!

Not to say, don’t try talking with people.
Yes, my experience of “nobody listened 20 years ago when the case for caring about AI risk was already overwhelmingly strong and urgent” doesn’t put strong bounds on how much I should anticipate that people will care about AI risk in the future, and this is important; but it puts stronger bounds on how much I should anticipate that people will care about counterintuitive aspects of AI risk that haven’t yet undergone a slow process of climbing in mainstream respectability, even if the case for caring about those aspects is overwhelmingly strong and urgent (except insofar as LessWrong culture has instilled a general appreciation for things that have overwhelmingly strong and urgent cases for caring about them), and this is also important.
It’s probably worth noting that I take the opposite update from the covid crisis: it was much easier than expected to get governments to listen to us and do marginally more sensible things. With better preparation and larger resources, it would have been possible to cause an order of magnitude more sensible things to happen. It’s also worth noting that some governments were highly sensible and agentic about covid.
It seems to be a pattern across most of society that young people are generally optimistic about the degree to which large institutions/society can be steered, and older people who’ve tried to do that steering are mostly much less optimistic about it. Kids come out of high school/college with grand dreams of a great social movement which will spur sweeping legislative change on X (climate change, animal rights, poverty, whatever). Unless they happen to pick whichever X is actually the next hot thing (gay rights/feminism/anti-racism in the past 15 years), those dreams eventually get scaled back to something much smaller, and also get largely replaced by cynicism about being able to do anything at all.
Remember all of those nonprofits the older generation dedicated to AI safety-related activism; places where people like Eliezer spent their days trying to convince people their concerns are correct instead of doing math? All of those hundreds of millions of dollars of funding that went to guys like Rob Miles and not research houses? No? I really want to remember, but I can’t.
Seriously, is this a joke? This comment feels like it was written about a completely different timeline. The situation on the ground for the last ten years has been one where the field’s most visible and effective activists have full-time jobs doing math and ML research surrounding the alignment problem or existential risk in general, or even hold completely unrelated research positions at random universities. We have practically poured 90% of all of our money and labor into MIRI and MIRI clones instead of raising the alarm. When people here do propose raising the alarm, the reaction they get is uniformly “but the something something contra-agentic process” or “activism? are you some kind of terrorist?!”
Even now, after speaking to maybe a dozen people referred to me after my pessimism post, I have not found one person who does activism work full time. I know a lot of people who do academic research on what activists might do if they existed, but as far as I can tell no one is actually doing the hard work of optimizing their leaflets. The closest I’ve found are Vael Gates and Rob Miles, people who instead have jobs doing other stuff, because despite all of the endless bitching about how there are no serious plans, no one has ever decided to either pay these guys for, or organize, the work they do in between their regular jobs.
A hundred people individually giving presentations to their university or nonprofit heads and then seething when they’re not taken seriously is not a serious attempt, and you’ll forgive me for not just rolling over and dying.
Update ~20 minutes after posting: Took a closer look; it appears Rob Miles might be getting enough from his patreon to survive, but it’s unclear. It’s weird to me that he doesn’t produce more content if he’s doing this full time.
places where people like Eliezer spent their days trying to convince people their concerns are correct instead of doing math?
What? Eliezer took 2 years to write the sequences, during which he did very little direct alignment work. And in the years before that, SingInst was mostly an advocacy org, running the Singularity Summits.
That’s right. I put him in the basket of “the field’s most visible and effective activists”. He’s done more than, literally, 99.9999999% of the human population. That’s why it’s so frustrating that his activism has mostly been done in pursuit of the instrumental goal of recruiting geniuses for MIRI to do higher-quality maths research, not trying to convince the broader public. The sequences were fantastic, probably way better than I could have done if I were trying to drum up public support directly.
But they were never a scalable solution to that problem. People who read the sequences don’t have a bite sized way of transferring that information to others. They can go up to their friends and say “read the sequences”, but they’re left with no more compact lossy way to transmit the pertinent details about AGI risk. It’s simply not an attempt to be memetic; something like 1700 pages of content, where the reproducible core idea is probably ten pages. On top of that, he deliberately filters out people with incorrect beliefs about religion, politics, etc., who might otherwise agree with him and be willing to support the narrow cause he champions.
As far as I can tell this was all by design. Eliezer did not, and perhaps still does not, think it’s a worthwhile strategy to get general public & academic support for AGI risk management. He will do a presentation or two at a prestigious university every so often, but doesn’t spend most of his time reaching people who aren’t Fields Medalists. When he does reach one of those, he sends them to work in his research house instead of trying to get them to do something as effective as writing the sequences was. I shudder to imagine the future we might have had if 10 full-Eliezers and 50 semi-Eliezers had been working on that problem full time for the last fifteen years.
Note: This is a Dumbledore Critique. It’s not Eliezer’s God-given duty to save the world, and 99.99% of the planet’s intelligentsia gets a much stronger ding for doing nothing at all, or worse.
But they were never a scalable solution to that problem. People who read the sequences don’t have a bite sized way of transferring that information to others. They can go up to their friends and say “read the sequences”, but they’re left with no more compact lossy way to transmit the pertinent details about AGI risk.
I don’t think anything like that exists. It’s been tried in hopeless floundering attempts, by people too ignorant to know how hopeless it is; nobody competent has tried it because anybody competent sees that they don’t know how to do that.
I’m not particularly good at persuading people of things. So, consider the fact that I convinced my father, who is a nontechnical, religious 60yo man, that AGI was a serious problem within a 45-60 minute conversation. To the point that he was actually, unironically concerned that I wasn’t doing more to help. I get this reaction or something equivalent regularly, from people of very different backgrounds, of different intelligence levels, ethnicities, political tribes. Not all of them go off and devote their lives to alignment like you did, but they at least buy into an intellectual position and now have a stake in the ground. With pleasant reminders, they often try to talk to their friends.
I didn’t have to get him to change the gears in his mind that led him to Christianity in order to get him to agree with me about AI, just like environmentalists don’t have to turn Democrats into expert scientists to get them to understand global warming. For every person for whom the “convergent instrumental goals” thing has to be formalized, there are people who just get it, who defer their understanding to others, or who are able to use some completely different model that drags them to the same conclusions. There’s a difference between “the arguments and details surrounding AGI risk sufficient to mobilize” and “the fully elaborated causal chain”.
Obviously “go talk to everyone until they agree” isn’t a scalable solution, and I don’t have a definitive one, or else I’d go do it. Perhaps Arbital was partly an attempt to accelerate this process? But you can see why it seems highly counterintuitive to me that it would be literally impossible to bring people onto the AGI risk train without giving them a textbook of epistemology first, to the point that I’d question someone who seems right about almost everything.
(It’s possible you’re talking about “understanding the problem in enough detail to solve it technically” and I’m talking about “doing whatever reasoning they need to to arrive at the correct conclusion and maybe help us talk to DeepMind/FAIR researchers who would be better equipped to solve it technically if they had more peer pressure”, in which case that’s that.)
I shudder to imagine the future we might have had if 10 full-Eliezers and 50 semi-Eliezers had been working on that problem full time for the last fifteen years.
That sounds obviously amazing. Are you under the impression that recruitment succeeded so enormously that there are 10 people that can produce intellectual content as relevant and compelling as the original sequences, but that they’ve been working at MIRI (or something) instead? Who are you thinking of?
I don’t think we got even a single Eliezer-substitute, even though that was one of the key goals of writing the sequences.
Given how broad the pattern is, it seems rather ridiculous to pose this as a “trauma” of the older generation. It seems much more like the older generation just has more experience, and has updated toward straightforwardly more correct views of how the world works.
Agree. The community has suffered from evaporative cooling to an extent, and it has become less welcoming of naive new ideas that have been discussed many times before, much like any virtual community. This may appear as cynicism or trauma, but that’s from the perspective of folks just coming out of Plato’s cave into the bright sunlight. To them, being told that the sun sets or can be eclipsed would also seem to be cynicism and trauma.
it appears Rob Miles might be getting enough from his patreon to survive, but it’s unclear.

Rob Miles is funded by the Long Term Future Fund at a roughly full-time salary: https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants#Robert_Miles___60_000_
That’s genuinely good news to me. However, he’s only made two videos in the past year? I’m not being accusatory, just confused.
He has also been helping a bunch of other people with video content creation. For example: https://www.youtube.com/c/RationalAnimations
Gotcha. That’s good to hear.