Noticing I’ve been operating under a bias where I notice existential risk precursors pretty easily (e.g., biotech, advances in computing hardware), but I notice no precursors of existential safety. To me it is as if technologies that tend to do more good than harm, or at least would improve our odds by their introduction, social or otherwise, do not exist. That can’t be right, surely?...
When I think about what they might be… I find only cultural technologies, or political conditions: the strength of global governance, the clarity of global discourses, perhaps the existence of universities. But that can’t be it. These are all low-hanging fruit, things that already exist. Differential progress is about what could be made to exist.
Probably has something to do with the fact that a catastrophe is an event, and safety is an absence of something. It’s just inherently harder to point at a thing and say that it caused fewer catastrophes to happen. Show me the non-catastrophes. Bring them to me, put them on my table. You can’t do it.
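To make the “you can’t exhibit a non-catastrophe” point concrete, here is a minimal sketch in Python, with entirely made-up rates: the best you can do is compare catastrophe counts across many simulated histories with and without a hypothetical safety technology. The effect shows up only in the aggregate, never in the single history we actually get to observe.

```python
import random

# Toy illustration: a safety technology's effect on catastrophe counts
# is only visible across many counterfactual histories. All rates here
# are invented for illustration.

random.seed(0)

BASE_RATE = 0.02   # assumed yearly catastrophe probability without the tech
SAFE_RATE = 0.01   # assumed yearly catastrophe probability with the tech
YEARS = 100
TRIALS = 10_000

def catastrophes(rate: float) -> int:
    """Count catastrophes over YEARS years at a given yearly rate."""
    return sum(random.random() < rate for _ in range(YEARS))

without_tech = [catastrophes(BASE_RATE) for _ in range(TRIALS)]
with_tech = [catastrophes(SAFE_RATE) for _ in range(TRIALS)]

print("mean catastrophes without tech:", sum(without_tech) / TRIALS)
print("mean catastrophes with tech:   ", sum(with_tech) / TRIALS)
# In any one history you see a couple of events either way; the halved
# rate is only detectable across thousands of histories, which is
# exactly the comparison reality never hands us.
```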
I’d say it’s an aspect of negativity bias, where we focus more on the bad things than on the good things. The same pattern is already showing up in AI safety, and in AI in general, so your bias is essentially a facet of it.
There’s a sense in which negativity bias is just rationality: you focus on the things you can improve, because that’s where the work is. These things are sometimes called “problems”. The thing is, the healthy form of this knows that the work can actually be done, so it should be very interested in, and aware of, technologies of existential safety. That is where I am, and have been for a long time.
The problem is that focusing on a negative frame, enabled by negativity bias, will blind you to solutions, and is in general a great way to get depressed fast, which kills your ability to solve problems. Even more importantly, the problems might be imaginary, created by the negativity bias itself.
What is a negative frame?
It’s essentially a frame that views things in a negative light, or equivalently, a frame that treats a given issue as negative by default unless action is taken.
For example, climate change can be viewed through a negative frame, where we have to solve the problem or we all die, or through a positive frame, where we can solve the problem with green tech.
I was hoping to understand why people who are concerned about the climate ignore green tech / SRM (solar radiation management).
One effect is that people who want to raise awareness about the severity of an issue have an incentive to avoid acknowledging solutions to it, because that diminishes its severity. But this is an egregore-level phenomenon; there is no individual negative cognitive disposition driving it, as far as I can tell. Mostly, in the case of climate, it seems to be driven by a craving for belonging in a political scene.
The point I was trying to make is that we click on and read negative news, and this skews our perception of what’s happening. Critically, the negativity bias operates regardless of the actual reality of the problem: it doesn’t distinguish between things that are very bad, things that are merely bad but solvable, and things that are not bad at all.
In essence, I’m positing a selection effect: we keep hearing more about the bad things, and less, or nothing, about the good things, so we are biased to believe that our world is more negative than it actually is.
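A minimal sketch of that selection effect, with invented proportions and click weights: the world’s events are mostly fine, but negative events are more likely to be reported and clicked, so the feed ends up looking far darker than the world.

```python
import random

# Toy model of the selection effect: the world is 10% bad events, but
# bad news is sampled into the feed with a (made-up) 9x click weight.

random.seed(0)

events = ["bad"] * 10 + ["good"] * 90       # actual world: 10% bad
CLICK_WEIGHT = {"bad": 9.0, "good": 1.0}    # assumed pull of bad vs. good news

def feed_sample(n: int) -> list[str]:
    """Draw n events into the feed, weighted by click-worthiness."""
    weights = [CLICK_WEIGHT[e] for e in events]
    return random.choices(events, weights=weights, k=n)

feed = feed_sample(10_000)
print("share of bad events in the world:", events.count("bad") / len(events))
print("share of bad events in the feed: ", feed.count("bad") / len(feed))
# Roughly 0.10 vs 0.50: the feed's composition, not the world's,
# drives the reader's sense of how bad things are.
```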
And to connect it to the first comment, the reason you keep noticing precursors to existentially risky technology but not precursors of existentially safe technology, or why this is happening:
To me it is as if technologies that tend to do more good than harm, or at least would improve our odds by their introduction, social or otherwise, do not exist. That can’t be right, surely?...
is essentially an aspect of negativity bias: your information sources emphasize the negative over the positive news, no matter what reality looks like. The link where I got this idea is below:
https://archive.is/lc0aY
Some biotech contributes to existential risk, but other biotech doesn’t. A lot of vaccine technology doesn’t increase existential risk but reduces it, by reducing the danger from viruses. Phage therapy does the same for the risk from infectious bacteria.
LessWrong itself is a form of technology that’s intended to lead to existential risk reduction by facilitating a knowledge community that otherwise wouldn’t exist.
The general idea of CFAR is that the social technology they developed, like double crux, helps people think more clearly and thus reduces existential risk.