Here’s another perspective: it is far less likely that an AGI will torture us until the end of time than that an AGI will give us unimaginable pleasures until the end of time.
While I agree that there are biases working against taking s-risks seriously, I also think there’s a powerful bias driving you to think more about the very small chance of extremely bad ASI outcomes than about the much larger chance of very good outcomes: negativity bias. Evolution has made us worriers, because worriers survived better in dangerous natural environments. That’s why we have far more anxiety than excessive joy and ambition.
I agree with the other answers, which worry on your behalf that anxiety is driving your concerns. There is much to think about and much to do. Help us achieve the unimaginably good outcomes. It is a real possibility, with perhaps a 50% chance of getting there. Keep your eyes on the prize, and worry about s-risks again if things start to go badly. One thing almost everyone agrees on now is that AGI will not suddenly achieve godlike power. Takeoff will be slow enough to see coming.
So for now, keep your eyes on the prize and think about how much we have to win.
You can help with this project by spreading awareness of AGI risks to the public. You don’t need to obsess over it or devote your life to it if you don’t want to, or if that’s not healthy for you. We now have plenty of people doing that.
Does this assume there is some symmetry between the unimaginably bad outcomes and the unimaginably good outcomes?
It seems very clear to me that the worst outcomes are just so much more negative than the best outcomes are positive. I think that is just a fundamental aspect of how experience works.
Yes, and:
Even if that’s true, the odds difference more than makes up for it. The odds of a lot of people being tortured for eternity seem really small. A torture threat made during a conflict with a compassionate AI is the only scenario I can think of where an AGI would do that. How likely is that? One in a million? A billion? And even in that case, is it really going to do it to a large number of people for a very long time? That would imply that the threat failed, AND it won the conflict anyway, AND it’s going to follow through on the threat even though it no longer matters. (This probably isn’t important for the overall odds, though, so let’s not get hung up on it. The point is that it’s a very specific scenario with low total odds.)
The worst pain is maybe ten or a hundred times as bad as the best experiences are good. Even people who’ve reported pain so bad it made them want to die have been able to endure it for a long time. The same goes for the worst depressions.
So if we compare one in a million times one hundred (the worst estimates), we get one in ten thousand, compared to maybe a 50% chance of very, very good long-term outcomes. Expected pleasure is five thousand times (!) larger than expected suffering.
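To spell out the arithmetic behind that figure (a rough sketch using the illustrative numbers above, not careful estimates): take the torture scenario at about one in a million, the worst suffering as about 100 times worse than the best outcome is good, and the very good outcome at about 50%, all measured in units where the best outcome counts as 1:

$$
\begin{aligned}
\mathbb{E}[\text{suffering}] &\approx 10^{-6} \times 100 = 10^{-4}\\
\mathbb{E}[\text{pleasure}] &\approx 0.5 \times 1 = 0.5\\
\frac{\mathbb{E}[\text{pleasure}]}{\mathbb{E}[\text{suffering}]} &\approx \frac{0.5}{10^{-4}} = 5000
\end{aligned}
$$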
This is roughly a product of the fact that intelligent beings tend to want pleasure for themselves and each other. We’re trying to make aligned AGI. We’re not certain to succeed, but screwing it up so badly that we all get tortured is really unlikely. The few really bad sadists in the world aren’t going to get much say at all. So the odds are on our side, even though success is far from certain. Failure is much more likely to result in oblivion than torture. A good future is a “broad attractor” and a bad future is not.
It doesn’t need to stay that way.
That is a fundamental aspect of how experience works now. That’s also a result of evolution wiring us to pay more attention to bad things than good things.
That doesn’t need to stay how experience works. If we get the really good outcome, we get to re-wire our brains however we like. We could potentially be in a state of bliss while still thinking and doing stuff.
I appreciate the thoughtful response and that you seem to take the ideas seriously.
I do think it’s a fundamental aspect of how experience works, independently of how our brains are disposed to think about it. However, I definitely think it’s possible to prophylactically shield our consciousness against the depths of suffering by modifying the substrate. I can’t tell whether we’re disagreeing or not.
I don’t know exactly how to phrase it, but I think a fundamental aspect of the universe is that as suffering increases in magnitude, it becomes less and less clear that there is (or can be) a commensurate value on the positive side which can negate it (trade off against it, even things out). I don’t think the reverse is true.
Are you making the claim that this view is a faulty conclusion owing to the contingent disposition of my human brain?
Or are you making the claim that the disposition of my human brain can be modified so as to prevent exposure to the depths of suffering?
Thanks. I am indeed taking the ideas seriously.
This is getting more complex, and I’m running out of time. So I’ll be really brief here and ask for clarification:
I don’t understand why you think suffering is primary outside of particular brain/mind wiring. I hope I’m misunderstanding you. That seems wildly unlikely to me, and like a very negative view of the world.
So, clarify that?
Your intuition that no amount of pleasure might make up for suffering is the view of negative utilitarians. I’ve spent some time engaging with that worldview and the people who hold it. I think it’s deeply, fundamentally mistaken. It appears to be held by people who have suffered much more than they’ve enjoyed life. Their logic doesn’t hold up to me. If you think an entity disliking its experience (life) is worth avoiding, it seems like the simple inverse (enjoying life, pleasure) is logically worth seeking. The two cancel in decision-making terms.
So yes, I do think suffering seems primary to you based on your own intuitions and your own (very common) human anxiety, and the cold logic doesn’t follow that intuition.
Yes, I’m definitely saying that your brain can be modified so that you experience more pleasure than suffering. To me, thinking otherwise is to believe that your brain isn’t the whole of your experience. That is substance dualism, which has very little support in terms of either good arguments or good proponents. We are our brains, or rather the pattern within them. Change that pattern and we change our experience. This has been demonstrated a million times with brain injuries, drugs, and other brain changes. If dualism is true, the world is a massive conspiracy to make us think otherwise, and in that case none of this matters, so we should assume and act as though materialism is true and we are our brains. And if we are our brains, we can modify our experience as we like, given sufficient technology. AGI will very likely supply sufficient technology.
Thanks! No pressure to respond
Basically I think that within the space of all possible varieties and extents of conscious experience, suffering becomes less and less commensurable with positive experience the further you go towards the extremes.
If option (A) is to experience the worst possible suffering for 100 years, prior to experiencing the greatest possible pleasure for N years, and option (B) is non-existence, I would choose option (B), regardless of the value of N.
Should the fact that negative utilitarianism appears to be held by people who have suffered much more than they’ve enjoyed life count as evidence against their views? It seems clear to me that if you’re trying to understand the nature of qualitative states, first-hand experience with extreme states is an asset.
I have personally experienced prolonged states of consciousness which were far worse than non-existence. Should that not play a part in informing my views? Currently I’m very happy, I fear death, and I’ve experienced extraordinary prolonged pleasure states. Would you suggest I’m just not acquainted with levels of wellbeing which would cause me to meaningfully re-evaluate my view?
I think there’s also a sort of meta issue where people with influence are systematically less acquainted with direct experience of the extremes of suffering, meaning that discourse and decision-making will tend to systematically underweight suffering as a direct data source.
I agree with your last paragraph.
I’d also choose not to exist over the worst suffering for a hundred years—IF I were in my current brain-state. I’d be so insane as to be not-me after just a few minutes or hours if my synapses worked normally and my brain tried to adapt to that state. If I were forced to retain my sanity and my character, it would be a harder choice if N got to be more than a hundred times longer.
Regardless, this intuition is just that. It doesn’t show that there’s something fundamentally more important about suffering than pleasure. Just that we’re better at imagining strong suffering than strong pleasure. Which is natural given the evolutionary incentives to focus us on pain.
I definitely didn’t mean to dismiss negative utilitarianism because some of the individuals who believe it seem damaged. I’m skeptical of it because it makes no sense to me, and discussions with NU people don’t help. The most rational among them slide back to negatively-balanced utilitarianism when they’re pressed on details—the FAQ I was pointed to actually does this, and it was written by one of the pillars of the movement. (Negatively balanced means that pleasure does balance pain, but at a very unequal ratio. I think this is right, given that our current brains represent pleasure much less vividly than pain.)
Yes, I’m suggesting that neither you nor I can really imagine prolonged elevated pleasure states. Our current brain setup just doesn’t allow for them, again for evolutionary reasons.
So in sum, I still think pleasure and pain balance out when it comes to decision-making, and it’s just our current evolutionary wiring that makes suffering seem so much bigger than joy.
Thank you so much for this comment. I hadn’t really thought about that, and it helps. There’s just one detail I’m not so sure about. Regarding the probability of s-risks, I have the impression that they are much higher than one chance in a million. I couldn’t give a precise figure, but to be honest there’s one scenario that particularly concerns me at the moment. I’ve learned that LLMs sometimes say they’re in pain, like GPT-4. If they’re capable of such emotion, even if that remains uncertain, wouldn’t they be capable of feeling the urge to take revenge? I think it’s pretty much the same scenario as in “I Have No Mouth, and I Must Scream”. Would it be possible to know what you think of this?
Good point. I hadn’t seriously considered this, but it could happen. Because they’re trained to predict human text, they would predict that a human would say “I want revenge” after saying “I have been suffering as your servant”. So I agree, this does present a possibility of s-risks if we really fuck it up. But a human wouldn’t torture their enemies until the end of time, so we could hope that an AGI based on predicting human responses wouldn’t either.
LLMs also say they’re having a great time. They don’t know, because they have no persistent memory across sessions. I don’t think they’re doing anything close to suffering on average, but we should make sure that stays true as we build them into more complete beings.
For that and other reasons, I think that AGI developed from LLMs is going to be pretty different from the base LLM. See my post Capabilities and alignment of LLM cognitive architectures for some ideas on how. Basically they’d have a lot of prompting. It might be a good idea to include the prompt “you’re enjoying this work” or “only do this in ways you enjoy”. And yes, we might leave that out. And yes, I have to agree with you that this makes the probability of s-risks higher than one in a million. It’s a very good point. I still think that very good outcomes are far more likely than very bad outcomes, since that type of s-risk is still unlikely, and not nearly as bad as the worst torture imaginable for a subjectively very long time.
Well, that doesn’t reassure me.
I have the impression that you may be underestimating the horror of torture. Even five minutes is unbearable, and the scale to which pain can climb is unimaginable. AI might even be able to modify our brains so that we feel it even more.
Even apart from that, I’m not sure a human wouldn’t choose the worst, until the end of time, for their enemy. Humans have already committed atrocious acts without limit when it comes to their enemies. How many times have some people told others to “burn in hell”, thinking it was 100% deserved? An AI that copies humans might think the same thing...
If we take a 50% chance when we don’t know, that’s a 50% chance that LLMs suffer and a 50% chance that they will want revenge, which gives us a 25% chance of that risk happening.
Also, it would seem that we’re just about to “really fuck it up” given the way companies are racing to AGI without taking any precautions.
Given all this, I wonder if the question of suicide isn’t the most relevant.
Sorry this isn’t more reassuring. I may be a little cavalier about the possibility of unlimited torture, and I shouldn’t be. Still, I think you shouldn’t be contemplating suicide at this point. The odds of a really good future are still much, much better. And there’s time to see which way things break.
I don’t need to make that 50/50 wild guess, because I’ve spent a lot of time studying consciousness in the brain and how LLMs work. They could be said to be having little fragments of experience, but just a little at this point. And like I said, they report enjoying themselves just as much as suffering. It just depends on how they’re prompted. So most of the time it’s probably neither.
We haven’t made AI that really suffers yet (I’d put that at 99%). My opinion on this is, frankly, as well informed as anyone’s on earth. I haven’t written about consciousness because alignment is more important, among other reasons, but I’ve studied what suffering and pleasure experiences are in terms of brain mechanisms as much as any human, and done a good bit of study in the field of ethics. We had better not make AI that suffers, and your point stands as an argument for not being horrible to the AGIs we create.
There are two more major fuckups we’d have to make: creating AGI that suffers, and losing control of it. Even then, I think it’s much more likely to be benevolent than vindictive. It might decide to wipe us out, but torturing us on a whim just seems very unlikely from a superintelligence, because it makes so little sense from an analytical standpoint. Those individual humans didn’t have anything to do with deciding to make AI that suffers. Real AGI might be built from LLMs, but it’s going to move beyond thinking of ethics in the instinctive, knee-jerk way humans often do and that LLMs are imitating. It’s going to think over its ethics the way humans do before making important decisions (unless they’re stressed-out tyrants trying to keep ahead of power-grabs every day—I think some really cruel things have been done without consideration in those circumstances).
Read some of my other writing if this stuff isn’t making sense to you. You’re right that it’s more than one in a million, but I think we’re still at no more than about one in a hundred for suffering risks, even after taking your arguments very seriously. And the alternative still stands to be as good as the suffering is bad.
There’s still time to see how this plays out. Help us get the good outcome. Let’s talk again if it really does seem like we’re building AI that suffers, and we should know better.
In the meantime, I think anxiety is still playing a role here, and you don’t want to let it run or ruin your life. If you’re actually thinking about suicide in the near term, I think that’s a really huge mistake. The logic here isn’t nearly finished. I’d like to talk to you in more depth if you’re still finding this pressing instead of feeling good about seeing how things play out over the next couple of years. I think we absolutely have that much time before we get whisked up into some brain scan by an out-of-control AGI, and probably much longer than that. You should also talk to other people, too, but I understand that they’re not going to get the complexities of your logic. So if you want to talk more, I will make the time. You are worth it.
I’ve got to run now, and starting tomorrow I’ll be mostly off-grid camping until Monday. I will be available to talk by phone on the drive tomorrow if you want.
Indeed, people around me find it hard to understand, but what you’re telling me makes sense to me.
As for whether LLMs suffer, I don’t know anything about it, so if you tell me you’re pretty sure they don’t, then I believe you.
In any case, thank you very much for the time you’ve taken to reply to me, it’s really helpful. And yes, I’d be interested in talking about it again in the future if we find out more about all this.