I have a friend who spent years working on existential risk. Over time his perception of the risks increased, while his perception of what he could do about them decreased (and the latter was more important). Eventually he dropped out of work in a normal sense to play video games, because the enjoyment was worth more to him than what he could hope to accomplish with regular work. He still does occasional short-term projects, when they seem especially useful or enjoyable, but his focus is on generating hedons in the time he has left.
I love this friend as a counter-example to most of the loudest voices on AI risk. You can think p(doom) is very high and have that be all the more reason to play video games.
I don’t want to valorize this too much, because I don’t want retiring to play video games to become the cool new thing. The admirable part is that he did his own math and came to his own conclusions in the face of a lot of social pressure to do otherwise.
I know people like this. I really don’t understand people like this. Why not take on the challenge of playing real life as if it were a video game with crushing difficulty? Though maybe that’s just me, who played most games on very hard difficulty back when I still played video games. There is probably not one single reason people do this, but I don’t get being crushed by doom. At least for me, the heuristic of never giving up, at least not consciously (I probably can’t muster a lot of will while being disassembled by nanobots, because of all the pain, you know), seemed to work really well. By enduring long enough, I ended up reasoning myself into a stable state. I wonder if the same would have happened for your friend had he endured longer.
Because gamification is for things with a known correct answer. Solving genuine unknowns requires a stronger connection with truth.
I am not quite sure what the correct answer is for playing Minecraft (let’s ignore the Ender Dragon, which did not exist when I played it).
I think there is a correct answer for what to do to prevent AI doom: take actions that achieve high expected value in your world model. If you care a lot about the universe, this translates to “take actions that achieve high expected value on the goal of preventing doom.”
So this only works if you really care about the universe. Maybe I care an unusual amount about the universe. If there were a button I could press that would kill me but save the universe, I would press it, at least in the current world we are in. Sadly it isn’t that easy. If you don’t care about the universe sufficiently compared to your own well-being, the expected value of playing video games would actually be higher, and playing video games would be the right answer.
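To spell out that comparison, here is a minimal sketch of the decision rule I have in mind, assuming a simple weighted sum of two utilities (the weights $w_{\text{self}}$ and $w_{\text{univ}}$ are my own illustrative notation, not anything from the comments above):

$$a^{*} \;=\; \arg\max_{a}\;\Big(\, w_{\text{self}}\,\mathbb{E}\big[U_{\text{self}}(a)\big] \;+\; w_{\text{univ}}\,\mathbb{E}\big[U_{\text{univ}}(a)\big] \,\Big)$$

Under this sketch, gaming wins exactly when the hedonic gap $\mathbb{E}[U_{\text{self}}(\text{games})] - \mathbb{E}[U_{\text{self}}(\text{work})]$, scaled by $w_{\text{self}}$, exceeds your weighted marginal effect on doom, $w_{\text{univ}}\big(\mathbb{E}[U_{\text{univ}}(\text{work})] - \mathbb{E}[U_{\text{univ}}(\text{games})]\big)$. A large enough $w_{\text{univ}}$ makes even a tiny expected reduction in doom dominate; a small one, or a tiny marginal effect, makes the hedons win.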
I think this perspective of “if I can’t affect p(doom) enough, let me generate hedons instead” makes a lot of sense. But as someone who has spent way, way more than his fair share of time on video games (and who still spends a lot of time on them), I want to make the somewhat nitpicky point that video games are not necessarily the hedon-optimizing option.
Here’s an alternative frame, and one into which I also fall from time to time: Suppose that, for whatever reason (be it due to x-risk; notoriously poor feedback loops in AI alignment research; or, in my case, past bouts of depression or illness), the fate of the world / your future / your health / your project / your day seems hard to affect and thus outside of your control (external locus of control). Then video games counteract that by giving you control (internal locus of control). Maybe I can’t affect <project>, but I can complete quests or puzzles in games. Games are designed to allow for continuous progress, after all.
Or as Dr. K of HealthyGamer puts it, video games “short-circuit the reward circuit” (paraphrased). Roughly, the brain rewards us for doing stuff by generating feelings of accomplishment or triumph. But doing stuff in the real world is hard, and in video games it’s easy. So why do the former? In this sense, video games are a low-level form of wireheading.
Also, excessive gaming can result in anhedonia, which seems like a problem for the goal of maximizing hedons.
To tie this back to the start: if the goal is to maximize hedons, activities other than gaming may be much better for that purpose (cf. goal factoring). If the goal is instead to (re)gain a sense of control, then video games seem more optimized for that.
For a lot of people, especially people who aren’t psychologically stable, this is very, very good general advice around existential risk.
To be clear, I think your friend has an overly pessimistic worldview on existential risk, but I genuinely respect that he recognized his capabilities weren’t enough to tackle it productively, and that he backed away from the field once he saw his own limitations.
man these seem like really unnecessarily judgemental ways to make this point
While I definitely should have been more polite in expressing those ideas, I do think they’re important to convey, especially the first one, as I really, really don’t want people to burn themselves out or develop anxiety or depression from doing something they don’t want to do, or don’t even like doing.
I definitely will be nicer about expressing those ideas, but they’re important enough that I think these insights need to be shared with a lot of people, especially those in the alignment community.