A short story: "The end of meaning"
This is propaganda for my work on improving autonomy. I'm not sure it is actually useful in that regard, but it was fun to write and other people here might get a kick out of it.
Tamara blinked her eyes open. The fact that she could blink, had eyes and was not in eternal torment filled her with elation. They'd done it! Against all the odds, the singularity had gone well. They'd defeated death, suffering, pain and torment with a single stroke. It was the start of a new age for mankind, one ruled not by a cruel nature but by a benevolent AI.
Tamara was a bit giddy about the possibilities. She could go paragliding in Jupiter's clouds, watch supernovae explode and finally finish reading Infinite Jest. But what should she do first? Being a good rationalist, Tamara decided to look at the expected utility of each action. No possible action she could take would reduce anyone's suffering or increase their happiness, because by definition the AI was already maximising those with its superintelligence and human-aligned utility maximisation. She would have to look inside herself to decide which actions to take.
She had long been a believer in self-perfection and self-improvement. There were many different ways she might self-improve: would she improve her piano playing, become an astronomy expert, or plumb the depths of her own brain so that she could choose to safely improve her inner algorithms? Try as she might, she couldn't decide between these options. Any of these changes to herself looked as valuable as any other. None of them would improve her lot in life. She should let the AI decide what she should experience to maximise her eudaimonia.
blip
Tamara struggled awake. That was some nightmare she had had about the singularity. Luckily it hadn't happened yet; she could still fix things, and make the most meaningful contribution in the human race's history by stopping death, suffering and pain.
As she went about her day's business solving decision theory problems, she was niggled by a possibility. What if the singularity had already happened and she was just in a simulation? It would make sense that the greatest feeling for people would be solving the world's greatest problems. If the AI was trying to maximise Tamara's utility, ver might put her in the situation where she could be the most agenty and useful, which would be just before the singularity. There would have to be enough pain and suffering in the world to motivate Tamara to fix it, and enough in her life to make it feel consistent. If so, none of her actions here were meaningful; she was not actually saving humanity.
She should probably continue to try and save humanity, because of indexical uncertainty.
Although if she had this thought, her life would be plagued by doubts about whether it was meaningful or not, so she was probably not in a simulation, as her utility was not being maximised. Probably...
Another thought gripped her: what if she couldn't solve the meaningfulness problem from her nightmare? She would be trapped in a loop.
blip
A nightmare within a nightmare; that was the first time that had happened to Tamara in a very long time. Luckily she had solved the meaningfulness problem long ago, or else the thoughts and worries would have plagued her. We just need to keep humans capable agents and work on intelligence augmentation. It might seem like a longer shot than a singleton AI, requiring people to work together to build a better world, but humans would have a meaningful existence. They would be able to solve their own problems, make their own decisions about what to do based upon their goals, and help other people; they would still be agents of their own destiny.
Serves her right for making self-improvement a foremost terminal value even when she knows that's going to be rendered irrelevant; meanwhile, the loop I'm stuck in is the first six hours spent in my catgirl volcano lair.
Is it possible to make something a terminal value? If so, how?
By believing it’s important enough that when you come up with a system of values, you label it a terminal one. You might find that you come up with those just by analysing the values you already have and identifying some as terminal goals, but “She had long been a believer in self-perfection and self-improvement” sounds like something one decides to care about.
Self-improvement wasn't her terminal value; it was only derived from her utilitarianism. She liked to improve herself and see new vistas because it allowed her to be more efficient in carrying out her goals.
I could have had her spend some time exploring her hedonistic side before looking at what she was becoming (orgasmium) and not liking it from her previous perspective. But the ASI decided that this would scar her mentally and that the two jumps as dreams were the best way to get her out of the situation (or I didn't want to have to try to write highly optimised bliss, one of the two).
That's the reason she liked those things in the past, but "achieving her goals" is redundant now, and she should have known years in advance about that, so it's clear that she's grown so attached to self-improvement that she sees it as an end in itself. Why else would anyone ever, upon deciding to look inside themselves instead of at expected utility, replace thoughts of paragliding in Jupiter with thoughts of piano lessons?
Hedonism isn't bad; orgasmium is bad because it reduces the complexity of fun to maximising a single number.
I don't want to be upgraded into a "capable agent" and then cast back into the wilderness from whence I came; I'd settle for a one-room apartment with food and internet before that, which, as a NEET, I can tell you is a long way down from Reedspacer's Lower Bound.