It sounds very unchallenging.
Perhaps boring. Pointless.
(But then most utopias sound that way to me.)
there are probably many challenges one can face if they want that. i'm just fine getting my challenges from little things like video games, at least for a while. maybe i'd get back into the challenge of designing my own video games, too; i enjoyed that one.
“Utopias are all alike; every dystopia is horrible in its own way.”—AI Karenbot
I can empathise with the feeling, but I think it stems from the notion that I (used to) find challenges I set for myself “artificial” in some way, so I can’t be happy unless something or somebody else creates them for me. I don’t like this attitude, as it seems like my brain is infantilising me. I don’t want to depend on irreducible ignorance to be satisfied. I like being responsible for myself. I’m trying to capture something vague with vague words, so there are likely many ways to misunderstand me here.
Another point is just that our brains fundamentally learn from reward prediction errors, and this has likely generalised into all sorts of broad heuristics we use in episodic future thinking, which I speculate plays a central role in integrating and propagating new proto-values (aka ‘moral philosophy’).
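To make the “reward prediction error” point concrete: here is a minimal sketch (my own toy illustration, not anything from this thread or a claim about brains) of TD(0) learning on a hypothetical 5-state chain, where the prediction error delta = r + gamma * V(s') - V(s) is the only quantity that drives learning. Every name and parameter below is an assumption made up for the example.

```python
import random

def td0_value_estimates(episodes=500, alpha=0.1, gamma=0.9):
    """Estimate state values on a toy 5-state chain using TD(0).

    The agent drifts rightward; only reaching the final state pays reward.
    The reward prediction error, delta = r + gamma * V(s') - V(s),
    is the sole learning signal: values change only when predictions
    turn out wrong, which is the point the comment gestures at.
    """
    n_states = 5              # chain: states 0..4, reward on reaching state 4
    values = [0.0] * n_states
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            s_next = s + random.choice([0, 1])   # sometimes stall, sometimes advance
            r = 1.0 if s_next == n_states - 1 else 0.0
            delta = r + gamma * values[s_next] - values[s]  # prediction error
            values[s] += alpha * delta                      # error-driven update
            s = s_next
    return values

if __name__ == "__main__":
    # Values rise toward the rewarded end of the chain as errors shrink.
    print(td0_value_estimates())
```

Once the value estimates match what actually happens, delta goes to zero and learning stops, which is one (very loose) way to see why a fully predictable world might register as unrewarding.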