Simple preferences
A way to describe some preferences and decisions.
Your colleague used to send you their fiction. You respected your colleague but didn’t like the writing. Your colleague has passed away. Would you burn all of their writings?
If you wouldn’t, it means that counterfactual reward (the counterfactual value of their writings) affects you strongly enough.
Your friend liked to listen to your songs (a). You didn’t play them too often (too much of a good thing). Your friend didn’t like to bother other people (b). Your friend has passed away. Would you blast your songs through the whole town, 24/7, until everyone falls off their chairs?
If you would, it means that you’re ready to milk counterfactual reward (a) while ignoring counterfactual reward (b).
All of humanity is dead. You’re the last survivor. You’re potentially immortal, but can’t create new life. You aren’t happy. Would you cling to your life? For how long?
Your answer determines how strongly the counterfactual value of life (the value it would have if people were still alive) affects you now. If that counterfactual value is strong enough, you can only keep on living.
You want your desires to be satisfied (e.g. “communication with other people”), even in the future, when your desires change. But do you want that in a future where you’ve been turned into a zombie? All the zombie wants is to play in the dirt all day.
If the answer is “no”, that means the value of your desires can be updated only to a certain counterfactual degree. You can’t go from a desire with great value (“I want to communicate with others”) to a desire with almost zero counterfactual value (“I want to play in the dirt all day”).
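None of this is formalized in the post; purely as an illustration, here is a minimal sketch (in Python, with made-up numbers and option names) of one way “counterfactual reward” could enter a decision rule: a single weight applied consistently to every reward that would have accrued to someone who is no longer around.

```python
# Toy model (my own illustration, not from the original text): decisions weigh
# rewards that would accrue to people who are no longer around
# ("counterfactual rewards") by a single coefficient w in [0, 1].

def total_value(actual_reward, counterfactual_rewards, w):
    """Actual reward plus w times every counterfactual reward, positive or negative."""
    return actual_reward + w * sum(counterfactual_rewards)

# Colleague's manuscripts: burning them frees shelf space (+1 for you)
# but destroys writings the colleague valued (-10 counterfactually).
burn = total_value(1.0, [-10.0], w=0.5)
keep = total_value(0.0, [0.0], w=0.5)
print(burn, keep)  # -4.0 0.0 -> with w = 0.5 you keep the writings

# Blasting songs 24/7: the friend would have enjoyed them (a = +5),
# but would also have hated bothering the whole town (b = -20).
# Consistency means w multiplies BOTH counterfactual terms, not just (a).
blast = total_value(2.0, [5.0, -20.0], w=0.5)
stay_quiet = total_value(0.0, [0.0], w=0.5)
print(blast, stay_quiet)  # -5.5 0.0 -> milking (a) while ignoring (b) is ruled out
```

Under this toy rule, “milking (a) while ignoring (b)” is simply an inconsistent use of the weight, which is the pattern the thought experiments above are probing.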
Does rationality miss something?
You can “objectively” define anything in terms of relations to other things.
There’s a simple process for describing a thing in terms of relations to other things.
Bayesian inference is about updating your belief in terms of relations to your other beliefs. Maybe the real truth is infinitely complex, but you can update towards it.
This “process” is about updating your description of a thing in terms of relations to other things. Maybe the real description is infinitely complex, but you can update towards it.
(One possible contrast: Bayesian inference starts with a belief spread across all possible worlds and tries to locate a specific world. My idea starts with a thing in a specific world and tries to imagine equivalents of this thing in all possible worlds.)
The Bayesian process is described by Bayes’ theorem. My “process” isn’t described yet.
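For reference, the update rule that does this job in the Bayesian case is Bayes’ theorem:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

where H is a hypothesis and E is new evidence. An analogous explicit rule for updating “descriptions of things in terms of other things” is exactly what is missing so far.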
My idea was inspired by a weird/esoteric topic: I was amazed by the differences between people, between surreal paintings, between videogame levels. For example, each painting felt completely unique, yet connected to all the other paintings.
My most specific ideas are about that strange topic.
There are places (2D/3D shapes).
There are orders of places. An “order” for a place is like a context for a concept.
In an order, a place has “granularity”. “Granularity” is like a texture (take a look at some textures and you’ll know what it means). It’s how you split a place into pieces. It affects the “level” at which you look at the place, what patterns you notice in it, and which parts you pay more attention to.
When you add some minor rules, there appear consistent and inconsistent ways to distribute “granularity” among the places you compare. With those rules, “granularity” lets you describe one place in terms of the other places: you assign each place a specific “granularity”, but all those granularities depend on each other.
In Bayesian inference, you try to consistently assign probabilities to events, with the goal of describing outcomes in terms of each other. Here, you try to consistently assign “granularity” to concepts, with the goal of describing the concepts in terms of each other (a toy sketch of what “consistent” could mean is given below).
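The post doesn’t define those rules; purely as a hypothetical sketch, suppose each place gets a single positive “granularity” number and the rules are pairwise ratio constraints (“place A is k times finer than place B”). Then a “consistent distribution of granularity” looks like solving a small constraint system, and an “inconsistent” one has no solution at all:

```python
# Hypothetical sketch (my own toy formalization, not the model from the post):
# each place gets one positive "granularity" number, and the "rules" are
# pairwise ratio constraints ("A is k times finer than B").
import math

# Assumed example rules: granularity(A) / granularity(B) should equal k.
rules = [("house", "street", 0.5), ("street", "town", 0.5), ("house", "town", 0.25)]

def consistent(rules, tol=1e-9):
    """Return a granularity assignment satisfying all ratio rules, or None if they contradict.

    Assumes the rules arrive in an order that keeps the constraint graph connected."""
    log_g = {}  # log-granularity of each place, relative to an arbitrary anchor
    for a, b, k in rules:
        if a not in log_g and b not in log_g:
            log_g[b] = 0.0                                   # anchor b arbitrarily
        if a in log_g and b not in log_g:
            log_g[b] = log_g[a] - math.log(k)
        elif b in log_g and a not in log_g:
            log_g[a] = log_g[b] + math.log(k)
        elif abs((log_g[a] - log_g[b]) - math.log(k)) > tol:
            return None                                      # contradiction: no consistent assignment
    return {place: round(math.exp(v), 6) for place, v in log_g.items()}

print(consistent(rules))
# {'street': 1.0, 'house': 0.5, 'town': 2.0} -- every place described relative to the others

print(consistent(rules + [("house", "town", 0.9)]))
# None -- like probabilities that violate the axioms, the distribution is inconsistent
```

The point of the analogy: just as a probability only means something relative to the whole distribution it sits in, each “granularity” here only means something relative to the granularities of the other places.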
I have a post with an example: “Colors” of places. There you can find an example of what the “rules” of granularity distribution may be. But I’m not enough of a math person to put numbers on it or turn it into a more specific model.
I think “granularity” (or something similar) is related to other human concepts and experiences too. I think it’s a key, needed concept: it’s needed to describe qualitative differences and qualitative transitions between things. Bayesian inference and utilitarian moral theories describe only quantitative differences, and sometimes this leads to strange results (the “torture vs. dust specks” thought experiment, “Pascal’s mugging”, maybe even the “Doomsday argument”), because those theories can’t take any context into account. If we want to describe a new way of analyzing reality, we need to describe something a little bit different, I guess.