I want to discuss a particular failure mode of communication and thinking in general. I think it affects our thinking about AI Alignment too.
Communication. A person has a vague but useful idea (P), applicable on one level of a problem. It sounds similar to another idea (T), applicable on a very different level of the problem. Because of the similarity, nobody can tell the difference between (P) and (T): people aren’t used to mapping ideas onto “levels” of a problem, so they overestimate the vagueness of (P) and dismiss it. Information that should create more clarity (“P is similar to T”) ends up creating more confusion. I think this is irrational: it’s a failure of handling information.
Thinking in general. A person has a specific idea (T), applicable on one level of a problem, and doesn’t try to apply a version of it on a different level, because (1) she isn’t used to doing so and (2) she considers only very specific ideas, and she can’t come up with a specific idea for the other levels. I think this is irrational too: rationalists shouldn’t shy away from vague ideas and vague evidence. That’s a predictable way to lose.
A comical example of this effect:
A: I got an idea. We should cook our food in the oven. Using the oven itself. I haven’t figured out all the details yet, but...
B: We already do this. We put the food in the oven. Then we explode the oven. You can’t get more “itself” than this.
A: I have something else on my mind. Maybe we should touch the oven in multiple places or something. It may turn it on.
B: I don’t want to blow up with the oven!
A: We shouldn’t explode the oven at all.
B: But how does the food get cooked?
A: I don’t know the exact way it happens… but I guess it gets heated.
B: Heated but not exploded? Sounds like a distinction without a difference. Come back when you have a more specific idea.
A: But we have only 2 ovens left, we can’t keep exploding them! We have to try something else!
B can’t understand A, because B thinks about the problem on the level of “chemical reactions”. On that level it doesn’t matter what heats the food, so it’s hard to tell the difference between exploding the oven and using the oven in other ways.
The bad news is that the “taboo” technique (replacing a concept with its components, i.e. “unpacking” it) may fail to help, because A doesn’t know exactly how to turn the oven on or exactly how the oven heats the food. Her idea is very useful if you try it, but it doesn’t come with a set of specific steps.
And the worst part is that A may not be there in the first place. There may be no one around to even prod you into using your oven differently.
I think rationality doesn’t have a general cure for this, yet it may be one of the most important problems of human reasoning. The whole of human knowledge is diseased with it: our knowledge is worse than Swiss cheese, and we don’t even try to fill the gaps.
Any good idea that was misunderstood and forgotten was forgotten because of this. Any good argument that was ignored and ridiculed was ignored because of this. It all got lost in the gaps.
Metrics
I think one way to resolve such misunderstandings is to introduce metrics for comparing ideas, then talk about something akin to probability distributions over those metrics. A could say:
“Instruments have parts with different functions. The functions are not the same, even though they may intersect and be formulated in terms of each other:
Some parts create the effect of the instrument. E.g. the head of a hammer when it smashes a nail.
Some parts control the effect of the instrument. E.g. the handle of a hammer when a human aims it at a nail.
In practice, some parts of the instrument realize both functions. E.g. the handle of a hammer actually allows you not only to control the hammer, but also to speed up the hammer more effectively.
When we blow up the oven, we use 99% of the first function of the oven. But I believe we can use 80% of the second function and 20% of the first.”
Complicated Ideas
Let’s explore some ideas to practice attaching ideas to “levels” of a problem and seeking “gaps”. “(gap)” marks an idea the author didn’t consider or didn’t write about.
Two of those ideas are from math. Maybe I shouldn’t have used them as examples, but I wanted the examples to be diverse.
(1) “Expected Creative Surprises” by Eliezer Yudkowsky. There are two types of predictability:
Predictability of a process.
Predictability of its final outcomes.
Sometimes they’re the same thing. But sometimes you have:
An unpredictable process with predictable final outcomes. E.g. when you play chess against a computer: you don’t know what the computer will do to you, but you know that you will lose.
(gap) A predictable process with unpredictable final outcomes. E.g. when you don’t have enough memory to remember all past actions of the process, yet the final outcome is determined by those actions.
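The chess case (an unpredictable process with a predictable final outcome) can be miniaturized in code. This is a toy sketch of my own, not anything from Yudkowsky’s post: a countdown whose individual steps are random but whose destination is certain.

```python
import random

def noisy_countdown(start: int, seed: int) -> int:
    """An unpredictable process with a predictable final outcome:
    each step subtracts a random amount, but the process always ends at 0."""
    rng = random.Random(seed)
    value = start
    while value > 0:
        value -= rng.randint(1, 3)  # the individual steps are unpredictable...
    return max(value, 0)            # ...but the destination never is

# Different seeds give different trajectories, identical outcomes.
outcomes = {noisy_countdown(100, seed) for seed in range(20)}
print(outcomes)  # → {0}
```

Like the chess computer, you can’t predict any single move, yet you can predict where the whole process lands.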
(2) “Belief in Belief” by Eliezer Yudkowsky. Beliefs exist on three levels:
Verbal level.
First “muscle memory” level. Your anticipations of direct experiences.
Second “muscle memory” level. Your reactions to your own beliefs.
Sometimes a belief exists on all those levels and contents of the belief are the same on all levels. But sometimes you get more interesting types of beliefs, for example:
A person says that “the sky is green”. But the person behaves as if the sky is blue. But the person instinctively defends the belief “the sky is green”.
A not verbally formulated “muscle memory” belief: some intuition you didn’t think to describe or can’t describe.
(gap) A slowly forming “muscle memory” belief created by your reactions to other beliefs: an intuition/preference that has only started to form, and for now exists mainly as a reaction to other intuitions and preferences.
(3) “The Real Butterfly Effect”, explained by Sabine Hossenfelder. There are two ways in which the consequences of an event spread:
A small event affects more and more things with time.
An event on a small scale affects events on larger and larger scales.
In a way it’s kind of the same thing. But in a way it’s not:
One Butterfly Effect means sensitivity to small events (butterflies).
Another Butterfly Effect says that there’s an infinity of smaller and smaller events (butterflies), and even if you account for them all, you still face a hard time limit on prediction.
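The prediction time limit can be demonstrated with the chaotic logistic map. This is a standard toy system, not taken from Hossenfelder’s explanation, and the constants are arbitrary: two trajectories start almost identically, and we count the steps until they visibly disagree.

```python
def logistic(x: float, steps: int, r: float = 4.0) -> float:
    """Iterate the chaotic logistic map x -> r*x*(1-x) for `steps` steps."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two initial conditions differing by a "butterfly" of 1e-12:
a, b = 0.2, 0.2 + 1e-12

# First step at which the two trajectories differ by more than 0.1:
diverged_at = next(n for n in range(200)
                   if abs(logistic(a, n) - logistic(b, n)) > 0.1)
print(diverged_at)
```

Because the error grows roughly exponentially, shrinking the initial uncertainty by a factor of a million only postpones divergence by a couple dozen steps: a hard time limit, not a precision problem.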
(4) “P=NP, relativisation, and multiple choice exams”, Baker-Gill-Solovay theorem explained by Terence Tao. There are two dodgy things:
Cheating.
Simulation of cheating.
Sometimes they are “the same thing”, sometimes they are not.
(5) “Free Will and consciousness experience are a special type of illusion.” An idea of Daniel Dennett. There are 2 types of illusions:
Illusions which are complete lies that don’t correspond to anything real. E.g. a mirage in a desert.
Illusions that simplify a complicated reality. E.g. when you close a program by clicking on it with the arrow: the arrow didn’t really stop the program (even though it kind of did); it’s a drastic simplification of what actually happened (the rapid execution of thousands of lines of code).
Conscious experience is an illusion of the second type, Dennett says. I don’t agree, but I like the idea and think it’s very important.
Somewhat similar to Fictionalism: there are lies and there are “truths of fiction”, “useful lies”. Mathematical facts may be the same type of facts as “Macbeth is insane/Macbeth dies”.
(6) “Tlön, Uqbar, Orbis Tertius” by Jorge Luis Borges. A language has two functions:
First function focuses on describing objects.
Second function focuses on describing properties of objects.
Different languages can have different focus on those functions:
Many human languages focus on both functions equally (fifty-fifty).
Fictional languages of Borges focus 100% on properties. Objects don’t exist/there are far too many particular objects.
(gap) Synesthesia-like “languages”. They focus 80% on properties and 20% on objects.
I think there’s an important gap in Borges’s ideas: Borges doesn’t consider a language with extremely strong, but not absolute emphasis on the second function. Borges criticizes his languages, but doesn’t steelman them.
(7) “Pierre Menard, Author of the Quixote” by Jorge Luis Borges. There are 3 ways to copy a text:
You can copy the text.
You can copy the action of writing the text.
You can copy the thoughts behind the text.
You can change the text. (“anti-option”)
Pierre Menard wants to copy 1% of 1, 98% of 2, and 1% of 3: he wants to arrive at exactly the same text but with completely different thoughts behind the words.
(“gap”) Pierre Menard could also go for 100% of 3 and “anti-99%” of 4: try to write a completely new text by experiencing the same thoughts and urges that created the old one.
Puzzles
You can use the same thinking to analyze/classify puzzles.
Inspired by Pirates of the Caribbean: Dead Man’s Chest. Jack has a compass that can lead him to a thing he desires. Jack wants to find a key. Jack can have those experiences:
Experience of the real key.
Experience of a drawing of the key.
Pure desire for the key.
In order for the compass to work, Jack may need (almost) any mix of those: for example, maybe pure desire is enough for the compass to work. But maybe you need to mix pure desire with seeing at least a drawing of the key (so you have more of a picture of what you want).
Gibbs: And whatever this key unlocks, inside there’s something valuable. So, we’re setting out to find whatever this key unlocks!
Jack: No! If we don’t have the key, we can’t open whatever it is we don’t have that it unlocks. So what purpose would be served in finding whatever need be unlocked, which we don’t have, without first having found the key what unlocks it?
Gibbs: So—We’re going after this key!
Jack: You’re not making any sense at all.
Gibbs: ???
Jack has those possibilities:
To go after the chest. Foolish: you can’t open the chest.
To go after the key. Foolish: you can get caught by Davy Jones.
Gibbs thinks about doing 100% of 1 or 100% of 2 and gets confused when he learns that’s not the plan. Jack thinks about 50% of 1 and 50% of 2: you can go after the chest in order to use it to get the key. Or you can go after the chest and the key “simultaneously” in order to keep Davy Jones distracted and torn between two things.
Braid, Puzzle 1 (“The Ground Beneath Her Feet”). You have two options:
Ignore the platform.
Move the platform.
You need 50% of 1 and 50% of 2: first you ignore the platform, then you move the platform… and rewind time to mix the options.
Braid, Puzzle 2 (“A Tingling”). You have the same two options:
Ignore the platform.
Move the platform.
Now you need 50% of 1 and 25% of 2: you need to rewind time while the platform moves. In this time-manipulating world outcomes may not add up to 100% since you can erase or multiply some of the outcomes/move outcomes from one timeline to another.
Argumentation
You can use the same thing to analyze arguments and opinions. Our opinions are built upon thousands and thousands of “false dilemmas” that we haven’t carefully revised.
For example, take a look at those contradicting opinions:
Humans are smart. Sometimes in very non-obvious ways.
Humans are stupid. They make a lot of mistakes.
Usually people think you have to believe either “100% for 1” or “100% for 2”. But you can believe in all kinds of mixes.
For example, I believe in 90% of 1 and 10% of 2: people may be “stupid” in this particular nonsensical world, but in a better world everyone would be a genius.
Ideas as bits
You can treat an idea as a “(quasi)probability distribution” over some levels of a problem/topic. Each detail of the idea gives you a hint about the shape of the distribution. (Each detail is a bit of information.)
We usually don’t analyze information like this. Instead of cautiously updating our understanding with every detail of an idea we do this:
try to grab all details together
get confused (like Gibbs)
throw most of the details out and end up with an obviously wrong understanding.
Note: maybe you can apply the same idea about “bits” to chess (and other games). Each idea and each small advantage you need to come up with the winning plan is a “bit” of information/advantage. Before you gather enough information/advantage bits, the position looks like a cloud where you don’t see what to do.
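The “cautious updating” described above can be sketched as a toy Bayesian filter over the levels of the oven problem. The levels and all likelihood numbers below are invented for illustration; each detail of A’s idea acts as one piece of evidence about which level the idea lives on.

```python
def update(prior: dict, likelihood: dict) -> dict:
    """One detail of an idea = one piece of evidence about where the idea lives."""
    posterior = {lvl: prior[lvl] * likelihood.get(lvl, 0.0) for lvl in prior}
    total = sum(posterior.values())
    return {lvl: p / total for lvl, p in posterior.items()}

# A's idea from the oven dialogue: which "level" of the oven does it target?
levels = {"chemical reactions": 1/3, "mechanical use": 1/3, "control panel": 1/3}

# Detail 1: "we shouldn't explode the oven at all"
# (how strongly each level predicts hearing this detail)
levels = update(levels, {"chemical reactions": 0.1, "mechanical use": 0.6, "control panel": 0.9})

# Detail 2: "maybe we should touch the oven in multiple places"
levels = update(levels, {"chemical reactions": 0.1, "mechanical use": 0.3, "control panel": 0.9})

print(max(levels, key=levels.get))  # → control panel
```

B’s failure mode, in these terms, is throwing the details away and keeping the prior: every level stays equally plausible, so (P) stays indistinguishable from (T).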
Richness of ideas
I think you can measure “richness” of theories (and opinions and anything else) using the same quasiprobabilities/bits. But this measure depends on what you want.
Compare those 2 theories explaining different properties of objects:
(A) Objects have different properties because they have different combinations of “proto properties”.
(B) Objects have different properties because they have different organization of atoms.
Let’s add a metric to compare 2 theories:
Does the theory explain why objects exist in the first place?
Does the theory explain why objects have certain properties?
Let’s say we’re interested in physical objects. B-theory explains properties through 90% of 1 and 10% of 2: it makes the properties of objects equivalent to the reason for their existence. A-theory explains properties through 100% of 2. B-theory is more fundamental, because it touches on a more fundamental topic (existence).
But if we’re interested in mental objects… B-theory explains only 10% of 2 and 0% of 1. And A-theory may be explaining 99% of 1. If our interests are different A-theory turns out to be more fundamental.
When you look for a theory (or opinion or anything else), you can treat any desire and argument as a “bit” that updates the quasiprobabilities like the ones above.
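The A-theory/B-theory comparison can be made mechanical, at the cost of some caricature. The coverage and interest weights below are loose stand-ins for the percentages in the text, not anything measured: a theory’s “fundamentality” is how much of each metric it covers, weighted by how much we currently care about that metric.

```python
def fundamentality(coverage: dict, interest: dict) -> float:
    """Score a theory: coverage of each metric, weighted by our interest in it."""
    return sum(coverage.get(metric, 0.0) * weight for metric, weight in interest.items())

# Metric 1: explains existence; Metric 2: explains properties.
A = {"existence": 0.0, "properties": 1.0}   # proto-properties theory
B = {"existence": 0.9, "properties": 0.1}   # organization-of-atoms theory

physical = {"existence": 0.7, "properties": 0.3}  # for physical objects we care about existence
mental   = {"existence": 0.1, "properties": 0.9}  # for mental objects, mostly about properties

print(fundamentality(B, physical) > fundamentality(A, physical))  # → True
print(fundamentality(A, mental) > fundamentality(B, mental))      # → True
```

Which theory counts as “more fundamental” flips when the interest weights flip, which is the point of the section: richness is relative to what you want.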
Discussion
We could help each other to find gaps in our thinking! We could do this in this thread.
Gaps of Alignment
I want to explain what I perceive as missed ideas in Alignment. And discuss some other ideas.
(1) You can split the possible effects of an AI’s actions into three domains. All three are different (and come with different ideas), even though they partially intersect and can be formulated in terms of each other. Traditionally we focus on the first two domains:
(Not) accomplishing a goal. “Utility functions” are about this.
(Not) violating human values. “Value learning” is about this.
(Not) modifying a system without breaking it. (Not) doing a task in an obviously meaningless way. “Impact measures” are about this.
I think the third domain is mostly ignored, and that’s a big blind spot.
I believe that “human (meta-)ethics” is just a subset of a much broader topic: “properties of (any) systems”. We can translate the method of learning properties of simple systems into a method of learning human values (a complicated system). We can translate the results of learning those simple systems into human moral rules. And many important complicated properties (such as “corrigibility”) have analogues in simple systems.
(2) Another “missed idea”:
Some people analyze human values as a random thing (random utility function).
Some people analyze human values as a result of evolution.
Some analyze human values as a result of people’s childhoods.
Not a lot of people analyze human values as… a result of the way humans experience the world.
“True Love(TM) towards a sentient being” feels fundamentally different from “eating a sandwich”, which could be evidence that human experiences have an internal structure, and that this structure plays a big role in determining values. But few models (perhaps none) take this “fact” into account. Not surprisingly: it would require a theory of human subjective experience. But still, can we just ignore this “fact”?
(3) Preference utilitarianism says:
You can describe the whole of ethics as a (weighted) aggregation of a single microscopic value. This microscopic value is called “preference”.
I think there’s a missed idea here: you could try to describe the whole of ethics as a weighted aggregation of a single… macroscopic value.
(4) Connectionism and Connectivism. I think this is a good example of a gap in our knowledge:
There’s the idea of biological or artificial neurons.
(gap)
There’s the idea that communication between humans is like communication between neurons.
I think one layer of the idea is missing: you could say that concepts in the human mind are somewhat like neurons. Maybe human thinking is like a fractal: it looks the same on all levels.
(5) Bayesian probability. There’s an idea:
You can describe possible outcomes (microscopic things) in terms of each other. Using Bayes’ rule.
I think this idea should have a “counterpart”: maybe you can describe macroscopic things in terms of each other. And not only outcomes. Using something somewhat similar to probabilistic reasoning, to Bayes’ rule.
That’s what I tried to do in this post.