Sorry if this sounds naive, but why try to frame knowledge this way? It seems like you’re jumping through a lot of hoops so you can define it in terms of decision theory primitives but it doesn’t seem a great fit.
I am confused about decision theory, and entering into a new ontology is a way to maybe look at it in a new way and become less confused. This ontology specifically feels promising to me, but that is hard to communicate.
There is an intuition that if you know what you will do, that is because you already decided on your action. However, when you think about proofs, that doesn't work out, and you get logical counterfactuals. This ontology feels closer to telling me why, if you know your action, you have already decided.
Separately, I think decision theory has to be either prior to or simultaneous with epistemics. If you live in a world where you have access to magic if and only if you believe that you can use magic, then you should believe that you can do magic. You can't do that unless decision theory comes before epistemics.
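To make the magic example concrete, here is a minimal sketch in Python (the payoff numbers and the two agent policies are illustrative assumptions, not anything from the thread). When the world's payoff depends on the belief itself, ranking belief states by payoff, the way a decision theory ranks acts, recommends believing; an agent that first fixes its beliefs from prior evidence and only then decides never reaches the self-fulfilling belief.

```python
# Toy "magic" world: you have access to magic iff you believe you do.
# Payoff numbers are illustrative assumptions.

def payoff(believes_in_magic: bool) -> float:
    has_magic = believes_in_magic  # the world inspects the belief itself
    return 10.0 if has_magic else 0.0

# Decision-theory-first: evaluate belief states like acts, by payoff.
print(max([True, False], key=payoff))  # True: adopt the belief

# Epistemics-first: fix belief purely by prior evidence, then decide.
# Before anyone believes, the evidence says magic doesn't work, so this
# agent never reaches the self-fulfilling belief and forfeits the payoff.
evidence_says_magic_works = False
print(payoff(evidence_says_magic_works))  # 0.0
```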
I think decision theory is for situations where the world judges you based on your decision. (We used to call such situations “fair”.) If the world can also judge your internal state, then no decision theory can be optimal, because the world can just punish you for using it.
Do you have any other argument why decision theory should come before epistemics?
1. I think that spurious counterfactuals come from having a proof of what you do before deciding what you do (where "before" is in some weird logical-time sense).
2. I think that the justification for having particular epistemics should come from decision/utility theory, as with the complete class theorems.
3. I think the correct response to Sleeping Beauty is to taboo "belief" and talk about what gambles you should take (see the sketch after this list).
4. I think that we have to at some point think about how to think about what to think about, which requires the decision system influencing the epistemics.
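A minimal sketch of the gamble framing in #3, assuming the standard Sleeping Beauty setup (fair coin; heads means one awakening, tails means two) and an illustrative ticket that pays 1 if the coin landed tails. The break-even ticket price depends only on how the bet is settled: per awakening it is 2/3 (the "thirder" number), per experiment it is 1/2 (the "halfer" number). Once the gambles are priced, there is nothing left for "belief" to adjudicate.

```python
# Sleeping Beauty with "belief" tabooed: price the gambles directly.
# Standard setup: fair coin; Heads -> woken once, Tails -> woken twice.
# At each awakening Beauty may buy, for price p, a ticket paying 1 on Tails.

def ev_settled_per_awakening(p: float) -> float:
    # Every awakening's ticket is paid out.
    # Heads: one awakening, net -p. Tails: two awakenings, each nets 1 - p.
    return 0.5 * (-p) + 0.5 * 2 * (1 - p)

def ev_settled_per_experiment(p: float) -> float:
    # Only one ticket counts per run of the experiment.
    return 0.5 * (-p) + 0.5 * (1 - p)

# Break-even prices recover the two standard answers without ever
# asking what Beauty "believes":
print(ev_settled_per_awakening(2 / 3))   # ~0.0: the "thirder" price
print(ev_settled_per_experiment(1 / 2))  # 0.0: the "halfer" price
```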
#2 and #3 just sound like UDT to me, but #1 and #4 are strong. Thank you! I agree that deciding which theorems to prove next is a great use of decision theory, and would love to see people do more with that idea.
“I think that we have to at some point think about how to think about what to think about”
My inner Eliezer is screaming at me about ultrafinite recursion.
"Separately, I think decision theory has to be either prior to or simultaneous with epistemics."
I take it you mean here that normative decision theory comes before normative epistemics?
Or are you trying to express a position similar to the one I take, but in different language: that phenomena come first? I can very much see a way in which this makes sense if we talk about choice as the same thing as intentional experience, especially since, from the inside, experience feels like making a choice between which possible world (branch) you come to find yourself in.
Yeah, I think I mean normative DT comes before normative epistemics. I guess I have two claims.
The first is that an agent should have its DT system interacting with, or inside, its epistemic system in some way. This is opposed to a self-contained epistemic system at the inner core, with a decision system that acts on the results of the epistemic system.
The second is that we are confused about DT and naturalized world models, and I suspect that progress unpacking that confusion can come from abandoning this “epistemics first, decision second” view and working with both at the same time.
See also my response to cousin_it.
Ah, okay, I think that makes a lot of sense. I actually didn't realize viewing things as epistemics-first was the norm in decision theory, although now that I think about it, the way the model moves complexity into the agent to avoid dealing with it will naturally leave questions of where knowledge comes from underaddressed.
As I stated above, I think a choice-first approach is also sensible because it allows you to work with something fundamental (choice/interaction/phenomena) rather than something created by agents (knowledge). I look forward to seeing where you go with this. Feel free to reach out if you want to discuss this more: I think you are bumping into things I've just had to work through from a different perspective to make progress in my own work, and there is likely more to be learned there.