Well, yeah. This sequence is more to establish what the actual ideal is that should be approximated, rather than “what’s the best approximation given computational limits” (which is a real question; I don’t deny that).
As I mentioned in another comment, I originally considered naming this sequence “How Not to Be Stupid (given unbounded computational resources)”. Actually, the name was more due to the nature of the primary argument the math will be built on (i.e., “don’t automatically lose”).
I probably should have stated that explicitly somewhere. If nothing else, think of it also as separation of levels. This is describing what an agent should do, rather than what it should think about when it’s doing the doing. It’s more “whatever computations it does, the net outcome should be this behavior”
I’m not sure what you mean by “a first approximation theory of values”. My goal is basically to give a step by step construction of Bayesian decision theory and epistemic probabilities. ie, an argument for “why this math is the Right Way”, not “these are the things that it is moral to value”
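For concreteness, here is a toy sketch (my own illustration, not part of the sequence) of the endpoint such a construction argues for: once epistemic probabilities and utilities are in hand, the “Right Way” math reduces to picking the act with the highest probability-weighted utility.

```python
def expected_utility(probs, utils):
    """Probability-weighted average of utilities for one act."""
    return sum(p * u for p, u in zip(probs, utils))

# Made-up numbers: two acts under uncertainty about rain, P(rain) = 0.3.
acts = {
    "take umbrella":  expected_utility([0.3, 0.7], [0.8, 0.6]),
    "leave umbrella": expected_utility([0.3, 0.7], [0.0, 1.0]),
}

# Bayesian decision theory's recommendation: maximize expected utility.
best = max(acts, key=acts.get)
print(best)  # → leave umbrella (0.70 beats 0.66)
```

The argument of the sequence is about why this rule, and not some other, is forced on a non-stupid agent; the numbers themselves are arbitrary.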
Well, uncomputable is a whole lot worse than approximating due to computational limits. You can’t even generally solve the problem with unbounded resources; the best you can hope for is solutions in some narrow domains of application.
If you make it much broader, you are guaranteed halting-problem-type situations where you won’t be able to consistently assign preferences.
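A minimal sketch of where this bites (my own toy example, with made-up names): suppose an agent’s utility over programs is 1 if the program halts and 0 otherwise. Any computable procedure can only certify halting within some step budget; a timeout never proves non-termination, so preferences can only be assigned on the narrow domain where both behaviors get settled.

```python
def halts_within(program, budget):
    """Run `program` (a zero-argument generator function) for at most
    `budget` steps. Returns True if it halted, or None if undecided --
    running out of budget never proves non-termination."""
    it = program()
    for _ in range(budget):
        try:
            next(it)
        except StopIteration:
            return True
    return None

def prefer_halting(prog_a, prog_b, budget=10_000):
    """Toy preference: utility 1 for halting, 0 for looping forever.
    Only settled on the narrow domain where both programs' behavior
    was decided within the budget; elsewhere, no consistent answer."""
    a, b = halts_within(prog_a, budget), halts_within(prog_b, budget)
    if a is None or b is None:
        return "undecided"    # the halting-problem obstacle
    return "indifferent"      # both halted: equal utility

def halting():
    yield from range(3)       # halts after 3 steps

def looping():
    while True:               # never halts
        yield

print(prefer_halting(halting, halting))  # → indifferent
print(prefer_halting(halting, looping))  # → undecided
```

Widening the budget shrinks the undecided region but can never eliminate it, which is the sense in which only narrow domains admit consistent preference assignment.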
“Theory of values” (i.e., a model of how people do or should value or decide things) is fairly interchangeable with whatever term you prefer. “First approximation” meaning that I know you’re keeping the math simple, so I don’t expect you to delve too deeply into exactly how you should value or prefer things in order to achieve goals more effectively, including details like human cognitive limits, faulty information, communication, faulty goals, and so on.
I just feel that talking about Bayesian decision theory as the right way is reasonable, but it’s more the beginning of the story, rather than the end.