If you call for a core change in epistemology, it sounds like you want more than that. It’s not clear to me what that “more” is.
I’m going to have to do some strategic review on what exactly I’m not being clear about and what I need to say to make it clear.
In case you don’t know, the local LW definition of rationality is: “Behaving in a way that’s likely to make you win.”
Yes, I share that definition, but that’s only the LW definition of instrumental rationality; epistemic rationality on the other hand is making your map more accurately reflect the territory. Part of what I want is to scrap that and judge epistemic matters instrumentally, like I said in the conclusion and addendum.
Still, it’s clear I haven’t said quite enough. You mentioned examples, and that’s kind of what this post was intended to be: an example of applying the sort of reasoning I want to a problem, and contrasting it with epistemic rationality reasoning.
Part of the problem with generating a whole bunch of specific examples is that it wouldn’t help illustrate the change much. I’m not saying that science as it’s practised in general needs to be radically changed. Mostly things would continue as normal, with a few exceptions (like theoretical physics, but I’m going to have to let that particular example stew for a while before I voice it outside of private discussions).
The main target of my change is the way we conceptualise science. Lots of epistemological work focuses on idealised caricatures that are too prescriptive and poorly reflect how we managed to achieve what we did in science. And I think that having a better philosophy of science will make thinking about some problems in existential risk, particularly FAI, easier.
You mentioned examples, and that’s kind of what this post was intended to be: an example of applying the sort of reasoning I want to a problem, and contrasting it with epistemic rationality reasoning.
Applying it to what problem? (If you mean the physics posts you linked to, I need more time to digest it fully)
The main target of my change is the way we conceptualise science.
Nobody actually conceptualises science as being about deriving “pink is my favorite color and it isn’t” → “causality doesn’t work”.
Lots of epistemological work focuses on idealised caricatures that are too prescriptive and poorly reflect how we managed to achieve what we did in science.
Then pick one of those caricatures and analyse in detail how your epistemology leads to different thinking about the issue.
And I think that having a better philosophy of science will make thinking about some problems in existential risk, particularly FAI, easier.
Yes, obviously having a better philosophy of science would be good.
Applying it to what problem? (If you mean the physics posts you linked to, I need more time to digest it fully)
No, not that comment, I mean the initial post. The problem is handling mathematical systems in an epistemology. A lot of epistemologies have a hard time with that because of ontological issues.
Nobody actually conceptualises science as being about deriving “pink is my favorite color and it isn’t” → “causality doesn’t work”.
No, but many people hold the view that you can talk about valid statements as constraining ontological possibilities, including the Eliezer of 2012. If you read the Highly Advanced Epistemology posts on math, he reasons about particular logical laws constraining how the physics of time and space work in our universe. And the view is very old, going back to before Aristotle and running through Leibniz to the present.
No, not that comment, I mean the initial post. The problem is handling mathematical systems in an epistemology.
Handling mathematical systems in an epistemology is, in my view, a topic but not a specific problem. Within that topic there are probably a bunch of practical problems.
Molecular biology is a subject. Predicting protein folding results is a specific problem.
If we look at FAI, writing a bot that performs well in prisoner’s dilemma tournaments where the bots can verify each other’s source code is a specific problem.
There are also problems in the daily business of doing science where the scientist has to decide between multiple possible courses of action.
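To make the source-code-verification example concrete, here is a minimal toy sketch. Everything in it is my own invention for illustration (the names CLIQUE_BOT, DEFECT_BOT, run_bot, and play_round are hypothetical, not any real tournament’s API): each bot is a string of source code, and the crudest possible “verification” is checking whether the opponent is an exact copy of yourself.

```python
# Toy source-code-sharing prisoner's dilemma. Each bot is a string of
# Python defining move(opponent_source) -> "C" (cooperate) or "D" (defect).

CLIQUE_BOT = '''
def move(opponent_source):
    # Cooperate only with an exact copy of myself; the harness
    # injects my own source as the global MY_SOURCE.
    return "C" if opponent_source == MY_SOURCE else "D"
'''

DEFECT_BOT = '''
def move(opponent_source):
    return "D"  # defect unconditionally
'''

def run_bot(bot_source: str, opponent_source: str) -> str:
    """Execute a bot's source and ask it for one move."""
    env = {"MY_SOURCE": bot_source}  # let the bot inspect its own code
    exec(bot_source, env)            # defines move() inside env
    return env["move"](opponent_source)

def play_round(src_a: str, src_b: str) -> tuple:
    """Each bot sees the other's full source before choosing a move."""
    return run_bot(src_a, src_b), run_bot(src_b, src_a)
```

Two copies of CLIQUE_BOT cooperate with each other, while CLIQUE_BOT defects against DEFECT_BOT. What makes the real problem interesting is going beyond exact string-matching, e.g. proving that the opponent will cooperate rather than demanding an identical copy.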