Intuitionistic logic can be interpreted as the logic of finite verification.
Truth in intuitionistic logic is just provability. If you assert A, it means you have a proof of A. If you assert ¬A, then you have a proof that A implies a contradiction. If you assert A ⇒ B, then you can produce a proof of B from any proof of A. If you assert A ∨ B, then you have a proof of at least one of A or B. Note that the law of excluded middle fails here: we can’t assert A ∨ ¬A when we have neither a proof of A nor a proof that A implies a contradiction.
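To make that reading concrete, here’s a small sketch in Haskell (my own illustration via the propositions-as-types reading, not anything from a particular source): a proof of A ⇒ B is a function, a proof of A ∨ B is a tagged value, ¬A is a function into the empty type, and excluded middle corresponds to a program we can’t actually write.

```haskell
import Data.Void (Void, absurd)

-- ¬A: a proof of A would yield a contradiction.
type Not a = a -> Void

-- A ⇒ B as a function: from A ∧ ¬A we can prove anything (ex falso).
exFalso :: (a, Not a) -> b
exFalso (a, notA) = absurd (notA a)

-- A ∨ B as Either: an assertion carries a proof of one disjunct.
-- The double negation of excluded middle IS provable...
demNotNot :: Not (Not (Either a (Not a)))
demNotNot k = k (Right (\a -> k (Left a)))

-- ...but excluded middle itself is not: there is no total program of
-- type Either a (Not a) for arbitrary a, matching the failure of
-- A ∨ ¬A described above.
```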
In all cases, the assertion of a formula must correspond to a proof, proofs being (of course) finite. Using this idea of finite verification is a nice way to develop topology for computer science and formal epistemology (see Topology via Logic by Steven Vickers). Computer science is concerned with verification as proofs and programs (and the Curry-Howard isomorphism comes in handy there), and formal epistemology is concerned with verification as observations and scientific modeling.
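To give a flavour of the Vickers connection (a toy sketch of my own, not taken from the book): read a finitely verifiable property as a test that terminates with True when the property holds but may run forever when it doesn’t. Such properties are closed under finite conjunction and, thanks to laziness, under infinite disjunction, and that finite-meets/arbitrary-joins asymmetry is exactly the algebra of open sets.

```haskell
-- A verifiable property: answers True after finitely many steps when
-- it holds, and may diverge when it does not (semidecidable).
type Verifiable a = a -> Bool

-- Finite conjunction preserves verifiability: run both tests.
andV :: Verifiable a -> Verifiable a -> Verifiable a
andV p q x = p x && q x

-- Infinite disjunction preserves it too: laziness lets `or` return
-- True as soon as a test succeeds, without evaluating the rest.
orV :: [Verifiable a] -> Verifiable a
orV ps x = or [p x | p <- ps]

-- Negation does NOT preserve it: confirming "p never holds" would
-- take an infinite check. Finite meets, arbitrary joins, no
-- complements: the signature of a topology's open sets.
```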
That isn’t exactly a specific example, but a class of examples. Research on this is currently very active.
Intuitionistic logic can be interpreted as the logic of finite verification.
I don’t care about how it can be interpreted but about whether it’s useful. I asked for a practical example, something useful for guiding real-world actions. Maybe an application in biology or physics.
If you want to be “pragmatic” then it makes sense to look at whether your philosophy is actually applicable to real-world problems.
I think there’s a bit of a misunderstanding going on here, though, because I am perfectly okay with people using classical logic if they like. Classical logic is a great way to model circuits, for example, and it provides some nice reasoning heuristics. There’s nothing in my position that commits us to abandoning it entirely in favour of intuitionistic logic.
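As a quick illustration of the circuits point (my own example): every wire is definitely high or low, so two-valued classical logic models hardware directly, excluded middle and all.

```haskell
-- A half-adder in classical two-valued logic: the sum bit is XOR and
-- the carry bit is AND of the two input bits.
halfAdder :: Bool -> Bool -> (Bool, Bool)
halfAdder a b = (a /= b, a && b)  -- (sum, carry)

-- Here `a || not a` really is always True for a wire, which is why
-- classical reasoning fits circuit design so well.
```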
Intuitionistic logic is applicable to at least three real-world problems: formulating foundations for math, verifying programs, and computerised theorem-proving. The last two in particular will have applications in everything from climate modeling to population genetics to quantum field theory.
As it happens, mathematician Andrej Bauer wrote a much better defence of doing physics with intuitionistic logic than I could have: http://math.andrej.com/2008/08/13/intuitionistic-mathematics-for-physics/
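On the program-verification point, here’s a minimal sketch of the underlying idea (a generic illustration, not any particular tool’s API): encode a correctness property in a type, so that the compiler acts as a proof checker and rejects programs that could fail at runtime.

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- Natural numbers, promoted to the type level to track list length.
data Nat = Zero | Succ Nat

-- A vector whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Zero a
  VCons :: a -> Vec n a -> Vec ('Succ n) a

-- safeHead only accepts provably non-empty vectors: the "proof" of
-- non-emptiness lives in the type, so no runtime check can fail.
safeHead :: Vec ('Succ n) a -> a
safeHead (VCons x _) = x
```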
If one wants to understand an abstract principle it’s very useful to illustrate the principle with concrete practical examples.
I don’t think that anyone on LW thinks that we shouldn’t have a few constructivist mathematicians around to do their job and produce a few proofs that advance mathematics. I don’t really care about how the proofs of the math I use are derived, provided I can trust them.
If you call for a core change in epistemology it sounds like you want more than that. To me it’s not clear what that more happens to be.
In case you don’t know, the local LW definition of rationality is: “Behaving in a way that’s likely to make you win.”
If you call for a core change in epistemology it sounds like you want more than that. To me it’s not clear what that more happens to be.
I’m going to have to do some strategic review on what exactly I’m not being clear about and what I need to say to make it clear.
In case you don’t know, the local LW definition of rationality is: “Behaving in a way that’s likely to make you win.”
Yes, I share that definition, but that’s only the LW definition of instrumental rationality; epistemic rationality, on the other hand, is making your map more accurately reflect the territory. Part of what I want is to scrap that and judge epistemic matters instrumentally, like I said in the conclusion and addendum.
Still, it’s clear I haven’t said quite enough. You mentioned examples, and that’s kind of what this post was intended to be: an example of applying the sort of reasoning I want to a problem, and contrasting it with epistemic rationality reasoning.
Part of the problem with generating a whole bunch of specific examples is that it wouldn’t help illustrate the change much. I’m not saying that science as it’s practised in general needs to be radically changed. Mostly things would continue as normal, with a few exceptions (like theoretical physics, but I’m going to have to let that particular example stew for a while before I voice it outside of private discussions).
The main target of my change is the way we conceptualise science. Lots of epistemological work focuses on idealised caricatures that are too prescriptive and poorly reflect how we managed to achieve what we did in science. And I think that having a better philosophy of science will make thinking about some problems in existential risk, particularly FAI, easier.
You mentioned examples, and that’s kind of what this post was intended to be: an example of applying the sort of reasoning I want to a problem, and contrasting it with epistemic rationality reasoning.
Applying it to what problem? (If you mean the physics posts you linked to, I need more time to digest it fully)
The main target of my change is the way we conceptualise science.
Nobody actually conceptualises science as being about deriving “causality doesn’t work” from “pink is my favorite color and it isn’t”.
Lots of epistemological work focuses on idealised caricatures that are too prescriptive and poorly reflect how we managed to achieve what we did in science.
Then pick one of those caricatures and analyse in detail how your epistemology leads to different thinking about the issue.
And I think that having a better philosophy of science will make thinking about some problems in existential risk, particularly FAI, easier.
Yes, obviously having a better philosophy of science would be good.
Applying it to what problem? (If you mean the physics posts you linked to, I need more time to digest it fully)
No, not that comment, I mean the initial post. The problem is handling mathematical systems in an epistemology. A lot of epistemologies have a hard time with that because of ontological issues.
Nobody actually conceptualises science as being about deriving “causality doesn’t work” from “pink is my favorite color and it isn’t”.
No, but many people hold the view that you can talk about valid statements as constraining ontological possibilities. This includes the Eliezer of 2012: if you read the Highly Advanced Epistemology posts on math, he does reason about particular logical laws constraining how the physics of time and space work in our universe. And the view is very old, going back to before Aristotle and running through Leibniz to the present.
No, not that comment, I mean the initial post. The problem is handling mathematical systems in an epistemology.
Handling mathematical systems in an epistemology is, to my mind, a topic but not a specific problem. Within that topic there are probably a bunch of practical problems.
Molecular biology is a subject. Predicting protein folding results is a specific problem.
If we look at FAI, writing a bot that performs well in prisoner’s dilemma tournaments where the bots can verify each other’s source code is a specific problem.
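For concreteness, here’s a minimal sketch of one simple strategy from such source-sharing tournaments, a “clique bot” that cooperates exactly with textual copies of itself (the names and types are my own; real tournament interfaces differ):

```haskell
data Move = Cooperate | Defect
  deriving (Eq, Show)

-- Given our own source and the opponent's source, cooperate only with
-- exact copies of ourselves. The "verification" is the crudest
-- possible check: textual equality of the two programs.
cliqueBot :: String -> String -> Move
cliqueBot mySource theirSource
  | theirSource == mySource = Cooperate
  | otherwise               = Defect
```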
There are also problems in the daily business of doing science where the scientist has to decide between multiple possible courses of action.