Your first problem is that you need a theory of just how statements relate to the state of the world. Have you read Wittgenstein’s Philosophical Investigations?
Overall, this basically sounds like analytical philosophy plus 1970s-style AI. Lots of people have probably figured this would be a nice thing to have, but once you drop out of the everyday understanding of language and try to get to the bottom of what’s really going on, you end up in the same morass that AI research and modern philosophy are stuck in.
Thanks for the reply.
I haven’t read anything besides overviews of (or takes on) Wittgenstein, but if you think it’s worthwhile I’ll definitely give it a shot.
I can’t say that I’m familiar with the morass you speak of. I work in clinical medicine and tend to have only a 10,000-mile view of philosophy. Can you maybe elaborate on what you see as the problem?
I really am mostly just anxious not to waste my time on things that have been done before and failed.
You might want to take a look at the A Human’s Guide to Words sequence. (Or, for a summary, see just the last post in that sequence: 37 Ways That Words Can Be Wrong.)
I read “37 Ways...”. Thanks. I think I understand what you mean now.
I think those would definitely be the sorts of problems I would run into if I were to do this via a philosophy PhD (something I’ve thought about, but don’t think I would be very likely to pursue) or by building an AI algorithm.
I think they are problems I would need to be cognizant of, but I believe I have a workaround that still lets me create something valuable, though maybe not something that would satisfy philosophers.
The problem is that we think statements have a fairly straightforward relation to reality because we can generally make sense of them quite easily. In reality, that ease comes from a lot of hidden work: our brains improvise on the spot every time they need to fit a given sentence to a given state of the world. Nobody really appreciated this until people started trying to build AIs that do anything similar and repeatedly ended up with systems that couldn’t distinguish between what is realistically plausible and what is incoherent nonsense.
I’m not really sure how to communicate this effectively beyond gesturing at the sorry history of the artificial intelligence research program from the 1950s onward, despite thousands of extremely clever people putting their minds to it. The sequence ESrogs suggests in the sibling reply also deals with stuff like this.