If it helps: it’s a fairly common belief in this community that a general-purpose optimization tool is both far superior to, and more interesting to talk about than, a variety of special-purpose tools.
Of course, that doesn’t mean you have to be interested in general-purpose optimization tools; if you’re more interested in decision theory for dinner-menu or economic planners, by all means post about that if you have something to say.
But I suspect there are relatively few communities in which “why are you all so interested in such a stupid and uninteresting topic?” will get you much community approval, and this isn’t one of them.
I’m interested in general-purpose optimizers, but I bet they will evolve from AIs that were more special-purpose to begin with. E.g., IBM Watson moving from a Jeopardy!-playing machine to a medical diagnostic assistant, with a lot of the upfront work going into rapid NLP for the Jeopardy! “questions”.
Also, I’ve seen no reason here to believe that Newcomb-like problems give insight into how to develop decision theories that let us solve real-world problems. It seems like arguing about corner cases. Can anyone point to a practical problem that TDT fails to solve because it fails to solve these other problems?
Beyond this, my belief is that without formalizing and programming these decision frameworks, we learn very little. Asking what xDT does in some abstract situation has, so far, seemed very hand-wavy. Furthermore, it seems to me that the community is drawn to these problems because they are deceptively easy to state and talk about online, but minds are inherently complex, opaque, and hard to reason about.
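To make “formalize and program it” concrete, here is a minimal sketch of the kind of thing I mean, assuming a toy Newcomb setup with a single predictor-accuracy parameter. This is not TDT, and not anyone’s canonical formalization; it just spells out the bare expected-value arithmetic behind an evidential-style and a causal-style answer, with made-up parameter names.

```python
# Toy Newcomb setup: an opaque box contains $1,000,000 iff the predictor
# predicted one-boxing; a transparent box always contains $1,000.
# "p" is the predictor's accuracy; "q" is the fixed chance the opaque box
# was filled, which the causal-style calculation treats as independent of
# the current choice. Illustrative only.

PRIZE_OPAQUE = 1_000_000
PRIZE_TRANSPARENT = 1_000

def evidential_values(p):
    """Condition on the action as evidence about what was predicted."""
    one_box = p * PRIZE_OPAQUE                            # predictor right about one-boxers with prob p
    two_box = (1 - p) * PRIZE_OPAQUE + PRIZE_TRANSPARENT  # predictor wrong about two-boxers with prob 1-p
    return {"one-box": one_box, "two-box": two_box}

def causal_values(q):
    """Treat the box contents as already fixed: two-boxing dominates by $1,000."""
    one_box = q * PRIZE_OPAQUE
    two_box = q * PRIZE_OPAQUE + PRIZE_TRANSPARENT
    return {"one-box": one_box, "two-box": two_box}

if __name__ == "__main__":
    for p in (0.5, 0.9, 0.99):
        print(f"p={p}  evidential: {evidential_values(p)}  causal: {causal_values(p)}")
```

Even a toy like this pins down what each calculation is actually computing, which is exactly the step that stays vague when the argument is carried out only in prose.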
I’m having a hard time understanding how correctly solving Newcomb-like problems is expected to advance the field of general optimizers. It seems out of proportion to the problems at hand to expect a decision theory to handle problems of that level of sophistication when the current theories don’t obviously “solve” questions like “what should we have for lunch?”. I get the feeling that supporters of this research assume that xDT can, of course, solve the easy problems, so let’s move on to the hard ones. And I think the evidence for that assumption is lacking.
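For concreteness, here is what the “easy” lunch question looks like when forced into bare expected-utility form. The menu, probabilities, and utilities below are made up; the point is that the decision-theory step is trivial once those numbers are supplied, and everything that makes the question hard sits outside the formalism.

```python
# Made-up menu, probabilities, and utilities -- placeholders, not data.
# The "decision theory" is one line of expected-utility arithmetic; the
# hard part is where these numbers would actually come from.

options = {
    "sandwich": [(1.0, 5.0)],                    # known quantity
    "new thai place": [(0.7, 8.0), (0.3, 1.0)],  # might be great, might be a miss
    "skip lunch": [(1.0, 2.0)],
}

def expected_utility(outcomes):
    return sum(prob * utility for prob, utility in outcomes)

scores = {name: expected_utility(outcomes) for name, outcomes in options.items()}
print(max(scores, key=scores.get), scores)
```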
That’s fair.
Again, if you are interested in more discussion about automated optimization on the level of “what should we have for lunch?” I encourage you to post about it; I suspect a lot of other people are interested as well.
Yeah, I might, but here I was just surprised by the down-voting of a contrary opinion. That seems like the thing we ought to foster, not hide.
As I tried to express in the first place, I suspect what elicited the disapproval was not the contrary opinion, but the rudeness.
Sorry. It didn’t seem rude to me. I’m just frustrated with where I see folks spending their time.
My apologies to anyone who was offended.