I see. I think at this point we should be trying to figure out how to answer such questions in principle, with a view to eventually handing off the task of actually answering them to an FAI, or just to our future selves augmented with a much stronger theoretical understanding of what constitutes a correct answer to these questions. Arguing over the answers now, with our very limited understanding of the principles involved, based on our “Occamian intuitions”, does not seem like a good use of time. Do you agree?
It seems that people build intuitions about how general super-high-level philosophy is supposed to be done by examining their minds as their minds examine specific super-high-level philosophical problems. I guess the difference is that in one case you have an explicit goal of being very reflective about the processes by which you’re doing philosophical reasoning, whereas the sort of thing I’m talking about in my post doesn’t imply a goal of understanding how we’re trying to understand cosmology (for example). So yes, I agree that arguing over the answers is probably a waste of time, but arguing over which ways of approaching the answers are justified seems very fruitful. (I’m not really saying anything new here, I know; most of Less Wrong is about applying cognitive science to philosophy.)
As a side note, it seems intuitively obvious that Friendliness philosophers and decision theorists should try to do what Tenenbaum and co. do when figuring out what Bayesian algorithms their brains might be approximating in various domains, sometimes by reflecting on those algorithms in action. Training this skill on toy problems (like the work computational cognitive scientists have already done) in order to get a feel for how to do similar reflection on more complicated algorithms/intuitions (like why this or that way of slicing up decision-theoretic policies into probabilities and utilities seems natural, for instance) seems like a potentially promising way to train our philosophical skills.
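To make “toy problems” concrete: here is a minimal, self-contained sketch (my own illustration, not taken from Tenenbaum’s papers or from this discussion) of the style of Bayesian concept-learning model that line of work fits to human judgments. It is a simplified “number game”: given a few positive examples of a hidden number concept, the size principle makes the smallest consistent hypothesis the most probable. The particular hypothesis space and domain are arbitrary choices for illustration.

```python
# Toy Bayesian concept learning over numbers 1..100 (illustrative only).
# Likelihood uses the size principle: examples drawn uniformly from the
# true concept, so smaller consistent concepts score higher per example.

from fractions import Fraction

DOMAIN = range(1, 101)
HYPOTHESES = {
    "even":            {n for n in DOMAIN if n % 2 == 0},
    "odd":             {n for n in DOMAIN if n % 2 == 1},
    "powers of 2":     {n for n in DOMAIN if (n & (n - 1)) == 0},
    "multiples of 10": {n for n in DOMAIN if n % 10 == 0},
    "numbers 1-100":   set(DOMAIN),
}

def posterior(data, prior=None):
    """P(hypothesis | data), assuming examples are sampled uniformly from the concept."""
    prior = prior or {h: Fraction(1, len(HYPOTHESES)) for h in HYPOTHESES}
    scores = {}
    for name, members in HYPOTHESES.items():
        if all(x in members for x in data):
            # Size principle: likelihood (1/|h|)^n favors the smallest consistent concept.
            scores[name] = prior[name] * Fraction(1, len(members)) ** len(data)
        else:
            scores[name] = Fraction(0)
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

if __name__ == "__main__":
    # Seeing 16, 8, 2, 64 puts nearly all the posterior on "powers of 2",
    # matching the intuitive human inference.
    for name, p in sorted(posterior([16, 8, 2, 64]).items(), key=lambda kv: -kv[1]):
        print(f"{name:>16}: {float(p):.4f}")
```

The point of playing with models like this is less the answer than the practice of watching one’s own inference while formalizing it, which is the skill the paragraph above suggests transferring to messier decision-theoretic intuitions.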
I think we agree that debating, e.g., what sorts of game-theoretic interactions between AIs would likely result in them computing worlds like ours is probably a fool’s errand, insofar as we hope to get precise, accurate answers in themselves rather than better intuitions about how to get an AI to do similar reasoning.