“topics about which philosophy is still concerned because we don’t or can’t get information that would enable us to have sufficient certainty of answers to allow those topics to transition into science”.
I think that is quite close. I mean the implicit, unquestioned assumptions behind all these discussions: moral realism, computationalism, empiricism, and reductionism all come to mind. These positions cannot be tested or falsified with the scientific method.
“but there’s not really anything here that seems like an argument that would convince anyone who didn’t already agree”
I thought it was best to try even if I am not confident it will make any impact on readers. My attempt is, as you rightly said, to get AI safety researchers to take philosophy more seriously. Most people see it as a pastime they enjoy for intrinsic pleasure. In my opinion, there is a lot of utility in practicing going more meta until we can see the underpinnings of both the problem of x-risk and its solution.
Some of that utility comes from being able to communicate the problem to a more diverse audience at higher fidelity. The rest comes from empowering existing researchers to perhaps make a breakthrough in alignment itself.
A lot of these objects, like values and goals, seem firmly embedded in our ontology. I would like to see people question these things and consider other possibilities.
This exchange between Connor and Joscha seems to be an example: Connor is clearly irritated by the question because it uses philosophy to ask whether we should even bother saving humanity, whether humans are bad by our own standards. I can completely understand how he feels. But notice how Joscha seems to seriously think that the philosophy of what values we hold, and how they are justified, is very important.
In this community it seems to be taken as fact that the direction we align the AI towards is something to be considered only after we figure out how to set a direction at all. We have decoupled these two things. I would like to question these assumptions, and since I am not smart enough on my own, maybe others can try as well. This requires us to unsee the boundaries we are so used to, and to be very careful about which ones we put down.
“In particular, they might unlearn it in narrow contexts related to their immediate work, but then get confused and fail to unlearn it in general, resulting in them getting confused about things like agency and free will.”
Yeah, I was hoping to draw attention to this problem with my post. I love the Embedded Agency comic series. The Cartesian boundary is one such boundary most of us hold, but again, if we want to think about alignment honestly, I think it is worthwhile to train ourselves to unsee that one too.
I will check out your book. I also hope to write something that can help people grok monism and other philosophical ideas they might want to consider in their entirety.