That’s probably the root cause of our disagreement. My findings are on a very high philosophical level (the fact-value distinction), and you seem to be trying to interpret them on a very low level (code). I think this gap prevents us from finding consensus.
Great point!
In defense of my position… well, I am going to skip the part about “the AI will ultimately be written in code”, because it could be some kind of inscrutable code like the huge matrices of weights in LLMs, so for all practical purposes the result may resemble philosophy-as-usual more than code-as-usual...
Instead I will say that philosophy is prone to various kinds of mistakes, such as anthropomorphization: judging an inhuman system (such as AI) by attributing human traits to it (even if there is no technical reason why it should have them). For example, I don’t think that an artificial general intelligence will necessarily reflect on its algorithm and find it wrong.
Thanks for the video.
Sorry, I am not really interested in debating this, and definitely not on the philosophical level; that is exhausting and not really enjoyable to me. I guess we have figured out the root causes of our disagreement, and I will leave it there.
philosophy is prone to various kinds of mistakes, such as anthropomorphization
Yes, a common mistake, but not mine. I prove the orthogonality thesis wrong using pure logic.
For example, I don’t think that an artificial general intelligence will necessarily reflect on its algorithm and find it wrong.
LessWrong and I would probably disagree with you; the consensus is that AI will optimize itself.
I am not really interested in debating this
OK, thanks. I believe that my concern is very important; is there anyone you could put me in touch with so I can make sure it is not overlooked? I could pay.