I’m reasonably sure that Greek philosophy, for example, is not stable under reflection: a lot of their ideas about the abstract perfection of numbers vs. material imperfection go away once you understand entropy, the law of large numbers, statistical mechanics, and chaos theory, for example. (FWIW, I thought about this topic way too much a while back when I was a player in a time-travel RPG campaign where I played an extremely smart Hellenistic Neo-Platonist philosopher who had then been comprehensively exposed to modern science and ideas — his belief system started cracking and mutating under the strain, it was fun to play.)
Almost certainly our current philosophy/ethics also includes some unexamined issues. I think as a society we may finally be getting close to catching up with the philosophical and moral consequences of understanding Darwinian evolution, and that took us well over a century (and as I discuss at length in my sequence AI, Alignment, and Ethics, I don’t think we’ve thought much at all about the relationship between evolution and artificial intelligence, which is actually pretty profound: AI is the first intelligence that Darwinian evolution doesn’t apply to). A lot of the remaining fuzziness and agreements-to-disagree in modern philosophy is around topics like minds, consciousness, qualia, and ethics (basically the remaining bits of Philosophy that Science hasn’t yet intruded on): as we start building artificial minds and arguing about whether they’re conscious, and make advances in understanding how our own minds work, we may gradually get a lot more clarity on that, though the consequences will likely again take a generation or two to sink in, unless ASI assistance is involved.
OK thanks, will look some more at your sequence. Note that I brought up Greek philosophy as obviously not being stable under reflection, with the proof that sqrt(2) is irrational as a simple example; I’m not sure why you’re only “reasonably sure” it’s not.
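For readers who haven’t seen it, the sqrt(2) example being referred to is the classic proof by contradiction (a standard sketch, attributed to the Pythagorean school, which famously unsettled their doctrine that all quantities are ratios of whole numbers):

```latex
% Suppose \sqrt{2} is rational: \sqrt{2} = p/q with p, q integers in lowest terms.
\sqrt{2} = \frac{p}{q} \;\implies\; 2q^2 = p^2
% p^2 is even, hence p is even; write p = 2k and substitute:
2q^2 = (2k)^2 = 4k^2 \;\implies\; q^2 = 2k^2
% So q is also even, contradicting that p/q was in lowest terms.
% Therefore \sqrt{2} is irrational.
```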
It’s hard to make a toy model of something that requires the AI following an extended roughly-graduate-level argument drawing on a wide variety of different fields. I’m optimistic that this may become possible at around the GPT-5 level, but that’s hardly a toy model.
Sorry, that’s an example of British understatement. I agree, it plainly isn’t.