Would I, epistemically speaking, be better off adopting the beliefs held by all those who have recently voiced their worries about AI risks? If I did that then I would end up believing that I was living in a simulation, in a mathematical universe, [...]
Do “all those who have recently voiced their worries about AI risks” actually believe we live in a simulation in a mathematical universe? (“Or something along these lines...”?)
Although I don’t know enough about Stuart Russell to be sure, he seems rather down to earth. Shane Legg also seems reasonable. So does Laurent Orseau. The caveat is that these people also seem much less extreme in their views on AI risks.
I certainly do not want to discourage researchers from being cautious about AI. But what currently seems to be happening is the formation of a loose movement of people who reinforce their extreme beliefs about AI through mutual reassurance.
There are whole books now about this topic. What’s missing are the empirical or mathematical foundations. The literature just consists of non-rigorous arguments that are, at best, internally consistent.
So even if we were only talking about sane domain experts, if they solely engage in unfalsifiable philosophical musings, then the whole endeavour is suspect. And currently I don’t see them making any predictions that are less vague or more useful than the second coming of Jesus Christ: there will be an intelligence explosion by a singleton with a handful of known characteristics, revealed to us by Omohundro and repeated by Bostrom. That’s not enough!
I don’t understand how that answers my specific question. Your System 1 may have done a switcheroo on you.