I’m not sure I fully parse or agree with the entirety of this comment, but I do think a reason to keep AI intertwined with a rationality forum is that yes, the study of AI is importantly intertwined with the study of rationality (and I think this has been a central feature of the LessWrong discussion areas since the beginning of the site).
Well like I was in San Diego yesterday when Scott Alexander answered our fan questions.
Rationality is knowing what you don’t know, and being very careful to hedge what you do know.
It doesn’t scale. If everyone in the world were rational but we didn’t have the possibility of AI, it would be a somewhat better world, but not a hugely better one. Medical advances would still happen at a glacial pace, just slightly less glacial. The world would probably mostly look like the better parts of the EU, since anyone rationally analyzing which government structures work best in practice would have little choice but to conclude theirs is the current best known.
A lot of what makes medical advances glacial is FDA/EMA regulation. More rationality about how to do regulation would be helpful in advancing medical progress.
These regulations were written in blood. A lot of people have died from either poorly tested experimental treatments or treatments that have no effect.
My implicit assumption is that sufficiently advanced AI could fix this, because people would be less likely to just mysteriously die from new side effects, which would make experimentation safer. There are two reasons they would be less likely to die. Partly, advanced AI could solve the problems of building living mockups of human bodies, giving you real preclinical testing on an actual human body (it’s a mockup, so it’s probably a bunch of tissues in separate life-support systems). And partly, a more advanced model could react much faster and in more effective ways than human healthcare providers, who know a limited number of things to try and, if the patient doesn’t respond, just write down the time of death — like a human Go player facing a move they don’t know the response to.