Every AI system is trying to minimize error in some way, regardless of the method. While systems have biases and get stuck in local minima, the algorithms used to develop ML models are rational. The machine doesn't believe things it has no evidence for; in RL, it doesn't develop policies that don't improve reward; training stops once the model stops getting better; and the machine doesn't age or ask its friends for faulty input data. It gets the same data it always gets.
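To make the claim concrete, here is a minimal sketch (my own illustration, not anything from a specific framework) of how mechanically "rational" a training loop is: a one-parameter least-squares fit by gradient descent, with early stopping once the loss stops improving. The function and parameter names are hypothetical.

```python
def train(xs, ys, lr=0.01, patience=5, max_steps=10_000):
    """Fit y ~ w*x by gradient descent on mean squared error."""
    w = 0.0                  # no priors, no beliefs without evidence
    best_loss = float("inf")
    stale = 0
    for _ in range(max_steps):
        n = len(xs)
        # gradient of (1/n) * sum((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
        loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / n
        if loss < best_loss - 1e-12:
            best_loss, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                break        # stop once the model stops getting better
    return w

# Data generated by y = 3x: the fitted weight converges toward 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = train(xs, ys)
```

The loop has no mechanism for refusing to update: every step either reduces the error or triggers termination, which is the narrow sense in which the optimization process itself is rational.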
AI systems can be wrong and suboptimal, but they don't refuse to learn, and they can always be made less wrong.
AI is rationality. Also, in practice, our human meatware is full of hacks. We can't ever really be rational, in the same way we can't ever be good calculators.
In the end, if you want to know what actions to take in the real world, or you want a robotic system to take those actions directly, you need to build an AI system to be rational for you. For anything beyond toy problems, you (and all humans alive) are just too stupid.
I'm not sure I parse and agree with the entirety of this comment, but I do think a reason to keep AI intertwined with a rationality forum is that, yes, the study of AI is importantly intertwined with the study of rationality (and I think this has been a central feature of the LessWrong discussion areas since the beginning of the site).
Well, like, I was in San Diego yesterday when Scott Alexander answered our fan questions.
Rationality is knowing what you don’t know, and being very careful to hedge what you do know.
It doesn't scale. If everyone in the world were rational but we didn't have the possibility of AI, it would be a somewhat better world, but not a hugely better one. Medical advances would still happen at a glacial pace, just slightly less glacial. The world would probably mostly look like the better parts of the EU, since anyone rationally analyzing which government structure works best in practice would have no choice but to conclude theirs is the current best known.
A lot of what makes medical advances glacial is FDA/EMA regulation. More rationality about how to do regulation would be helpful in advancing medical progress.
These regulations were written in blood. A lot of people have died from either poorly tested experimental treatments or treatments that have no effect.
My implicit assumption is that sufficiently advanced AI could fix this, because people would be less likely to just mysteriously die from new side effects, which would make experimenting safer. They would be less likely to die partly because advanced AI could solve the problems of building living mockups of human bodies, so you would have real pre-clinical testing on an actual human body (it's a mockup, so it's probably a bunch of tissues in separate life-support systems). And partly because a more advanced model could react much faster, and in more effective ways, than human healthcare providers, who know a limited number of things to try and, if the patient doesn't respond, just write down the time of death. Like a human Go player facing a move they don't know the response to.
Is AI not itself rationality?