Yes, this is a retrospective example. Once I already know what happens, I can say that a toaster turns bread into toast. If you start making predictive examples, things get more complicated, as you have mentioned.
It still helps to have an understanding of what you don't know. And in the case of AI, an understanding of what you are deciding not to know (for now) can help you weigh the risk involved in playing with AI of unclear potential.
i.e.
AI with defined CEV → what happens next → humans are fine.
seems like a bad chain to expect a good outcome from. That said, maybe we can work on a better process for defining CEV.