Nothing needs to be justified in the philosophical sense of the term.
I think justification is important, especially in matters like AI design, as an uFAI could destroy the world.
In the case of AI design in general, consider the question "Why should we program an AI with a prior biased towards simpler theories?" I don't think anyone would settle for an answer no more detailed than "It's our best guess right now" if they were certain a more detailed answer existed.