I came across this:
The New Dawn of AI: Federated Learning
"This [edge] update is then averaged with other user updates to improve the shared model."
I do not know exactly how that is meant, but when I hear the word "average" my alarms always go off.
Instead of a single shared NN, each device should get several slightly different NNs/weight sets and report back which set was the worst/unfit and which the best/fittest.
Each set/model is a hypothesis, and the test in the world is an evolutionary/democratic falsification.
The mutants that satisfy the fewest customers are dropped.
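To make the idea concrete, here is a minimal sketch of that selection loop in Python. Everything in it is an illustrative assumption rather than anything from the quoted article: the fitness metric (a linear model's error on each device's local data standing in for "customer satisfaction"), the mutation scale, and the population size.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(weights, scale=0.01):
    # Hypothetical mutation step: a slightly perturbed copy of a weight vector.
    return weights + rng.normal(0.0, scale, size=weights.shape)

def device_fitness(weights, local_data):
    # Placeholder fitness: negative mean squared error of a linear model on the
    # device's own data. A real device would report accuracy, engagement, or
    # whatever "satisfying the customer" means for the application.
    X, y = local_data
    return -np.mean((X @ weights - y) ** 2)

def selection_round(population, devices, keep=2):
    # Every device scores every candidate; the least fit candidates are dropped
    # and the survivors are mutated to refill the population.
    scores = [np.mean([device_fitness(w, d) for d in devices]) for w in population]
    ranked = [w for _, w in sorted(zip(scores, population), key=lambda p: -p[0])]
    survivors = ranked[:keep]
    mutants = [mutate(w) for w in survivors
               for _ in range(len(population) // keep - 1)]
    return survivors + mutants

# Toy usage: 3 devices with local data, 4 candidate weight sets, 10 rounds.
dim = 5
devices = [(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(3)]
population = [rng.normal(size=dim) for _ in range(4)]
for _ in range(10):
    population = selection_round(population, devices)
```

Note that the devices only ever report a score, never raw data or gradients, which is the appeal of the falsification framing.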
NNs are a big-data approach, tuned by gradient descent, so every update is necessarily small (in the mathematical sense of a first-order approximation). When updates are that small, averaging them is fine, especially given the smooth activation functions (sigmoids and the like) that neural networks use.
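For contrast, here is a minimal sketch of the averaging idea the article describes (FedAvg-style), again with a linear model and synthetic data standing in for a real NN and real user data: each device takes one small gradient step on its own data, and the server simply averages the resulting weights.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    # One small gradient step on a device's own data (linear least squares as a
    # stand-in for a neural network; the point is that the update is a small
    # first-order step).
    grad = 2.0 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def averaging_round(shared_weights, device_data):
    # Each device updates the shared weights locally; the server averages the
    # updated weights, as in the quoted "averaged with other user updates"
    # description (a simplified sketch, not the article's exact protocol).
    updates = [local_update(shared_weights, X, y) for X, y in device_data]
    return np.mean(updates, axis=0)

# Toy usage: 4 devices with synthetic data drawn from one underlying model.
rng = np.random.default_rng(1)
dim = 5
true_w = rng.normal(size=dim)
device_data = []
for _ in range(4):
    X = rng.normal(size=(50, dim))
    device_data.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

shared_w = np.zeros(dim)
for _ in range(100):
    shared_w = averaging_round(shared_w, device_data)
```

Because each local step is tiny, the averaged weights move in roughly the same direction the full-data gradient would, which is why the averaging does no harm here.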
While this averaging approach cannot solve small-data problems, it is perfectly suitable for today's NN applications, where things tend to be well contained, without fat tails; it works fine within the traditional problem domain of neural networks.