No, when I say single update, I just mean that the final model can in principle be reached by a single update with the initial gradient. I’m aware that in practice you need more steps to compute the correct delta.
My argument is solely about the initial gradient. It does not point to the minimum SGD would reach, because the initial gradient weights common problems more heavily, whereas the SGD minimum (ideally) solves even rare problems. SGD manages this because common problems stop influencing later gradients once they are already solved.
The final model cannot be reached by a single update with the initial gradient, because for a system of linear equations (i.e. when the objective is the squared error of the system, or anything along those lines), the gradient does not point straight toward the solution. It's not just a question of computing the correct delta; the initial gradient doesn't even point in the right direction.
Ok, I thought your F(x) was one update step of the gradient of f times Δθ away from f. I guess then I just don't understand the equation.
Ah, I guess I understand now. I was always thinking about updating the parameters, but you are talking about adding to the function output.
Ok, I should walk through this, since multiple people are confused about it.
The training process effectively solves the set of (linear) equations
$$y^{(n)} = f(x^{(n)}, \theta_0) + \Delta\theta \cdot \frac{df}{d\theta}(x^{(n)}, \theta_0)$$
There’s one equation for each data point, and one unknown for each dimension of θ.
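To make this concrete, here's a minimal NumPy sketch of the linearized system in matrix form, J·Δθ = r, with one row per data point and one column per parameter. The toy model f, the data points, and θ0 below are all invented purely for illustration:

```python
import numpy as np

# Hypothetical toy model: f(x, theta) = theta[0]*x + theta[1]*x**2
def f(x, theta):
    return theta[0] * x + theta[1] * x**2

def df_dtheta(x, theta):
    # Derivative of f with respect to theta, at a single data point x.
    return np.array([x, x**2])

x_data = np.array([0.5, 1.0, 2.0])   # one equation per data point
y_data = np.array([1.2, 2.0, 5.5])
theta0 = np.array([1.0, 0.0])        # one unknown per dimension of theta

# Stack the linearized equations as J @ delta_theta = r:
J = np.stack([df_dtheta(x, theta0) for x in x_data])  # (n_points, n_params)
r = y_data - f(x_data, theta0)                        # residuals at theta0
```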
Key point: solving sets of equations is not what gradient descent does. Gradient descent minimizes a scalar function. In order for gradient descent to solve the problem, we need to transform “solve the set of equations” into “minimize a scalar function”. Typically, we do that by choosing an objective like “minimize the sum of squared errors” or something along those lines—e.g. in this case we’d probably use:
$$\text{obj}(\Delta\theta) = \sum_n \Big( y^{(n)} - \big( f(x^{(n)}, \theta_0) + \Delta\theta \cdot \frac{df}{d\theta}(x^{(n)}, \theta_0) \big) \Big)^2$$
This is a quadratic function of Δθ (i.e. it's second-order in Δθ). If we compute the gradient of this function, we'll find that it does not point toward the minimum of the function: the gradient only uses first-order (i.e. linear) information, while the location of the minimum also depends on the second-order (curvature) information. The gradient does not point toward the solution of the original system of equations. Thus: gradient descent will not solve the set of equations in one step.
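A quick numerical check of this claim, reusing the hypothetical J and r from the sketch above: the descent direction and the direction to the actual minimizer generally disagree, unless JᵀJ happens to be a multiple of the identity.

```python
import numpy as np

# Same hypothetical J (stacked df/dtheta) and residuals r as in the sketch above.
J = np.array([[0.5, 0.25], [1.0, 1.0], [2.0, 4.0]])
r = np.array([0.7, 1.0, 3.5])

# Gradient of obj(delta) = ||r - J @ delta||^2, evaluated at delta = 0:
grad = -2 * J.T @ r

# The actual minimizer: the least-squares solution of J @ delta = r.
delta_star, *_ = np.linalg.lstsq(J, r, rcond=None)

# Gradient descent moves along -grad, i.e. along J.T @ r, which is generally
# NOT parallel to delta_star = (J.T J)^{-1} J.T r. The cosine below is < 1
# whenever J.T @ J is not a multiple of the identity.
cos = (-grad @ delta_star) / (np.linalg.norm(grad) * np.linalg.norm(delta_star))
print(cos)  # ~0.94 here: no step size along the gradient reaches the minimum
```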
In order to solve the set of equations in one step, we could use a second-order minimization method like Newton’s (gradient descent is first-order). Key concept: minimizing an objective function means solving the set of equations ∇obj=0; minimization is equivalent to system-solving with one more derivative. So, a first-order method for solving a system is equivalent to a second-order method for minimizing an objective (both are Newton’s method).
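And a sketch of the one-step solve, again using the invented J and r from above: a single Newton step on the quadratic objective lands exactly on the least-squares solution of the original system.

```python
import numpy as np

# Same hypothetical J and r again.
J = np.array([[0.5, 0.25], [1.0, 1.0], [2.0, 4.0]])
r = np.array([0.7, 1.0, 3.5])

# Newton step on the quadratic objective: delta = -H^{-1} @ grad(0), with
# Hessian H = 2 * J.T @ J and gradient grad(0) = -2 * J.T @ r.
H = 2 * J.T @ J
newton_step = np.linalg.solve(H, 2 * J.T @ r)

# For a quadratic objective, one Newton step lands exactly on the minimizer,
# which is also the least-squares solution of the system J @ delta = r.
delta_star, *_ = np.linalg.lstsq(J, r, rcond=None)
assert np.allclose(newton_step, delta_star)
```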
Does this make more sense?
Yes, definitely, thank you!
Though I was originally confused on a much more basic level, due to superficial reading, jumping to conclusions, and not having touched much calculus notation in the last 15 years.