
Solving a system of nonlinear algebraic equations numerically

3 views (last 30 days)
I was wondering if there is a function for solving a system of nonlinear algebraic equations numerically. My equations are built inside a loop, and there are 136 equations in total. All of them contain sin and cos functions. The initial conditions are listed in another file. Thanks for any help.

Answers (2)

John D'Errico on 19 Feb 2017
Use fsolve from the optimization toolbox.
Expect a huge number of possible solutions. The result will then of course depend on your initial guess, since any solution is still a solution.
vpasolve is also possible, from the symbolic toolbox, but it will be much slower in general.
Using any solver, there is NO assurance that it will indeed converge to a solution. Again, your starting guess may be important.
If you have more equations than unknowns, then expect NO solution to exist in general. If you have fewer equations than unknowns, then there may be infinitely many solutions, all of which lie on a manifold in some high dimensional space. Again, any solution will be totally dependent on the starting guess.
If you have exactly as many equations as unknowns, then since trig functions are involved, it will often be the case that infinitely many distinct solutions exist.
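A minimal calling sketch for fsolve, using a placeholder system built in a loop (the equation bodies and the function names here are invented; substitute your own 136 equations and your own initial conditions from your other file):

function demo_fsolve
    n  = 136;                     % number of equations = number of unknowns (assumed square system)
    x0 = zeros(n, 1);             % initial guess -- the solution found depends strongly on this
    opts = optimoptions('fsolve', 'Display', 'iter');
    [x, fval, exitflag] = fsolve(@myEquations, x0, opts);
    % exitflag <= 0 means fsolve did not converge to a solution
end

function F = myEquations(x)
    % Placeholder equations with sin and cos terms, built in a loop.
    n = numel(x);
    F = zeros(n, 1);
    for k = 1:n-1
        F(k) = sin(x(k)) + cos(x(k+1)) - 0.5;
    end
    F(n) = sum(x) - 1;            % last equation closes the system
end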

Walter Roberson on 19 Feb 2017
I would add to what John said by noting that it is not uncommon for there to be no easily determinable exact solution when there are so many variables; solvers might well give up without finding a solution.
Part of the problem can be round-off error: due to floating point limitations, something that is numerically a solution for one of the equations might not be numerically a solution for another equation, even though theoretically there might be a solution nearby (you just might not be able to tell for sure without calculating to a few thousand digits).
Because of this, it can be valuable to rephrase the solution of multiple equations as the minimization of a least-squares system. You can attack that fairly directly with lsqnonlin, which takes a vector of functions; a sketch follows below. That can work very well for some systems, but in higher dimensions it is not uncommon for it to settle on something far from the true minimum, and then you are left with few options.
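A minimal sketch of the least-squares reformulation with lsqnonlin, reusing the placeholder myEquations from the fsolve sketch above (the targets y here are invented):

n   = 136;
x0  = zeros(n, 1);
y   = ones(n, 1);                          % assumed right-hand-side targets
res = @(x) myEquations(x) - y;             % residual vector: lsqnonlin minimizes sum(res(x).^2)
opts = optimoptions('lsqnonlin', 'Display', 'iter');
[x, resnorm] = lsqnonlin(res, x0, [], [], opts);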
So you can transform the system of functions into a single objective function to be minimized, after which you can experiment with several minimizers, including fmincon, patternsearch, ga, particleswarm, or simulated annealing, perhaps in conjunction with tools from the Global Optimization Toolbox such as MultiStart.
The transform you would use for the system
f1(x) = y1, f2(x) = y2, f3(x) = y3
would be
obj = @(x) (f1(x)-y1).^2 + (f2(x)-y2).^2 + (f3(x)-y3).^2
and you would minimize that -- so you would be minimizing the violations of equality. You can apply weights, if some of the functions are more important than others.
A system such as this will have a lot of local minima.
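A minimal sketch of that transform together with MultiStart to cope with the many local minima (everything here is a toy stand-in for the real 136-equation system):

f1 = @(x) sin(x(1)) + cos(x(2));   y1 = 0.3;     % placeholder equations and targets
f2 = @(x) cos(x(1)) .* sin(x(2));  y2 = 0.1;
f3 = @(x) sin(x(1) + x(2));        y3 = 0.7;

obj = @(x) (f1(x)-y1).^2 + (f2(x)-y2).^2 + (f3(x)-y3).^2;   % sum of squared violations

problem = createOptimProblem('fmincon', 'objective', obj, ...
    'x0', zeros(2,1), 'options', optimoptions('fmincon', 'Display', 'off'));
ms = MultiStart;
[xbest, fbest] = run(ms, problem, 50);     % try 50 random start points to escape local minima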
If you have built your terms symbolically, then you can expand() the objective and start examining how the terms are used together. Sometimes you will be able to determine that a variable appears independently of the others: the mixed partial derivatives of the objective with respect to that variable and each of the other variables are identically 0. In that case, you can set all of the other variables to arbitrary constants and do a single-variable optimization (possibly even by calculus); having found that optimum, substitute it back into the overall expression and reduce the number of free variables by 1. A sketch of that check appears below.
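A minimal symbolic sketch of that separability check on a toy objective (the expressions are invented; the test is whether every mixed partial involving the candidate variable is identically zero):

syms x1 x2 x3
obj = expand((sin(x1) - 0.5)^2 + (cos(x2) + sin(x3))^2);   % toy objective; x1 appears on its own

others  = [x2, x3];
g1      = diff(obj, x1);                   % partial derivative with respect to x1
coupled = false;
for v = others
    if ~isAlways(diff(g1, v) == 0)         % mixed partial not identically zero?
        coupled = true;
    end
end
if ~coupled
    disp('x1 decouples: optimize it alone, then substitute the result back.')
end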
