
function

Declare function name, inputs, and outputs



Syntax
 • function [y1,...,yN] = myfun(x1,...,xM)

Description

function [y1,...,yN] = myfun(x1,...,xM) declares a function named myfun that accepts
inputs x1,...,xM and returns outputs y1,...,yN. This declaration statement must be the first
executable line of the function.
Save the function code in a text file with a .m extension. The name of the file should match the name of
the first function in the file. Valid function names begin with an alphabetic character, and can contain
letters, numbers, or underscores.
Files can include multiple local functions or nested functions. Use the end keyword to indicate the end of
each function in a file if:
 • Any function in the file contains a nested function
 • Any local function in the file uses the end keyword
Otherwise, the end keyword is optional.
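
For example, a file that contains a nested function must close every function with end. A minimal sketch (the file name addone.m and both function names are illustrative, not from the original text):

function y = addone(x)
% Main function in addone.m; must end with 'end' because the
% file contains a nested function.
y = scaled(x) + 1;
    function z = scaled(t)
    % Nested function: shares the workspace of addone.
    z = 2*t;
    end
end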

Examples

Function with One Output


Define a function in a file named average.m that accepts an input vector, calculates the average of the
values, and returns a single result.
function y = average(x)
if ~isvector(x)
error('Input must be a vector')
end
y = sum(x)/length(x);
end

Call the function from the command line.


z = 1:99;
average(z)
ans =
50

Function with Multiple Outputs


Define a function in a file named stat.m that returns the mean and standard deviation of an input vector.
function [m,s] = stat(x)
n = length(x);
m = sum(x)/n;
s = sqrt(sum((x-m).^2/n));
end

Call the function from the command line.


values = [12.7, 45.4, 98.9, 26.6, 53.1];
[ave,stdev] = stat(values)
ave =
47.3400
stdev =
29.4124

Multiple Functions in a File


Define two functions in a file named stat2.m, where the first function calls the second.
function [m,s] = stat2(x)
n = length(x);
m = avg(x,n);
s = sqrt(sum((x-m).^2/n));
end

function m = avg(x,n)
m = sum(x)/n;
end

Function avg is a local function. Local functions are only available to other functions within the same file.
Call function stat2 from the command line.
values = [12.7, 45.4, 98.9, 26.6, 53.1];
[ave,stdev] = stat2(values)
ave =
47.3400
stdev =
29.4124

User-defined functions


Function m-files
In MATLAB you can also define your own functions. MATLAB assumes by default
that all functions act on arrays, so you must keep in mind the rules for array
operations when writing your own functions. You can then combine your own
functions with MATLAB functions. A function definition has to be saved in a
'function file', which is a file with the extension `.m'. The name of the file has to be
the same as the name of the function.

Consider the function f(x) = x^2 + e^x. In MATLAB we will write a function with the
same name. The definition of the function has to be saved in the file `f.m'. Through the menu
`File > New > M-file' or by typing edit on the command line, the `MATLAB
Editor/Debugger' is opened. If you now type the lines:
function y = f(x)
y = x.^2 + exp(x);

and you save the file under the name `f.m', then the function is available within
MATLAB.

Some comments on the file above are in order.

 • The first line of the file has to contain the word `function'. The variables used are local;
they will not be available in your 'Workspace'.

 • If x is an array, then y becomes an array of function values.


 • The semicolon at the end prevents unnecessary output from appearing on your screen
at every function evaluation.

Always test the functions you have defined yourself!
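
For instance, since f(0) = 0^2 + e^0 = 1, a quick check from the command line could look like this:

>> f(0)
ans =
     1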

You can change the definition of f by typing the command edit f at the command
prompt.

Warning: m-files should not be given the names of existing variables or MATLAB functions.
Conversely, after you have defined a function yourself, you should not give variables the same
name as your function; otherwise the function will not work any more (see Exercise 1.25).
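
As a hedged illustration of the shadowing problem with the function f defined above:

>> f = 5;        % the variable f now shadows the function f
>> f(1.2)        % MATLAB now indexes into the variable instead of calling the function

The second command fails with an indexing error, because f no longer refers to the function. Typing clear f removes the variable and makes the function visible again.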

Anonymous functions
Sometimes it is more convenient to define your function at the command line. MATLAB
functions produced at the command line are called anonymous functions. For example, consider

again the function f(x) = x^2 + e^x. We can create the anonymous function as follows:


>> f = @(x)(x.^2+exp(x))

Here, f is the name of the function, @ creates the function handle, x is the input argument,
and x.^2+exp(x) is the function expression.

Anonymous functions can have multiple input arguments. The general structure of anonymous
functions is name = @(input arguments) (expression).
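
For example, a two-argument anonymous function (the name g is arbitrary) can be defined and evaluated as follows:

>> g = @(x,y)(x.^2 + y.^2);
>> g(3,4)
ans =
    25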

The anonymous function will be evaluated in the same way as a function m-file. Thus the
value of f(1.2) will be obtained by the command

>> f(1.2)
ans =

4.7601


fsolve
Solve system of nonlinear equations

Nonlinear system solver

Solves a problem specified by

F(x) = 0

for x, where F(x) is a function that returns a vector value.

x is a vector or a matrix; see Matrix Arguments.

Syntax
 • x = fsolve(fun,x0)
 • x = fsolve(fun,x0,options)
 • x = fsolve(problem)
 • [x,fval] = fsolve(___)
 • [x,fval,exitflag,output] = fsolve(___)
 • [x,fval,exitflag,output,jacobian] = fsolve(___)

Description

x = fsolve(fun,x0) starts at x0 and tries to solve the equations fun(x) = 0, an array of zeros.

x = fsolve(fun,x0,options) solves the equations with the optimization options specified in options.
Use optimoptions to set these options.

x = fsolve(problem) solves problem, where problem is a structure described in Input Arguments.

Create the problem structure by exporting a problem from the Optimization app, as described in
Exporting Your Work.

[x,fval] = fsolve(___), for any syntax, returns the value of the objective function fun at the solution x.

[x,fval,exitflag,output] = fsolve(___) additionally returns a value exitflag that describes the
exit condition of fsolve, and a structure output with information about the optimization process.

[x,fval,exitflag,output,jacobian] = fsolve(___) returns the Jacobian of fun at the solution x.
[x,fval,exitflag,output,jacobian] = fsolve(___) returns the Jacobian of fun at the solution x.

Examples
Solution of 2-D Nonlinear System

This example shows how to solve two nonlinear equations in two variables. The equations are

exp(-exp(-(x1 + x2))) = x2*(1 + x1^2)
x1*cos(x2) + x2*sin(x1) = 1/2.

Convert the equations to the form F(x) = 0.

Write a function that computes the left-hand side of these two equations.
function F = root2d(x)
F(1) = exp(-exp(-(x(1)+x(2)))) - x(2)*(1+x(1)^2);
F(2) = x(1)*cos(x(2)) + x(2)*sin(x(1)) - 0.5;

Save this code as a file named root2d.m on your MATLAB® path.


Solve the system of equations starting at the point [0,0].
fun = @root2d;
x0 = [0,0];
x = fsolve(fun,x0)
Equation solved.

fsolve completed because the vector of function values is near zero
as measured by the default value of the function tolerance, and
the problem appears regular as measured by the gradient.
x =

0.3532 0.6061

Solution with Nondefault Options



Examine the solution process for a nonlinear system.

Set options to have no display and a plot function that displays the first-order optimality, which should
converge to 0 as the algorithm iterates.
options = optimoptions('fsolve','Display','none','PlotFcn',@optimplotfirstorderopt);

The equations in the nonlinear system are

exp(-exp(-(x1 + x2))) = x2*(1 + x1^2)
x1*cos(x2) + x2*sin(x1) = 1/2.

Convert the equations to the form F(x) = 0.

Write a function that computes the left-hand side of these two equations.
function F = root2d(x)
F(1) = exp(-exp(-(x(1)+x(2)))) - x(2)*(1+x(1)^2);
F(2) = x(1)*cos(x(2)) + x(2)*sin(x(1)) - 0.5;

Save this code as a file named root2d.m on your MATLAB® path.


Solve the nonlinear system starting from the point [0,0] and observe the solution process.
fun = @root2d;
x0 = [0,0];
x = fsolve(fun,x0,options)
x =

0.3532 0.6061
Solve a Problem Structure

Create a problem structure for fsolve and solve the problem.


Solve the same problem as in Solution with Nondefault Options, but formulate the problem using a
problem structure.
Set options for the problem to have no display and a plot function that displays the first-order optimality,
which should converge to 0 as the algorithm iterates.
problem.options = optimoptions('fsolve','Display','none','PlotFcn',@optimplotfirstorderopt);

The equations in the nonlinear system are

exp(-exp(-(x1 + x2))) = x2*(1 + x1^2)
x1*cos(x2) + x2*sin(x1) = 1/2.

Convert the equations to the form F(x) = 0.

Write a function that computes the left-hand side of these two equations.
function F = root2d(x)
F(1) = exp(-exp(-(x(1)+x(2)))) - x(2)*(1+x(1)^2);
F(2) = x(1)*cos(x(2)) + x(2)*sin(x(1)) - 0.5;
Save this code as a file named root2d.m on your MATLAB® path.
Create the remaining fields in the problem structure.
problem.objective = @root2d;
problem.x0 = [0,0];
problem.solver = 'fsolve';

Solve the problem.


x = fsolve(problem)
x =

0.3532 0.6061

Solution Process of Nonlinear System


This example returns the iterative display showing the solution process for the system of two equations
and two unknowns
2*x1 - x2 = exp(-x1)
-x1 + 2*x2 = exp(-x2).

Rewrite the equations in the form F(x) = 0:


2*x1 - x2 - exp(-x1) = 0
-x1 + 2*x2 - exp(-x2) = 0.

Start your search for a solution at x0 = [-5 -5].


First, write a file that computes F, the values of the equations at x.
function F = myfun(x)
F = [2*x(1) - x(2) - exp(-x(1));
-x(1) + 2*x(2) - exp(-x(2))];

Save this function file as myfun.m on your MATLAB® path.


Set up the initial point. Set options to return iterative display.
x0 = [-5;-5];
options = optimoptions('fsolve','Display','iter');

Call fsolve.
[x,fval] = fsolve(@myfun,x0,options)
                                          Norm of      First-order   Trust-region
 Iteration  Func-count      f(x)           step         optimality      radius
0 3 23535.6 2.29e+004 1
1 6 6001.72 1 5.75e+003 1
2 9 1573.51 1 1.47e+003 1
3 12 427.226 1 388 1
4 15 119.763 1 107 1
5 18 33.5206 1 30.8 1
6 21 8.35208 1 9.05 1
7 24 1.21394 1 2.26 1
8 27 0.016329 0.759511 0.206 2.5
9 30 3.51575e-006 0.111927 0.00294 2.5
10 33 1.64763e-013 0.00169132 6.36e-007 2.5

Equation solved.

fsolve completed because the vector of function values is near zero
as measured by the default value of the function tolerance, and
the problem appears regular as measured by the gradient.

x =
0.5671
0.5671

fval =
1.0e-006 *
-0.4059
-0.4059
Examine Matrix Equation Solution
Find a matrix X that satisfies

X*X*X = [1,2;3,4],

starting at the point x0 = [1,1;1,1]. Examine the fsolve outputs to see the solution quality and process.
Create an anonymous function that calculates the matrix equation.
fun = @(x)x*x*x - [1,2;3,4];

Set options to turn off the display. Set the initial point x0.
options = optimoptions('fsolve','Display','off');
x0 = ones(2);
Call fsolve and obtain information about the solution process.
[x,fval,exitflag,output] = fsolve(fun,x0,options)
x =

-0.1291 0.8602
1.2903 1.1612

fval =

1.0e-09 *

-0.1621 0.0780
0.1167 -0.0465

exitflag =

     1

output =

iterations: 6
funcCount: 35
algorithm: 'trust-region-dogleg'
firstorderopt: 2.4488e-10
message: 'Equation solved.

fsolve completed because the vector of function...'

The exit flag value 1 indicates that the solution is reliable. To verify this manually, calculate the residual
(sum of squares of fval) to see how close it is to zero.
sum(sum(fval.*fval))
ans =
4.8133e-20

This small residual confirms that x is a solution.


fsolve performed 35 function evaluations to find the solution, as you can see in the output structure.
output.funcCount
ans =

35
Related Examples
 • Nonlinear Equations with Analytic Jacobian
 • Nonlinear Equations with Finite-Difference Jacobian
 • Nonlinear Equations with Jacobian
 • Nonlinear Equations with Jacobian Sparsity Pattern
 • Nonlinear Systems with Constraints
Input Arguments
fun — Nonlinear equations to solve
function handle | function name
Nonlinear equations to solve, specified as a function handle or function name. fun is a function that
accepts a vector x and returns a vector F, the nonlinear equations evaluated at x. The equations to solve
are F = 0 for all components of F. The function fun can be specified as a function handle for a file
x = fsolve(@myfun,x0)

where myfun is a MATLAB function such as


function F = myfun(x)
F = ... % Compute function values at x

fun can also be a function handle for an anonymous function.


x = fsolve(@(x)sin(x.*x),x0);

If the user-defined values for x and F are matrices, they are converted to a vector using linear indexing.
If the Jacobian can also be computed and the 'SpecifyObjectiveGradient' option is true, set by
options = optimoptions('fsolve','SpecifyObjectiveGradient',true)

the function fun must return, in a second output argument, the Jacobian value J, a matrix, at x.
If fun returns a vector (matrix) of m components and x has length n, where n is the length of x0, the
Jacobian J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (The
Jacobian J is the transpose of the gradient of F.)
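
As a hedged sketch, reusing the myfun system from the example above with its Jacobian worked out by hand, a function that supplies the Jacobian as a second output could look like:

function [F,J] = myfun(x)
F = [2*x(1) - x(2) - exp(-x(1));
     -x(1) + 2*x(2) - exp(-x(2))];
if nargout > 1                       % Jacobian requested as second output
    J = [2 + exp(-x(1)),  -1;        % J(i,j) = dF(i)/dx(j)
         -1,  2 + exp(-x(2))];
end

Enable it with options = optimoptions('fsolve','SpecifyObjectiveGradient',true) and pass options to fsolve.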
Example: fun = @(x)x*x*x-[1,2;3,4]
Data Types: char | function_handle
x0 — Initial point
real vector | real array
Initial point, specified as a real vector or real array. fsolve uses the number of elements in and size
of x0 to determine the number and size of variables that fun accepts.
Example: x0 = [1,2,3,4]
Data Types: double
options — Optimization options
output of optimoptions | structure as optimset returns
Optimization options, specified as the output of optimoptions or a structure such as optimset returns.
Some options apply to all algorithms, and others are relevant for particular algorithms. See Optimization
Options Reference for detailed information.
Some options are absent from the optimoptions display. These options are listed in italics. For details,
see View Options.

All Algorithms
Algorithm Choose between 'trust-region-dogleg' (default), 'trust-region-reflective', and 'levenberg-marquardt'. Set the Levenberg-
Marquardt parameter λ by setting Algorithm to a cell array such as {'levenberg-marquardt',.005}.

The Algorithm option specifies a preference for which algorithm to use. It is only a preference, because for the trust-region-reflective algorithm, the nonlinear system
of equations cannot be underdetermined; that is, the number of equations (the number of elements of F returned by fun) must be at least as many as the length of x.
Similarly, for the trust-region-dogleg algorithm, the number of equations must be the same as the length of x. fsolve uses the Levenberg-Marquardt algorithm when the
selected algorithm is unavailable. For more information on choosing the algorithm, see Choosing the Algorithm.
CheckGradients Compare user-supplied derivatives (gradients of objective or constraints) to finite-differencing derivatives. The choices are true or the default false.
Diagnostics Display diagnostic information about the function to be minimized or solved. The choices are 'on' or the default 'off'.
DiffMaxChange Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.
DiffMinChange Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.
Display Level of display (see Iterative Display):
 'off' or 'none' displays no output.
 'iter' displays output at each iteration, and gives the default exit message.
 'iter-detailed' displays output at each iteration, and gives the technical exit message.
 'final' (default) displays just the final output, and gives the default exit message.
 'final-detailed' displays just the final output, and gives the technical exit message.
FiniteDifferenceStepSize Scalar or vector step size factor for finite differences. When you set FiniteDifferenceStepSize to a vector v, forward finite differences delta are
delta = v.*sign′(x).*max(abs(x),TypicalX);
where sign′(x) = sign(x) except sign′(0) = 1. Central finite differences are

delta = v.*max(abs(x),TypicalX);
Scalar FiniteDifferenceStepSize expands to a vector. The default is sqrt(eps) for forward finite
differences, and eps^(1/3) for central finite differences.
FiniteDifferenceType Finite differences, used to estimate gradients, are either 'forward' (default), or 'central' (centered). 'central' takes twice as many function evaluations, but
should be more accurate.
The algorithm is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to
avoid evaluating at a point outside bounds.
FunctionTolerance Termination tolerance on the function value, a positive scalar. The default is 1e-6. See Tolerances and Stopping Criteria.
FunValCheck Check whether objective function values are valid. 'on' displays an error when the objective function returns a value that is complex, Inf, or NaN. The
default, 'off', displays no error.
MaxFunctionEvaluations Maximum number of function evaluations allowed, a positive integer. The default is 100*numberOfVariables. See Tolerances and Stopping Criteria
and Iterations and Function Counts.
MaxIterations Maximum number of iterations allowed, a positive integer. The default is 400. See Tolerances and Stopping Criteria.
OptimalityTolerance Termination tolerance on the first-order optimality, a positive scalar. The default is 1e-6. See First-Order Optimality Measure.
OutputFcn Specify one or more user-defined functions that an optimization function calls at each iteration, either as a function handle or as a cell array of function handles. The
default is none ([]). See Output Function.
PlotFcn Plots various measures of progress while the algorithm executes. Select from predefined plots or write your own. Pass a function handle or a cell array of function
handles. The default is none ([]):
 • @optimplotx plots the current point.
 • @optimplotfunccount plots the function count.
 • @optimplotfval plots the function value.
 • @optimplotstepsize plots the step size.
 • @optimplotfirstorderopt plots the first-order optimality measure.
For information on writing a custom plot function, see Plot Functions.
SpecifyObjectiveGradient If true, fsolve uses a user-defined Jacobian (defined in fun), or Jacobian information (when using JacobianMultiplyFcn), for the objective function.
If false (default), fsolve approximates the Jacobian using finite differences.
StepTolerance Termination tolerance on x, a positive scalar. The default is 1e-6. See Tolerances and Stopping Criteria.
TypicalX Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default
is ones(numberofvariables,1). fsolve uses TypicalX for scaling finite differences for gradient estimation.

The trust-region-dogleg algorithm uses TypicalX as the diagonal terms of a scaling matrix.
UseParallel When true, fsolve estimates gradients in parallel. Disable by setting to the default, false. See Parallel Computing.
Trust-Region-Reflective Algorithm
JacobianMultiplyFcn Function handle for Jacobian multiply function. For large-scale structured problems, this function computes the Jacobian matrix product J*Y, J'*Y,
or J'*(J*Y) without actually forming J. The function is of the form
W = jmfun(Jinfo,Y,flag)
where Jinfo contains a matrix used to compute J*Y (or J'*Y, or J'*(J*Y)). The first argument Jinfo must be the same as the second argument returned by the
objective function fun, for example, in
[F,Jinfo] = fun(x)
Y is a matrix that has the same number of rows as there are dimensions in the problem. flag determines which product to compute:
 • If flag == 0, W = J'*(J*Y).
 • If flag > 0, W = J*Y.
 • If flag < 0, W = J'*Y.
In each case, J is not formed explicitly. fsolve uses Jinfo to compute the preconditioner. See Passing Extra Parameters for information on how to supply values for
any additional parameters jmfun needs.
Note 'SpecifyObjectiveGradient' must be set to true for fsolve to pass Jinfo from fun to jmfun.
See Minimization with Dense Structured Hessian, Linear Equalities for a similar example.
JacobPattern Sparsity pattern of the Jacobian for finite differencing. Set JacobPattern(i,j) = 1 when fun(i) depends on x(j). Otherwise, set JacobPattern(i,j) =
0. In other words, JacobPattern(i,j) = 1 when you can have ∂fun(i)/∂x(j) ≠ 0.
Use JacobPattern when it is inconvenient to compute the Jacobian matrix J in fun, though you can determine (say, by inspection) when fun(i) depends
on x(j). fsolve can approximate J via sparse finite differences when you give JacobPattern.

In the worst case, if the structure is unknown, do not set JacobPattern. The default behavior is as if JacobPattern is a dense matrix of ones.
Then fsolve computes a full finite-difference approximation in each iteration. This can be very expensive for large problems, so it is usually better to determine the
sparsity structure.
MaxPCGIter Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)). For more
information, see Equation Solving Algorithms.
PrecondBandWidth Upper bandwidth of preconditioner for PCG, a nonnegative integer. The default PrecondBandWidth is Inf, which means a direct factorization (Cholesky) is used
rather than the conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step towards the solution.
Set PrecondBandWidth to 0 for diagonal preconditioning (upper bandwidth of 0). For some problems, an intermediate bandwidth reduces the number of PCG
iterations.
SubproblemAlgorithm Determines how the iteration step is calculated. The default, 'factorization', takes a slower but more accurate step than 'cg'. See Trust-Region-Reflective
Algorithm.
TolPCG Termination tolerance on the PCG iteration, a positive scalar. The default is 0.1.
Levenberg-Marquardt Algorithm
InitDamping Initial value of the Levenberg-Marquardt parameter, a positive scalar. Default is 1e-2. For details, see Levenberg-Marquardt Method.
ScaleProblem 'jacobian' can sometimes improve the convergence of a poorly scaled problem. The default is 'none'.
Example: options = optimoptions('fsolve','FiniteDifferenceType','central')
problem — Problem structure
structure
Problem structure, specified as a structure with the following fields:

Field Name Entry


objective Objective function
x0 Initial point for x
solver 'fsolve'
options Options created with optimoptions
The simplest way of obtaining a problem structure is to export the problem from the Optimization app.
Data Types: struct

Output Arguments
x — Solution
real vector | real array
Solution, returned as a real vector or real array. The size of x is the same as the size of x0. Typically, x is
a local solution to the problem when exitflag is positive. For information on the quality of the solution,
see When the Solver Succeeds.
fval — Objective function value at the solution
real vector
Objective function value at the solution, returned as a real vector. Generally, fval = fun(x).
exitflag — Reason fsolve stopped
integer
Reason fsolve stopped, returned as an integer.

1 Equation solved. First-order optimality is small.


2 Equation solved. Change in x smaller than the specified tolerance.
3 Equation solved. Change in residual smaller than the specified tolerance.
4 Equation solved. Magnitude of search direction smaller than specified tolerance.
0 Number of iterations exceeded options.MaxIterations or number of function evaluations exceeded options.MaxFunctionEvaluations.
-1 Output function or plot function stopped the algorithm.
-2 Equation not solved. The exit message can have more information.
-3 Equation not solved. Trust region radius became too small (trust-region-dogleg algorithm).
output — Information about the optimization process
structure
Information about the optimization process, returned as a structure with fields:

iterations Number of iterations taken


funcCount Number of function evaluations
algorithm Optimization algorithm used
cgiterations Total number of PCG iterations ('trust-region-reflective' algorithm only)
stepsize Final displacement in x (not in 'trust-region-dogleg')
firstorderopt Measure of first-order optimality
message Exit message
jacobian — Jacobian at the solution
real matrix
Jacobian at the solution, returned as a real matrix. jacobian(i,j) is the partial derivative of fun(i) with
respect to x(j) at the solution x.

Limitations
 • The function to be solved must be continuous.
 • When successful, fsolve only gives one root.
 • The default trust-region-dogleg method can only be used when the system of equations is square, i.e., the
number of equations equals the number of unknowns. For the Levenberg-Marquardt method, the system
of equations need not be square.
 • The preconditioner computation used in the preconditioned conjugate gradient part of the trust-region-
reflective algorithm forms JᵀJ (where J is the Jacobian matrix) before computing the preconditioner;
therefore, a row of J with many nonzeros, which results in a nearly dense product JᵀJ, might lead to a
costly solution process for large problems.
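
As a hedged sketch of the nonsquare case (the equations are invented for illustration), an overdetermined but consistent system of three equations in two unknowns can be solved with the Levenberg-Marquardt algorithm:

% Three equations, two unknowns; all three hold at x = [1/sqrt(2); 1/sqrt(2)]
fun = @(x)[x(1)^2 + x(2)^2 - 1;
           x(1) - x(2);
           x(1)*x(2) - 0.5];
options = optimoptions('fsolve','Algorithm','levenberg-marquardt','Display','off');
x = fsolve(fun,[0.5;0.5],options)   % expect approximately [0.7071; 0.7071]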

More About
Algorithms
The Levenberg-Marquardt and trust-region-reflective methods are based on the nonlinear least-squares
algorithms also used in lsqnonlin. Use one of these methods if the system may not have a zero. The
algorithm still returns a point where the residual is small. However, if the Jacobian of the system is
singular, the algorithm might converge to a point that is not a solution of the system of equations
(see Limitations).
 • By default fsolve chooses the trust-region-dogleg algorithm. The algorithm is a variant of the Powell
dogleg method described in [8]. It is similar in nature to the algorithm implemented in [7]. See Trust-
Region Dogleg Method.
 • The trust-region-reflective algorithm is a subspace trust-region method and is based on the interior-
reflective Newton method described in [1] and [2]. Each iteration involves the approximate solution of a
large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region
Reflective fsolve Algorithm.
 • The Levenberg-Marquardt method is described in references [4], [5], and [6]. See Levenberg-Marquardt
Method.
 • Optimization Problem Setup
 • Equation Solving Algorithms

References
[1] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to
Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.

[2] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale
Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189-224,
1994.

[3] Dennis, J. E. Jr., "Nonlinear Least-Squares," State of the Art in Numerical Analysis, ed. D. Jacobs,
Academic Press, pp. 269-312.

[4] Levenberg, K., "A Method for the Solution of Certain Problems in Least-Squares," Quarterly Applied
Mathematics 2, pp. 164-168, 1944.

[5] Marquardt, D., "An Algorithm for Least-squares Estimation of Nonlinear Parameters," SIAM Journal
Applied Mathematics, Vol. 11, pp. 431-441, 1963.

[6] Moré, J. J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," Numerical Analysis,
ed. G. A. Watson, Lecture Notes in Mathematics 630, Springer Verlag, pp. 105-116, 1977.

[7] Moré, J. J., B. S. Garbow, and K. E. Hillstrom, User Guide for MINPACK 1, Argonne National
Laboratory, Rept. ANL-80-74, 1980.

[8] Powell, M. J. D., "A Fortran Subroutine for Solving Systems of Nonlinear Algebraic
Equations," Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz, ed., Ch.7, 1970.
