Calculates the minimum of a real function with several variables using the simplex method of Nelder and Mead.
void nelder( double (*f)(double *) [function pointer],
             int dim,
             double **p,
             double eps = 1E-10,
             int maxit = 1000 )
This method minimizes a function of several variables using the downhill simplex method of Nelder and Mead. Required as input is a matrix p whose dim + 1 rows are dim-dimensional vectors forming the vertices of the starting simplex. The algorithm executes until either the desired accuracy eps is achieved or the maximum number of iterations maxit is exceeded. On return, p holds the vertices of the final simplex, updated by the algorithm to enclose the minimizer.

Consider the unconstrained optimization problem of maximizing a nonlinear function \inline f(x) for \inline x \in \mathcal{R}^n. A well-known class of methods for solving this problem is direct search, which relies on no derivative information (either explicit or implicit) and employs only function evaluations. One of the most widely used direct search methods for nonlinear unconstrained optimization is the Nelder-Mead simplex algorithm, which is the method implemented here. (This simplex algorithm should not be confused with Dantzig's simplex algorithm for linear programming.) Nelder-Mead is parsimonious in the number of function evaluations per iteration and is often able to find reasonably good solutions quickly. On the other hand, its theoretical underpinnings, such as its convergence properties, are less than satisfactory. We now focus on the implementation of the Nelder-Mead algorithm.

This method maintains at each iteration a nondegenerate simplex, a geometric figure in n dimensions of nonzero volume that is the convex hull of \inline n + 1 vertices, \inline  x_0, x_1, \ldots, x_n, and their respective function values. In each iteration, new points are computed, along with their function values, to form a new simplex. The algorithm terminates when the function values at the vertices of the simplex satisfy a predetermined condition.

One iteration of the algorithm consists of the following steps
  1. Order. Order and re-label the \inline n+1 vertices as \inline x_0, x_1, \ldots, x_n, such that \inline  f(x_0) \geq f(x_1) \geq \cdots \geq f(x_n). Since we want to maximise, we refer to \inline x_0 as the best vertex or point, to \inline x_n as the worst point, and to \inline  x_{n - 1} as the next-worst point. Let \inline \overline{x} denote the centroid of the n best points of the simplex (i.e., all vertices except \inline x_n): \inline \overline{x} = \frac{1}{n} \sum_{i = 0}^{n - 1} x_i.
  2. Reflect. Compute the reflection point \inline x_r, such that \inline x_r = \overline{x} +\alpha (\overline{x} - x_n), and evaluate \inline f(x_r). If \inline f(x_0) \geq f(x_r) \geq f(x_{n - 1}), accept the reflected point \inline x_r and terminate the iteration.
  3. Expand. If \inline f(x_r) > f(x_0), compute the expansion point \inline x_e, such that \inline x_e = x_r + \beta (x_r - \overline{x}). If \inline f(x_e) > f(x_r) accept \inline x_e and terminate the iteration; otherwise accept \inline x_r and terminate the iteration.
  4. Contract. If \inline f(x_r) < f(x_{n - 1}), perform a contraction between \inline \overline{x} and \inline x_n, such that \inline x_c = \overline{x} - \zeta (\overline{x} - x_n). If \inline f(x_c) \geq f(x_n), accept \inline x_c and terminate the iteration; otherwise go to step 5.
  5. Shrink Simplex. Replace every vertex except the best \inline x_0 with \inline x_i = x_0 + \delta (x_i - x_0), for \inline i = 1, \ldots, n, and evaluate f at the n new vertices.

For the four coefficients, the standard values reported in the literature are \inline \alpha = 1, \inline \beta = 2, \inline \zeta = \frac{1}{2} and \inline \delta = \frac{1}{2}.


The approximated multidimensional point corresponding to the minimum of the function.



Example 1

#include <codecogs/maths/optimization/nelder.h>
#include <iostream>
#include <iomanip>
#include <math.h>

// the number of dimensions
#define N 2
#define ABS(x) ((x) < 0 ? -(x) : (x))

// user-defined function
double f(double *x) {
    double r = sqrt(x[0] * x[0] + x[1] * x[1]);
    return ABS(r) < 1E-12 ? 1 : sin(r) / r;
}

int main()
{
    // allocate the simplex array on the heap
    double **P = new double* [N + 1];
    for (int i = 0; i <= N; i++)
        P[i] = new double [N];
    // initialize vertices
    P[0][0] =  1; P[0][1] =  2;
    P[1][0] = -2; P[1][1] = -3;
    P[2][0] =  4; P[2][1] =  2;
    // call minimization routine
    Maths::Optimization::nelder(f, N, P, 1E-8, 500);
    // display results
    std::cout << "Minimization points: " << std::endl;
    for (int i = 0; i <= N; i++) {
        for (int j = 0; j < N; j++)
            std::cout << "  " << std::setprecision(10) << P[i][j];
        std::cout << std::endl;
    }
    std::cout << std::endl;
    std::cout << "Best minimum values:" << std::endl;
    for (int i = 0; i <= N; i++)
        std::cout << " " << std::setprecision(10) << f(P[i]) << std::endl;
    // free array from the heap
    for (int i = 0; i <= N; i++)
        delete [] P[i];
    delete [] P;
    return 0;
}
Minimization points: 
  4.122685894  1.786915332
  4.16647736  1.682698175
  4.142454409  1.741175873
Best minimum values:


f      the user-defined function with several variables
dim    the dimension of the array
p      the starting simplex array
eps    the desired accuracy; default value = 1E-10
maxit  the maximum number of iterations; default value = 1000


Lucian Bentea (August 2005)
Source Code

Source code is available when you buy a Commercial licence.
