gradient

Calculates the minimum of a multidimensional real function using the steepest descent method.

Interface

C++

Gradient

 
std::vector<double> gradient(
    double (*f)(const std::vector<double> &),
    std::vector<double> x,
    double xk = 1,
    double eps = 1E-10,
    int maxit = 1000 )
This routine finds a local extremum of a multidimensional user-defined function f using the steepest descent (gradient) method. An initial guess is required in the form of a multidimensional point, coded as an array of coordinates x. The algorithm executes until either the desired accuracy eps is achieved or the maximum number of iterations maxit is exceeded.
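For example, a call that overrides the default tolerance and iteration limit might look as follows (a minimal sketch; the objective function bowl, its minimum location and the chosen values of eps and maxit are purely illustrative):

#include <codecogs/maths/optimization/gradient.h>
#include <vector>

// illustrative objective: a quadratic bowl with its minimum at (3, -2)
double bowl(const std::vector<double> &x) {
    return (x[0] - 3) * (x[0] - 3) + (x[1] + 2) * (x[1] + 2);
}

int main() {
    std::vector<double> start(2, 0.0);   // initial guess (0, 0)
    // xk is left at its default of 1; eps and maxit are passed explicitly
    std::vector<double> xmin = Maths::Optimization::gradient(bowl, start, 1, 1E-8, 500);
    return 0;
}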

The gradient method performs the minimization of a function \inline  f(\vec{x}) of n variables \inline  \vec{x} = (x_1, x_2, \ldots , x_n) whose partial derivatives are accessible. Let us first define the gradient of a function: let \inline  z = f(\vec{x}) be a function of \inline  \vec{x} such that \inline  \frac {\partial f(\vec{x})} {\partial x_k} exists for \inline  k = 1, 2, \ldots, n. The gradient of \inline  f(\vec{x}), denoted by \inline  \nabla f(\vec{x}), is the vector

\nabla f(\vec{x}) = \left( \frac{\partial f(\vec{x})}{\partial x_1}, \frac{\partial f(\vec{x})}{\partial x_2}, \ldots, \frac{\partial f(\vec{x})}{\partial x_n} \right)
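Since only the function f itself is passed to the routine, these partial derivatives have to be approximated numerically in practice; the sketch below uses central differences for this purpose (the helper name numericalGradient and the step size h are assumptions made for the example, not part of the library interface).

#include <vector>
#include <cstddef>

// approximate the gradient of f at x by central differences (illustrative only)
std::vector<double> numericalGradient(double (*f)(const std::vector<double> &),
                                      std::vector<double> x, double h = 1E-6)
{
    std::vector<double> grad(x.size());
    for (std::size_t k = 0; k < x.size(); k++) {
        double saved = x[k];
        x[k] = saved + h;
        double fplus = f(x);                   // f(..., x_k + h, ...)
        x[k] = saved - h;
        double fminus = f(x);                  // f(..., x_k - h, ...)
        x[k] = saved;                          // restore the k-th coordinate
        grad[k] = (fplus - fminus) / (2 * h);  // central difference quotient
    }
    return grad;
}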

Recall that the gradient vector in the above equation points locally in the direction of the greatest rate of increase of \inline  f(\vec{x}). Hence \inline  - \nabla f(\vec{x}) points locally in the direction of greatest decrease of \inline  f(\vec{x}). Start at the point \inline  \vec{P_0} and search along the line through \inline  \vec{P_0} in the direction \inline  \vec{S_0} = - \nabla f(\vec{P_0}) / ||- \nabla f(\vec{P_0}) ||. You will arrive at a point \inline  \vec{P_1}, where a local minimum occurs when the point \inline  \vec{x} is constrained to lie on the line \inline  \vec{x} = \vec{P_0} + \alpha \vec{S_0}. Since partial derivatives are accessible, the minimization process can be executed using either the quadratic or cubic approximation method. Next we compute \inline  - \nabla f(\vec{P_1}) and move in the search direction \inline  \vec{S_1} = - \nabla f(\vec{P_1}) / || - \nabla f(\vec{P_1}) ||. You will come to \inline  \vec{P_2}, where a local minimum occurs when \inline  \vec{x} is constrained to lie on the line \inline  \vec{x} = \vec{P_1} + \alpha \vec{S_1}. Iteration will produce a sequence, \inline  \left( \vec{P_k} \right)_{k \geq 0}, of points with the property

f(\vec{P_0}) \geq f(\vec{P_1}) \geq f(\vec{P_2}) \geq \cdots \geq f(\vec{P_k}) \geq f(\vec{P_{k+1}}) \geq \cdots

If \inline  \lim_{k \to \infty} \vec{P_k} = \vec{P}, then \inline  f(\vec{P}) will be a local minimum of \inline  f(\vec{x}).

The outline of the steepest descent (gradient) method is described below (suppose that \inline  \vec{P_k} has been obtained); a code sketch of these steps is given after the list.

1) Evaluate the gradient vector \inline  \nabla f(\vec{P_k})

2) Compute the search direction \inline  \vec{S_k} = - \nabla f(\vec{P_k}) / || - \nabla f(\vec{P_k}) ||

3) Perform a single parameter minimization of \inline  \Phi(\alpha) = f(\vec{P_k} + \alpha \vec{S_k}) on the interval \inline  [0, b], where b is large. This will produce a value \inline  \alpha = h_{min} where a local minimum is found for \inline  \Phi(\alpha). The relation \inline  \Phi(h_{min}) = f(\vec{P_k} + h_{min} \vec{S_k}) shows that this is a minimum for \inline  f(\vec{x}) along the search line \inline  \vec{x} = \vec{P_k} + \alpha \vec{S_k}.

4) Construct the next point \inline  \vec{P_{k + 1}} = \vec{P_k} + h_{min} \vec{S_k}.

5) Perform the termination test for minimization, i.e. are the function values \inline  f(\vec{P_k}) and \inline  f(\vec{P_{k + 1}}) sufficiently close and the distance \inline  || \vec{P_{k + 1}} - \vec{P_k} || small enough?

6) Repeat the process from step 1.
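As a concrete illustration of steps 1 to 6, the sketch below assembles a bare-bones steepest descent loop. It is not the library's actual implementation: the forward-difference gradient, the golden-section line search on the interval [0, b] with b = 1, the tolerances, and the helper names steepestDescent and phi are all assumptions made for the example.

#include <vector>
#include <cmath>
#include <cstddef>

// Phi(alpha) = f(P + alpha * S), the single-parameter function of step 3
static double phi(double (*f)(const std::vector<double> &),
                  const std::vector<double> &p,
                  const std::vector<double> &s, double alpha)
{
    std::vector<double> q(p.size());
    for (std::size_t i = 0; i < p.size(); i++)
        q[i] = p[i] + alpha * s[i];
    return f(q);
}

// bare-bones steepest descent following steps 1-6 above (illustrative only)
std::vector<double> steepestDescent(double (*f)(const std::vector<double> &),
                                    std::vector<double> p,
                                    double eps = 1E-10, int maxit = 1000)
{
    const double b = 1.0;                  // upper end of the line-search interval
    const double r = 0.6180339887498949;   // golden-section ratio
    for (int it = 0; it < maxit; it++) {
        // 1) approximate the gradient at P_k by forward differences
        std::vector<double> g(p.size());
        double fp = f(p), norm = 0, h = 1E-7;
        for (std::size_t i = 0; i < p.size(); i++) {
            std::vector<double> q = p;
            q[i] += h;
            g[i] = (f(q) - fp) / h;
            norm += g[i] * g[i];
        }
        norm = std::sqrt(norm);
        if (norm < eps) break;             // gradient nearly zero: stationary point
        // 2) search direction S_k = -grad f(P_k) / ||grad f(P_k)||
        std::vector<double> s(p.size());
        for (std::size_t i = 0; i < p.size(); i++)
            s[i] = -g[i] / norm;
        // 3) golden-section minimization of Phi(alpha) on [0, b]
        double lo = 0, hi = b;
        while (hi - lo > eps) {
            double a1 = hi - r * (hi - lo), a2 = lo + r * (hi - lo);
            if (phi(f, p, s, a1) < phi(f, p, s, a2)) hi = a2; else lo = a1;
        }
        double hmin = 0.5 * (lo + hi);
        // 4) next point P_{k+1} = P_k + hmin * S_k
        std::vector<double> pnext(p.size());
        for (std::size_t i = 0; i < p.size(); i++)
            pnext[i] = p[i] + hmin * s[i];
        // 5) termination test: function values and points sufficiently close?
        bool done = std::fabs(f(pnext) - fp) < eps && hmin < eps;
        p = pnext;                         // 6) otherwise repeat from step 1
        if (done) break;
    }
    return p;
}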


Example 1

#include <codecogs/maths/optimization/gradient.h>
 
#include <iostream>
#include <iomanip>
#include <cmath>
 
// user-defined objective function passed to the gradient routine
double f(const std::vector<double> &x) {
    return std::sin(x[0]) + 2 * std::cos(x[1]) - std::sin(x[2]);
}
 
int main() {
    // initial point
    double x[3] = { 1, 1, 1 };
 
    std::vector<double> v(x, x + 3), result = Maths::Optimization::gradient(f, v);
 
    std::cout << "The results are: " << std::endl << std::endl;
    for (int i = 0; i < result.size(); i++) {
        std::cout << "     x(" << i << ") = ";
        std::cout << std::setprecision(10) << result[i] << std::endl;
    }
 
    std::cout << std::endl << "Local maximum found = " << f(result) << std::endl;
    return 0;
}
Output
The results are:
     x(0) = 1.570795457
     x(1) = 6.423712373e-006
     x(2) = 4.712391906
 
Local maximum found = 4

Parameters

f        the user-defined multidimensional function
x        the initial multidimensional point, coded as a C++ vector object
xk       Default value = 1
eps      the desired accuracy; Default value = 1E-10
maxit    the maximum number of iterations; Default value = 1000

Returns

The approximated multidimensional point corresponding to the minimum of the function.

Authors

Lucian Bentea (August 2005)
Source Code

This module is private, for owner's use only.
