### Optimization scikit: a gradient-based optimization

Last time, I showed a simple example of gradient-free optimization. Now, I'd like to use the gradient of my function (an analytical gradient I computed by hand) to reach the global minimum in fewer iterations.

#### Setting up the optimizer and the cost function

```
import numpy

class Function(object):
    def __call__(self, x):
        t = numpy.sqrt(numpy.sum(x**2, axis=0))
        return -numpy.sinc(t)

    def gradient(self, x):
        t = numpy.sqrt(numpy.sum(x**2, axis=0))
        return x / t**2 * (-numpy.cos(numpy.pi*t) + numpy.sinc(t))
```
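
For reference, this is the derivation behind the `gradient` method, using NumPy's normalized sinc, sinc(t) = sin(πt)/(πt), with t = ‖x‖₂:

```latex
f(x) = -\operatorname{sinc}(t), \qquad t = \lVert x \rVert_2
\frac{d}{dt}\operatorname{sinc}(t) = \frac{\cos(\pi t) - \operatorname{sinc}(t)}{t}
\nabla f(x) = -\frac{d}{dt}\operatorname{sinc}(t) \cdot \frac{x}{t}
            = \frac{x}{t^2}\bigl(\operatorname{sinc}(t) - \cos(\pi t)\bigr)
```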

This is how the scikit works: every cost function is an instance of this kind of class. The `__call__` method returns the actual cost, and additional methods provide the gradient or the Hessian. Here, only the gradient is required.
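
To make the protocol concrete, here is a small sketch of my own (not part of the scikit) that evaluates the cost and verifies the analytical gradient against a central finite difference:

```
# My own sanity check, not a scikit feature: exercise the Function protocol
# and compare the analytical gradient with central finite differences.
f = Function()
x = numpy.array((.8, 1.))

print(f(x))           # the cost at x
print(f.gradient(x))  # the analytical gradient at x

eps = 1e-6
for i in range(len(x)):
    dx = numpy.zeros_like(x)
    dx[i] = eps
    fd = (f(x + dx) - f(x - dx)) / (2 * eps)
    print(fd, f.gradient(x)[i])  # both values should agree closely
```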

Now, I can build the components the optimizer needs:

```
from scikits.optimization import *

fun = Function()
# GradientStep is the plain gradient-descent step, which uses fun.gradient()
mystep = step.GradientStep()
mylinesearch = line_search.GoldenSectionSearch(min_alpha_step = 0.0001, alpha_step = .5)
mycriterion = criterion.criterion(ftol = 0.0000001, iterations_max = 10)
```

As I've said before, the scikit is built around a separation principle, so none of the objects I've just created holds any state. Now, I can create the glue that will steer the optimization:

```
myoptimizer = optimizer.StandardOptimizer(function = fun,
                                          step = mystep,
                                          line_search = mylinesearch,
                                          criterion = mycriterion,
                                          x0 = numpy.array((.8, 1.)))
```

As with the polytope optimizer, I can start the optimization by calling the `optimize()` method.
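
For instance, assuming (as with the polytope example) that `optimize()` returns the best point found:

```
xopt = myoptimizer.optimize()
print(xopt)  # expected to approach the global minimum at the origin
```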

#### Conclusion

This is the evolution of the cost function during the optimization:
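
Such a trace can be reproduced without relying on any scikit internals: since the components are stateless, the cost function can simply be wrapped so that every evaluation is recorded (a sketch of mine; note that line-search evaluations end up in the history too):

```
import matplotlib.pyplot as plt

class RecordingFunction(Function):
    """A Function that stores the cost of every evaluation."""
    def __init__(self):
        self.history = []

    def __call__(self, x):
        cost = Function.__call__(self, x)
        self.history.append(cost)
        return cost

recorded = RecordingFunction()
traced_optimizer = optimizer.StandardOptimizer(function = recorded,
                                               step = mystep,
                                               line_search = mylinesearch,
                                               criterion = mycriterion,
                                               x0 = numpy.array((.8, 1.)))
traced_optimizer.optimize()

plt.plot(recorded.history)
plt.xlabel("cost function evaluation")
plt.ylabel("cost")
plt.show()
```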