The superclass for learning algorithms that treat the problem as a black box. At each step they modify the policy and obtain a fitness value by invoking the FitnessEvaluator (provided as the first argument upon initialization).
Evaluable objects can be lists or arrays of continuous values (possibly wrapped in a ParameterContainer) or subclasses of Evolvable (which implement its methods).
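As a minimal usage sketch, assuming the standard optimizer interface of passing the evaluator and an initial evaluable to the constructor and then calling learn(); the fitness function and values here are illustrative::

    from scipy import array
    from pybrain.optimization import HillClimber

    # Fitness to maximize: negated sphere function, optimum at the origin.
    def fitness(params):
        return -sum(params ** 2)

    x0 = array([2.1, -1.5, 0.7])  # initial evaluable: an array of continuous values
    optimizer = HillClimber(fitness, x0, maxEvaluations=200)
    best_params, best_fitness = optimizer.learn()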
Bases: pybrain.optimization.optimizer.BlackBoxOptimizer
A more restricted class of black-box optimization algorithms that require the parameters to be an array of continuous values (which can be wrapped in a ParameterContainer).
Stochastic hill-climbing always accepts a move to a better point, but may also move to a worse point, with a probability that decreases with the size of the fitness drop (and depends on a temperature parameter).
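A minimal sketch of such an acceptance rule, assuming maximization and a Metropolis-style criterion; the helper name and exact decay are illustrative, not this class's actual code::

    from math import exp
    import random

    def accept_move(old_fitness, new_fitness, temperature):
        """Metropolis-style acceptance rule (illustrative sketch).

        Better points are always accepted; worse points are accepted with
        a probability that shrinks exponentially with the fitness drop,
        scaled by the temperature.
        """
        if new_fitness >= old_fitness:
            return True
        drop = old_fitness - new_fitness
        return random.random() < exp(-drop / temperature)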
A new version of NES that uses the exact Fisher Information Matrix instead of an approximation, along with a number of other improvements (GECCO 2009).
Basic finite difference method.
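The underlying idea can be sketched with the textbook central-difference gradient estimator (a simplified illustration, not this class's actual code)::

    import numpy as np

    def fd_gradient(fitness, params, epsilon=1e-4):
        """Estimate the fitness gradient by central finite differences.

        One forward and one backward evaluation per dimension; epsilon
        controls the perturbation size.
        """
        grad = np.zeros_like(params)
        for i in range(len(params)):
            step = np.zeros_like(params)
            step[i] = epsilon
            grad[i] = (fitness(params + step) - fitness(params - step)) / (2 * epsilon)
        return grad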
Bases: pybrain.optimization.finitedifference.fd.FiniteDifferences
Policy Gradients with Parameter-based Exploration (ICANN 2008).
Bases: pybrain.optimization.finitedifference.fd.FiniteDifferences
Simultaneous Perturbation Stochastic Approximation.
This class follows the general SPSA scheme, but uses the likelihood gradient and a simpler exploration decay.
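A single SPSA update can be sketched as follows; the step-size and perturbation constants a and c are illustrative placeholders, not this class's attributes::

    import numpy as np

    def spsa_step(fitness, params, a=0.1, c=0.1, rng=np.random):
        """One SPSA update (illustrative sketch, maximization).

        A single random +/-1 (Bernoulli) perturbation of *all* parameters
        yields a gradient estimate from just two fitness evaluations,
        regardless of dimensionality.
        """
        delta = rng.choice([-1.0, 1.0], size=params.shape)  # simultaneous perturbation
        g_hat = (fitness(params + c * delta)
                 - fitness(params - c * delta)) / (2 * c) * (1.0 / delta)
        return params + a * g_hat  # ascend the estimated gradient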
Particle Swarm Optimization.
size determines the number of particles.
boundaries should be a list of (min, max) pairs, one per dimension of the vector to be optimized (default: +-10 in every dimension). Particles are initialized with positions drawn uniformly from those intervals.
memory indicates how much the velocity of a particle is affected by its own previous best position.
sociality indicates how much the velocity of a particle is affected by its neighbours' best position.
inertia is a damping factor on the previous velocity (see the velocity-update sketch below).
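How the three parameters combine in the velocity update can be sketched as follows; the coefficient defaults are illustrative, not this class's defaults::

    import numpy as np

    def update_particle(pos, vel, personal_best, neighbour_best,
                        inertia=0.7, memory=1.5, sociality=1.5, rng=np.random):
        """One PSO velocity/position update (illustrative sketch).

        inertia damps the old velocity; memory pulls the particle towards
        its own best position; sociality pulls it towards the best position
        found in its neighbourhood. The random factors keep the search
        stochastic.
        """
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = (inertia * vel
               + memory * r1 * (personal_best - pos)
               + sociality * r2 * (neighbour_best - pos))
        return pos + vel, vel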
Standard Genetic Algorithm.
Select some of the individuals of the population, taking their fitnesses into account.
Returns: list of selected parents
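As an illustration of one common selection scheme, tournament selection can be sketched as follows (not necessarily the scheme this class uses by default)::

    import random

    def select_parents(population, fitnesses, n_parents, tournament_size=3):
        """Tournament selection (illustrative sketch).

        Repeatedly draws a small random tournament and keeps its fittest
        member, so fitter individuals are selected more often while weaker
        ones retain some chance of reproducing.
        """
        parents = []
        for _ in range(n_parents):
            contenders = random.sample(range(len(population)), tournament_size)
            winner = max(contenders, key=lambda i: fitnesses[i])
            parents.append(population[winner])
        return parents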